Columns: title (string, 4-168 chars), content (string, 7 chars-1.74M), commands (string sequence, 1-5.62k items), url (string, 79-342 chars)
Chapter 4. sVirt
Chapter 4. sVirt 4.1. Introduction Since virtual machines under KVM are implemented as Linux processes, KVM leverages the standard Linux security model to provide isolation and resource controls. The Linux kernel includes SELinux (Security-Enhanced Linux), a project developed by the US National Security Agency to add mandatory access control (MAC), multi-level security (MLS) and multi-category security (MCS) through a flexible and customizable security policy. SELinux provides strict resource isolation and confinement for processes running on top of the Linux kernel, including virtual machine processes. The sVirt project builds upon SELinux to further facilitate virtual machine isolation and controlled sharing. For example, fine-grained permissions can be applied to group virtual machines together to share resources. From a security point of view, the hypervisor is a tempting target for attackers, as a compromised hypervisor could lead to the compromise of all virtual machines running on the host system. Integrating SELinux into virtualization technologies helps improve hypervisor security against malicious virtual machines trying to gain access to the host system or other virtual machines. The following image shows isolated guests, illustrating how SELinux limits the ability of a compromised hypervisor (or guest) to launch further attacks or to spread to another instance: Figure 4.1. Attack path isolated by SELinux Note For more information on SELinux, refer to Red Hat Enterprise Linux Security-Enhanced Linux .
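On a running host, the sVirt isolation described above is visible as SELinux labels: each qemu-kvm process runs with the svirt_t type and a unique MCS category pair, and its disk images carry the matching svirt_image_t label. The listing below is an illustrative sketch only; the process IDs, image path, and category pairs (such as c87,c520) are placeholders, because sVirt assigns categories randomly to each guest at start-up.
# ps -eZ | grep qemu-kvm
system_u:system_r:svirt_t:s0:c87,c520    4642 ?  00:00:41 qemu-kvm
system_u:system_r:svirt_t:s0:c63,c274    4725 ?  00:00:38 qemu-kvm
# ls -Z /var/lib/libvirt/images/guest1.img
system_u:object_r:svirt_image_t:s0:c87,c520 /var/lib/libvirt/images/guest1.img
Because the category pair on each image matches only its own guest process, SELinux denies a compromised guest access to another guest's disk even where discretionary file permissions would allow it.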
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/chap-virtualization_security_guide-svirt
Chapter 6. File Integrity Operator
Chapter 6. File Integrity Operator 6.1. File Integrity Operator overview The File Integrity Operator continually runs file integrity checks on the cluster nodes. It deploys a DaemonSet that initializes and runs privileged Advanced Intrusion Detection Environment (AIDE) containers on each node, providing a log of files that have been modified since the initial run of the DaemonSet pods. For the latest updates, see the File Integrity Operator release notes . Installing the File Integrity Operator Updating the File Integrity Operator Understanding the File Integrity Operator Configuring the Custom File Integrity Operator Performing advanced Custom File Integrity Operator tasks Troubleshooting the File Integrity Operator 6.2. File Integrity Operator release notes The File Integrity Operator for OpenShift Container Platform continually runs file integrity checks on RHCOS nodes. These release notes track the development of the File Integrity Operator in OpenShift Container Platform. For an overview of the File Integrity Operator, see Understanding the File Integrity Operator . To access the latest release, see Updating the File Integrity Operator . 6.2.1. OpenShift File Integrity Operator 1.3.5 The following advisory is available for the OpenShift File Integrity Operator 1.3.5: RHBA-2024:10366 OpenShift File Integrity Operator Update This update includes upgraded dependencies in underlying base images. 6.2.2. OpenShift File Integrity Operator 1.3.4 The following advisory is available for the OpenShift File Integrity Operator 1.3.4: RHBA-2024:2946 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.2.1. Bug fixes Previously, the File Integrity Operator would issue a NodeHasIntegrityFailure alert whenever the multus certificate was rotated. With this release, routine multus certificate rotation no longer triggers the alert or a failing node status. ( OCPBUGS-31257 ) 6.2.3. OpenShift File Integrity Operator 1.3.3 The following advisory is available for the OpenShift File Integrity Operator 1.3.3: RHBA-2023:5652 OpenShift File Integrity Operator Bug Fix and Enhancement Update This update addresses a CVE in an underlying dependency. 6.2.3.1. New features and enhancements You can install and use the File Integrity Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 6.2.3.2. Bug fixes Previously, some FIO pods with private default mount propagation in combination with hostPath: path: / volume mounts would break the CSI driver relying on multipath. This problem has been fixed and the CSI driver works correctly. ( Some OpenShift Operator pods blocking unmounting of CSI volumes when multipath is in use ) This update resolves CVE-2023-39325. ( CVE-2023-39325 ) 6.2.4. OpenShift File Integrity Operator 1.3.2 The following advisory is available for the OpenShift File Integrity Operator 1.3.2: RHBA-2023:5107 OpenShift File Integrity Operator Bug Fix Update This update addresses a CVE in an underlying dependency. 6.2.5. OpenShift File Integrity Operator 1.3.1 The following advisory is available for the OpenShift File Integrity Operator 1.3.1: RHBA-2023:3600 OpenShift File Integrity Operator Bug Fix Update 6.2.5.1. 
New features and enhancements FIO now includes kubelet certificates as default files, so they no longer issue warnings when they are managed by OpenShift Container Platform. ( OCPBUGS-14348 ) FIO now correctly directs email to the address for Red Hat Technical Support. ( OCPBUGS-5023 ) 6.2.5.2. Bug fixes Previously, FIO would not clean up FileIntegrityNodeStatus objects when nodes were removed from the cluster. FIO has been updated to correctly clean up node status objects on node removal. ( OCPBUGS-4321 ) Previously, FIO would also erroneously indicate that new nodes failed integrity checks. FIO has been updated to correctly show node status objects when adding new nodes to the cluster. This provides correct node status notifications. ( OCPBUGS-8502 ) Previously, when FIO was reconciling FileIntegrity CRs, it would pause scanning until the reconciliation was done. This caused an overly aggressive re-initialization process on nodes not impacted by the reconciliation. This problem also resulted in unnecessary daemon sets for machine config pools that are unrelated to the FileIntegrity being changed. FIO correctly handles these cases and only pauses AIDE scanning for nodes that are affected by file integrity changes. ( CMP-1097 ) 6.2.5.3. Known Issues In FIO 1.3.1, increasing nodes in IBM Z(R) clusters might result in Failed File Integrity node status. For more information, see Adding nodes in IBM Power(R) clusters can result in failed File Integrity node status . 6.2.6. OpenShift File Integrity Operator 1.2.1 The following advisory is available for the OpenShift File Integrity Operator 1.2.1: RHBA-2023:1684 OpenShift File Integrity Operator Bug Fix Update This release includes updated container dependencies. 6.2.7. OpenShift File Integrity Operator 1.2.0 The following advisory is available for the OpenShift File Integrity Operator 1.2.0: RHBA-2023:1273 OpenShift File Integrity Operator Enhancement Update 6.2.7.1. New features and enhancements The File Integrity Operator Custom Resource (CR) now contains an initialDelay feature that specifies the number of seconds to wait before starting the first AIDE integrity check. For more information, see Creating the FileIntegrity custom resource . The File Integrity Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the File Integrity Operator . 6.2.8. OpenShift File Integrity Operator 1.0.0 The following advisory is available for the OpenShift File Integrity Operator 1.0.0: RHBA-2023:0037 OpenShift File Integrity Operator Bug Fix Update 6.2.9. OpenShift File Integrity Operator 0.1.32 The following advisory is available for the OpenShift File Integrity Operator 0.1.32: RHBA-2022:7095 OpenShift File Integrity Operator Bug Fix Update 6.2.9.1. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand from which namespace the alert originated. Now, the Operator sets the appropriate namespace, providing more information about the alert. ( BZ#2112394 ) Previously, the File Integrity Operator did not update the metrics service on Operator startup, causing the metrics targets to be unreachable. With this release, the File Integrity Operator now ensures the metrics service is updated on Operator startup. ( BZ#2115821 ) 6.2.10. 
OpenShift File Integrity Operator 0.1.30 The following advisory is available for the OpenShift File Integrity Operator 0.1.30: RHBA-2022:5538 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.10.1. New features and enhancements The File Integrity Operator is now supported on the following architectures: IBM Power(R) IBM Z(R) and IBM(R) LinuxONE 6.2.10.2. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand where the alert originated. Now, the Operator sets the appropriate namespace, increasing understanding of the alert. ( BZ#2101393 ) 6.2.11. OpenShift File Integrity Operator 0.1.24 The following advisory is available for the OpenShift File Integrity Operator 0.1.24: RHBA-2022:1331 OpenShift File Integrity Operator Bug Fix 6.2.11.1. New features and enhancements You can now configure the maximum number of backups stored in the FileIntegrity Custom Resource (CR) with the config.maxBackups attribute. This attribute specifies the number of AIDE database and log backups left over from the re-init process to keep on the node. Older backups beyond the configured number are automatically pruned. The default is set to five backups. 6.2.11.2. Bug fixes Previously, upgrading the Operator from versions older than 0.1.21 to 0.1.22 could cause the re-init feature to fail. This was a result of the Operator failing to update configMap resource labels. Now, upgrading to the latest version fixes the resource labels. ( BZ#2049206 ) Previously, when enforcing the default configMap script contents, the wrong data keys were compared. This resulted in the aide-reinit script not being updated properly after an Operator upgrade, and caused the re-init process to fail. Now, daemonSets run to completion and the AIDE database re-init process executes successfully. ( BZ#2072058 ) 6.2.12. OpenShift File Integrity Operator 0.1.22 The following advisory is available for the OpenShift File Integrity Operator 0.1.22: RHBA-2022:0142 OpenShift File Integrity Operator Bug Fix 6.2.12.1. Bug fixes Previously, a system with a File Integrity Operator installed might interrupt the OpenShift Container Platform update, due to the /etc/kubernetes/aide.reinit file. This occurred if the /etc/kubernetes/aide.reinit file was present, but later removed prior to the ostree validation. With this update, /etc/kubernetes/aide.reinit is moved to the /run directory so that it does not conflict with the OpenShift Container Platform update. ( BZ#2033311 ) 6.2.13. OpenShift File Integrity Operator 0.1.21 The following advisory is available for the OpenShift File Integrity Operator 0.1.21: RHBA-2021:4631 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.13.1. New features and enhancements The metrics related to FileIntegrity scan results and processing metrics are displayed on the monitoring dashboard on the web console. The results are labeled with the prefix of file_integrity_operator_ . If a node has an integrity failure for more than 1 second, the default PrometheusRule provided in the operator namespace alerts with a warning. 
The following dynamic Machine Config Operator and Cluster Version Operator related filepaths are excluded from the default AIDE policy to help prevent false positives during node updates: /etc/machine-config-daemon/currentconfig /etc/pki/ca-trust/extracted/java/cacerts /etc/cvo/updatepayloads /root/.kube The AIDE daemon process has stability improvements over v0.1.16, and is more resilient to errors that might occur when the AIDE database is initialized. 6.2.13.2. Bug fixes Previously, when the Operator automatically upgraded, outdated daemon sets were not removed. With this release, outdated daemon sets are removed during the automatic upgrade. 6.2.14. Additional resources Understanding the File Integrity Operator 6.3. File Integrity Operator support 6.3.1. File Integrity Operator lifecycle The File Integrity Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously from OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 6.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 6.4. Installing the File Integrity Operator 6.4.1. Installing the File Integrity Operator using the web console Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators → OperatorHub . Search for the File Integrity Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-file-integrity namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators → Installed Operators page. Check that the Operator is installed in the openshift-file-integrity namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads → Pods page and check the logs in any pods in the openshift-file-integrity project that are reporting issues. 6.4.2. Installing the File Integrity Operator using the CLI Prerequisites You must have admin privileges. Procedure Create a Namespace object YAML file and apply it by running: USD oc create -f <file-name>.yaml Example Namespace object apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity 1 In OpenShift Container Platform 4.14, the pod security label must be set to privileged at the namespace level. 
Create the OperatorGroup object YAML file and apply it: USD oc create -f <file-name>.yaml Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity Create the Subscription object YAML file and apply it: USD oc create -f <file-name>.yaml Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: "stable" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace Verification Verify the installation succeeded by inspecting the ClusterServiceVersion (CSV) resources: USD oc get csv -n openshift-file-integrity Verify that the File Integrity Operator is up and running: USD oc get deploy -n openshift-file-integrity 6.4.3. Additional resources The File Integrity Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 6.5. Updating the File Integrity Operator As a cluster administrator, you can update the File Integrity Operator on your OpenShift Container Platform cluster. 6.5.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 6.5.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the web console, navigate to Operators → Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Update channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 
For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 6.5.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 6.6. Understanding the File Integrity Operator The File Integrity Operator is an OpenShift Container Platform Operator that continually runs file integrity checks on the cluster nodes. It deploys a daemon set that initializes and runs privileged Advanced Intrusion Detection Environment (AIDE) containers on each node, providing a status object with a log of files that are modified during the initial run of the daemon set pods. Important Currently, only Red Hat Enterprise Linux CoreOS (RHCOS) nodes are supported. 6.6.1. Creating the FileIntegrity custom resource An instance of a FileIntegrity custom resource (CR) represents a set of continuous file integrity scans for one or more nodes. Each FileIntegrity CR is backed by a daemon set running AIDE on the nodes matching the FileIntegrity CR specification. Procedure Create the following example FileIntegrity CR named worker-fileintegrity.yaml to enable scans on worker nodes: Example FileIntegrity CR apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: "" tolerations: 2 - key: "myNode" operator: "Exists" effect: "NoSchedule" config: 3 name: "myconfig" namespace: "openshift-file-integrity" key: "config" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7 1 Defines the selector for scheduling node scans. 2 Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration that allows the AIDE pods to run on control plane and infra nodes is applied. 3 Define a ConfigMap containing an AIDE configuration to use. 4 The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node might be resource intensive, so it can be useful to specify a longer interval. Default is 900 seconds (15 minutes). 5 The maximum number of AIDE database and log backups (leftover from the re-init process) to keep on a node. Older backups beyond this number are automatically pruned by the daemon. Default is set to 5. 6 The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. 7 The running status of the FileIntegrity instance. Statuses are Initializing , Pending , or Active . 
Initializing The FileIntegrity object is currently initializing or re-initializing the AIDE database. Pending The FileIntegrity deployment is still being created. Active The scans are active and ongoing. Apply the YAML file to the openshift-file-integrity namespace: USD oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity Verification Confirm the FileIntegrity object was created successfully by running the following command: USD oc get fileintegrities -n openshift-file-integrity Example output NAME AGE worker-fileintegrity 14s 6.6.2. Checking the FileIntegrity custom resource status The FileIntegrity custom resource (CR) reports its status through the . status.phase subresource. Procedure To query the FileIntegrity CR status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status.phase }" Example output Active 6.6.3. FileIntegrity custom resource phases Pending - The phase after the custom resource (CR) is created. Active - The phase when the backing daemon set is up and running. Initializing - The phase when the AIDE database is being reinitialized. 6.6.4. Understanding the FileIntegrityNodeStatuses object The scan results of the FileIntegrity CR are reported in another object called FileIntegrityNodeStatuses . USD oc get fileintegritynodestatuses Example output NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s Note It might take some time for the FileIntegrityNodeStatus object results to be available. There is one result object per node. The nodeName attribute of each FileIntegrityNodeStatus object corresponds to the node being scanned. The status of the file integrity scan is represented in the results array, which holds scan conditions. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq The fileintegritynodestatus object reports the latest status of an AIDE run and exposes the status as Failed , Succeeded , or Errored in a status field. USD oc get fileintegritynodestatuses -w Example output NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded 6.6.5. 
FileIntegrityNodeStatus CR status types These conditions are reported in the results array of the corresponding FileIntegrityNodeStatus CR status: Succeeded - The integrity check passed; the files and directories covered by the AIDE check have not been modified since the database was last initialized. Failed - The integrity check failed; some files or directories covered by the AIDE check have been modified since the database was last initialized. Errored - The AIDE scanner encountered an internal error. 6.6.5.1. FileIntegrityNodeStatus CR success example Example output of a condition with a success status [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:57Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:46:03Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:48Z" } ] In this case, all three scans succeeded and so far there are no other conditions. 6.6.5.2. FileIntegrityNodeStatus CR failure status example To simulate a failure condition, modify one of the files AIDE tracks. For example, modify /etc/resolv.conf on one of the worker nodes: USD oc debug node/ip-10-0-130-192.ec2.internal Example output Creating debug namespace/openshift-debug-node-ldfbj ... Starting pod/ip-10-0-130-192ec2internal-debug ... To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo "# integrity test" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod ... Removing debug namespace/openshift-debug-node-ldfbj ... After some time, the Failed condition is reported in the results array of the corresponding FileIntegrityNodeStatus object. The Succeeded condition is retained, which allows you to pinpoint the time the check failed. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r Alternatively, if you are not mentioning the object name, run: USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq Example output [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:54:14Z" }, { "condition": "Failed", "filesChanged": 1, "lastProbeTime": "2020-09-15T12:57:20Z", "resultConfigMapName": "aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "resultConfigMapNamespace": "openshift-file-integrity" } ] The Failed condition points to a config map that gives more details about what exactly failed and why: USD oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Example output Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! 
Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none> Due to the config map data size limit, AIDE logs over 1 MB are added to the failure config map as a base64-encoded gzip archive. Use the following command to extract the log: USD oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip Note Compressed logs are indicated by the presence of a file-integrity.openshift.io/compressed annotation key in the config map. 6.6.6. Understanding events Transitions in the status of the FileIntegrity and FileIntegrityNodeStatus objects are logged by events . The creation time of the event reflects the latest transition, such as Initializing to Active , and not necessarily the latest scan result. However, the newest event always reflects the most recent status. USD oc get events --field-selector reason=FileIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active When a node scan fails, an event is created with the add/changed/removed and config map information. USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed Changes to the number of added, changed, or removed files results in a new event, even if the status of the node has not transitioned. 
USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 6.7. Configuring the Custom File Integrity Operator 6.7.1. Viewing FileIntegrity object attributes As with any Kubernetes custom resource (CR), you can run oc explain fileintegrity , and then look at the individual attributes using: USD oc explain fileintegrity.spec USD oc explain fileintegrity.spec.config 6.7.2. Important attributes Table 6.1. Important spec and spec.config attributes Attribute Description spec.nodeSelector A map of key-value pairs that must match the node's labels in order for the AIDE pods to be schedulable on that node. The typical use is to set only a single key-value pair where node-role.kubernetes.io/worker: "" schedules AIDE on all worker nodes, and node.openshift.io/os_id: "rhcos" schedules on all Red Hat Enterprise Linux CoreOS (RHCOS) nodes. spec.debug A boolean attribute. If set to true , the daemon running in the AIDE daemon set's pods outputs extra information. spec.tolerations Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration is applied, which allows the AIDE pods to run on control plane nodes. spec.config.gracePeriod The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node can be resource intensive, so it can be useful to specify a longer interval. Defaults to 900 , or 15 minutes. spec.config.maxBackups The maximum number of AIDE database and log backups leftover from the re-init process to keep on a node. Older backups beyond this number are automatically pruned by the daemon. spec.config.name Name of a configMap that contains custom AIDE configuration. If omitted, a default configuration is created. spec.config.namespace Namespace of a configMap that contains custom AIDE configuration. If unset, the FIO generates a default configuration suitable for RHCOS systems. spec.config.key Key that contains actual AIDE configuration in a config map specified by name and namespace . The default value is aide.conf . spec.config.initialDelay The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. This attribute is optional. 6.7.3. Examine the default configuration The default File Integrity Operator configuration is stored in a config map with the same name as the FileIntegrity CR. 
Procedure To examine the default config, run: USD oc describe cm/worker-fileintegrity 6.7.4. Understanding the default File Integrity Operator configuration Below is an excerpt from the aide.conf key of the config map: @@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\..* PERMS /hostroot/root/ CONTENT_EX The default configuration for a FileIntegrity instance provides coverage for files under the following directories: /root /boot /usr /etc The following directories are not covered: /var /opt Some OpenShift Container Platform-specific excludes under /etc/ 6.7.5. Supplying a custom AIDE configuration Any entries that configure AIDE internal behavior such as DBDIR , LOGDIR , database , and database_out are overwritten by the Operator. The Operator would add a prefix to /hostroot/ before all paths to be watched for integrity changes. This makes reusing existing AIDE configs that might often not be tailored for a containerized environment and start from the root directory easier. Note /hostroot is the directory where the pods running AIDE mount the host's file system. Changing the configuration triggers a reinitializing of the database. 6.7.6. Defining a custom File Integrity Operator configuration This example focuses on defining a custom configuration for a scanner that runs on the control plane nodes based on the default configuration provided for the worker-fileintegrity CR. This workflow might be useful if you are planning to deploy a custom software running as a daemon set and storing its data under /opt/mydaemon on the control plane nodes. Procedure Make a copy of the default configuration. Edit the default configuration with the files that must be watched or excluded. Store the edited contents in a new config map. Point the FileIntegrity object to the new config map through the attributes in spec.config . Extract the default configuration: USD oc extract cm/worker-fileintegrity --keys=aide.conf This creates a file named aide.conf that you can edit. To illustrate how the Operator post-processes the paths, this example adds an exclude directory without the prefix: USD vim aide.conf Example output /hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db Exclude a path specific to control plane nodes: !/opt/mydaemon/ Store the other content in /etc : /hostroot/etc/ CONTENT_EX Create a config map based on this file: USD oc create cm master-aide-conf --from-file=aide.conf Define a FileIntegrity CR manifest that references the config map: apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: "" config: name: master-aide-conf namespace: openshift-file-integrity The Operator processes the provided config map file and stores the result in a config map with the same name as the FileIntegrity object: USD oc describe cm/master-fileintegrity | grep /opt/mydaemon Example output !/hostroot/opt/mydaemon 6.7.7. 
Changing the custom File Integrity configuration To change the File Integrity configuration, never change the generated config map. Instead, change the config map that is linked to the FileIntegrity object through the spec.name , namespace , and key attributes. 6.8. Performing advanced Custom File Integrity Operator tasks 6.8.1. Reinitializing the database If the File Integrity Operator detects a change that was planned, it might be required to reinitialize the database. Procedure Annotate the FileIntegrity custom resource (CR) with file-integrity.openshift.io/re-init : USD oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init= The old database and log files are backed up and a new database is initialized. The old database and logs are retained on the nodes under /etc/kubernetes , as seen in the following output from a pod spawned using oc debug : Example output ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55 To provide some permanence of record, the resulting config maps are not owned by the FileIntegrity object, so manual cleanup is necessary. As a result, any integrity failures would still be visible in the FileIntegrityNodeStatus object. 6.8.2. Machine config integration In OpenShift Container Platform 4, the cluster node configuration is delivered through MachineConfig objects. You can assume that the changes to files that are caused by a MachineConfig object are expected and should not cause the file integrity scan to fail. To suppress changes to files caused by MachineConfig object updates, the File Integrity Operator watches the node objects; when a node is being updated, the AIDE scans are suspended for the duration of the update. When the update finishes, the database is reinitialized and the scans resume. This pause and resume logic only applies to updates through the MachineConfig API, as they are reflected in the node object annotations. 6.8.3. Exploring the daemon sets Each FileIntegrity object represents a scan on a number of nodes. The scan itself is performed by pods managed by a daemon set. To find the daemon set that represents a FileIntegrity object, run: USD oc -n openshift-file-integrity get ds/aide-worker-fileintegrity To list the pods in that daemon set, run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity To view logs of a single AIDE pod, call oc logs on one of the pods. USD oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6 Example output Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check ... The config maps created by the AIDE daemon are not retained and are deleted after the File Integrity Operator processes them. However, on failure and error, the contents of these config maps are copied to the config map that the FileIntegrityNodeStatus object points to. 6.9. Troubleshooting the File Integrity Operator 6.9.1. General troubleshooting Issue You want to generally troubleshoot issues with the File Integrity Operator. 
Resolution Enable the debug flag in the FileIntegrity object. The debug flag increases the verbosity of the daemons that run in the DaemonSet pods and run the AIDE checks. 6.9.2. Checking the AIDE configuration Issue You want to check the AIDE configuration. Resolution The AIDE configuration is stored in a config map with the same name as the FileIntegrity object. All AIDE configuration config maps are labeled with file-integrity.openshift.io/aide-conf . 6.9.3. Determining the FileIntegrity object's phase Issue You want to determine if the FileIntegrity object exists and see its current status. Resolution To see the FileIntegrity object's current status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status }" Once the FileIntegrity object and the backing daemon set are created, the status should switch to Active . If it does not, check the Operator pod logs. 6.9.4. Determining that the daemon set's pods are running on the expected nodes Issue You want to confirm that the daemon set exists and that its pods are running on the nodes you expect them to run on. Resolution Run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity Note Adding -owide includes the IP address of the node that the pod is running on. To check the logs of the daemon pods, run oc logs . Check the return value of the AIDE command to see if the check passed or failed.
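The individual troubleshooting checks above can be combined into a single quick triage pass. The following is a minimal sketch assuming the worker-fileintegrity object and the openshift-file-integrity namespace used throughout this chapter; it only chains commands that are already documented in this section, and <failure-cm-name> is a placeholder for the config map named in a failed node status:
# 1. Confirm the FileIntegrity object is Active
oc get fileintegrities/worker-fileintegrity -n openshift-file-integrity -o jsonpath='{ .status.phase }'
# 2. List per-node scan results and look for Failed or Errored entries
oc get fileintegritynodestatuses -n openshift-file-integrity
# 3. Confirm the AIDE pods are scheduled on the expected nodes
oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity -owide
# 4. For a failed node, inspect the failure config map referenced in its results array
oc describe cm <failure-cm-name> -n openshift-file-integrity
If the phase never reaches Active, check the Operator pod logs in the openshift-file-integrity namespace, as described in "Determining the FileIntegrity object's phase".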
[ "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-file-integrity", "oc get deploy -n openshift-file-integrity", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7", "oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity", "oc get fileintegrities -n openshift-file-integrity", "NAME AGE worker-fileintegrity 14s", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"", "Active", "oc get fileintegritynodestatuses", "NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "oc get fileintegritynodestatuses -w", "NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]", "oc debug node/ip-10-0-130-192.ec2.internal", "Creating debug namespace/openshift-debug-node-ldfbj 
Starting pod/ip-10-0-130-192ec2internal-debug To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj", "oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]", "oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>", "oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip", "oc get events --field-selector reason=FileIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! 
a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc explain fileintegrity.spec", "oc explain fileintegrity.spec.config", "oc describe cm/worker-fileintegrity", "@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX", "oc extract cm/worker-fileintegrity --keys=aide.conf", "vim aide.conf", "/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db", "!/opt/mydaemon/", "/hostroot/etc/ CONTENT_EX", "oc create cm master-aide-conf --from-file=aide.conf", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity", "oc describe cm/master-fileintegrity | grep /opt/mydaemon", "!/hostroot/opt/mydaemon", "oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=", "ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 
1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55", "oc -n openshift-file-integrity get ds/aide-worker-fileintegrity", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6", "Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/file-integrity-operator
3.3.6. Converting a virtual machine running Windows
3.3.6. Converting a virtual machine running Windows This example demonstrates converting a local (libvirt-managed) Xen virtual machine running Windows for output to Red Hat Enterprise Virtualization. Ensure that the virtual machine's XML is available locally, and that the storage referred to in the XML is available locally at the same paths. To convert the guest virtual machine from an XML file, run: virt-v2v -i libvirtxml -o rhev -osd storage.example.com:/exportdomain --network rhevm guest_name.xml Where guest_name.xml is the path to the virtual machine's exported XML, and storage.example.com:/exportdomain is the export storage domain. You may also use the --network parameter to connect to a locally managed network if your virtual machine only has a single network interface. If your virtual machine has multiple network interfaces, edit /etc/virt-v2v.conf to specify the network mapping for all interfaces. If your virtual machine uses a Xen paravirtualized kernel (it would be called something like kernel-xen or kernel-xenU ), virt-v2v will attempt to install a new kernel during the conversion process. You can avoid this requirement by installing a regular kernel, which does not reference a hypervisor in its name, alongside the Xen kernel prior to conversion. You should not make this newly installed kernel your default kernel, because Xen will not boot it. virt-v2v will make it the default during conversion.
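The network mapping mentioned above for guests with multiple interfaces lives in /etc/virt-v2v.conf. The fragment below is only an illustrative sketch of the general shape of such a mapping; the source bridge name xenbr1 and the target logical network rhevm are placeholders, and you should confirm the exact element names against the virt-v2v.conf(5) man page shipped with your virt-v2v package:
<virt-v2v>
  <!-- Map the Xen bridge a guest NIC is attached to onto a RHEV logical network -->
  <network type="bridge" name="xenbr1">
    <network type="network" name="rhevm"/>
  </network>
</virt-v2v>
One mapping entry is needed per distinct source bridge or network, so that every interface of a multi-NIC guest is attached to the intended RHEV logical network after conversion.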
[ "virt-v2v -i libvirtxml -o rhev -osd storage.example.com:/exportdomain --network rhevm guest_name.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/subsect-convert-a-local-windows-xen-virtual-machine
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/net/9.0/html/release_notes_for_.net_9.0_rpm_packages/proc_providing-feedback-on-red-hat-documentation_release-notes-for-dotnet-rpms
Chapter 12. KafkaListenerAuthenticationCustom schema reference
Chapter 12. KafkaListenerAuthenticationCustom schema reference Used in: GenericKafkaListener Full list of KafkaListenerAuthenticationCustom schema properties To configure custom authentication, set the type property to custom . Custom authentication allows for any type of Kafka-supported authentication to be used. Example custom OAuth authentication configuration spec: kafka: config: principal.builder.class: SimplePrincipal.class listeners: - name: oauth-bespoke port: 9093 type: internal tls: true authentication: type: custom sasl: true listenerConfig: oauthbearer.sasl.client.callback.handler.class: client.class oauthbearer.sasl.server.callback.handler.class: server.class oauthbearer.sasl.login.callback.handler.class: login.class oauthbearer.connections.max.reauth.ms: 999999999 sasl.enabled.mechanisms: oauthbearer oauthbearer.sasl.jaas.config: | org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; secrets: - name: example A protocol map is generated that uses the sasl and tls values to determine which protocol to map to the listener. SASL = True, TLS = True SASL_SSL SASL = False, TLS = True SSL SASL = True, TLS = False SASL_PLAINTEXT SASL = False, TLS = False PLAINTEXT 12.1. listenerConfig Listener configuration specified using listenerConfig is prefixed with listener.name. <listener_name>-<port> . For example, sasl.enabled.mechanisms becomes listener.name. <listener_name>-<port> .sasl.enabled.mechanisms . 12.2. secrets Secrets are mounted to /opt/kafka/custom-authn-secrets/custom-listener- <listener_name>-<port> / <secret_name> in the Kafka broker nodes' containers. For example, the mounted secret ( example ) in the example configuration would be located at /opt/kafka/custom-authn-secrets/custom-listener-oauth-bespoke-9093/example . 12.3. Principal builder You can set a custom principal builder in the Kafka cluster configuration. However, the principal builder is subject to the following requirements: The specified principal builder class must exist on the image. Before building your own, check if one already exists. You'll need to rebuild the AMQ Streams images with the required classes. No other listener is using oauth type authentication. This is because an OAuth listener appends its own principal builder to the Kafka configuration. The specified principal builder is compatible with AMQ Streams. Custom principal builders must support peer certificates for authentication, as AMQ Streams uses these to manage the Kafka cluster. Note Kafka's default principal builder class supports the building of principals based on the names of peer certificates. The custom principal builder should provide a principal of type user using the name of the SSL peer certificate. The following example shows a custom principal builder that satisfies the OAuth requirements of AMQ Streams. Example principal builder for custom OAuth configuration public final class CustomKafkaPrincipalBuilder implements KafkaPrincipalBuilder { public CustomKafkaPrincipalBuilder() {} @Override public KafkaPrincipal build(AuthenticationContext context) { if (context instanceof SslAuthenticationContext) { SSLSession sslSession = ((SslAuthenticationContext) context).session(); try { return new KafkaPrincipal( KafkaPrincipal.USER_TYPE, sslSession.getPeerPrincipal().getName()); } catch (SSLPeerUnverifiedException e) { throw new IllegalArgumentException("Cannot use an unverified peer for authentication", e); } } // Create your own KafkaPrincipal here ... } } 12.4. 
KafkaListenerAuthenticationCustom schema properties The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationCustom type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , and KafkaListenerAuthenticationOAuth . It must have the value custom for the type KafkaListenerAuthenticationCustom . Property Description listenerConfig Configuration to be used for a specific listener. All values are prefixed with listener.name. <listener_name> . map sasl Enable or disable SASL on this listener. boolean secrets Secrets to be mounted to /opt/kafka/custom-authn-secrets/custom-listener- <listener_name>-<port> / <secret_name> . GenericSecretSource array type Must be custom . string
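To illustrate the prefixing rule described in the listenerConfig section, the entries from the example configuration above would surface in the generated broker configuration roughly as follows. This rendering is an approximation for illustration only; the listener name and port ( oauth-bespoke , 9093 ) are taken from the example, and the exact layout of the generated configuration is not shown in this reference.

# Illustrative only - listener-scoped properties with the prefix rule applied
listener.name.oauth-bespoke-9093.sasl.enabled.mechanisms=oauthbearer
listener.name.oauth-bespoke-9093.oauthbearer.sasl.client.callback.handler.class=client.class
listener.name.oauth-bespoke-9093.oauthbearer.sasl.server.callback.handler.class=server.class
listener.name.oauth-bespoke-9093.oauthbearer.sasl.login.callback.handler.class=login.class
listener.name.oauth-bespoke-9093.oauthbearer.connections.max.reauth.ms=999999999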
[ "spec: kafka: config: principal.builder.class: SimplePrincipal.class listeners: - name: oauth-bespoke port: 9093 type: internal tls: true authentication: type: custom sasl: true listenerConfig: oauthbearer.sasl.client.callback.handler.class: client.class oauthbearer.sasl.server.callback.handler.class: server.class oauthbearer.sasl.login.callback.handler.class: login.class oauthbearer.connections.max.reauth.ms: 999999999 sasl.enabled.mechanisms: oauthbearer oauthbearer.sasl.jaas.config: | org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; secrets: - name: example", "public final class CustomKafkaPrincipalBuilder implements KafkaPrincipalBuilder { public KafkaPrincipalBuilder() {} @Override public KafkaPrincipal build(AuthenticationContext context) { if (context instanceof SslAuthenticationContext) { SSLSession sslSession = ((SslAuthenticationContext) context).session(); try { return new KafkaPrincipal( KafkaPrincipal.USER_TYPE, sslSession.getPeerPrincipal().getName()); } catch (SSLPeerUnverifiedException e) { throw new IllegalArgumentException(\"Cannot use an unverified peer for authentication\", e); } } // Create your own KafkaPrincipal here } }" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaListenerAuthenticationCustom-reference
Chapter 3. Metro-DR solution for OpenShift Data Foundation
Chapter 3. Metro-DR solution for OpenShift Data Foundation This section of the guide provides insights into the Metropolitan Disaster Recovery (Metro-DR) steps and commands necessary to be able to failover an application from one OpenShift Container Platform cluster to another and then failback the same application to the original primary cluster. In this case the OpenShift Container Platform clusters will be created or imported using Red Hat Advanced Cluster Management (RHACM) and have distance limitations between the OpenShift Container Platform clusters of less than 10ms RTT latency. The persistent storage for applications is provided by an external Red Hat Ceph Storage (RHCS) cluster stretched between the two locations with the OpenShift Container Platform instances connected to this storage cluster. An arbiter node with a storage monitor service is required at a third location (different location than where OpenShift Container Platform instances are deployed) to establish quorum for the RHCS cluster in the case of a site outage. This third location can be in the range of ~100ms RTT from the storage cluster connected to the OpenShift Container Platform instances. This is a general overview of the Metro DR steps required to configure and execute OpenShift Disaster Recovery (ODR) capabilities using OpenShift Data Foundation and RHACM across two distinct OpenShift Container Platform clusters separated by distance. In addition to these two clusters called managed clusters, a third OpenShift Container Platform cluster is required that will be the Red Hat Advanced Cluster Management (RHACM) hub cluster. Important You can now easily set up Metro-DR to protect your workloads on OpenShift virtualization using OpenShift Data Foundation. For more information, see Knowledgebase article . This is a Technology Preview feature and is subject to Technology Preview support limitations. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 3.1. Components of Metro-DR solution Metro-DR is composed of Red Hat Advanced Cluster Management for Kubernetes, Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment. RHACM is split into two parts: RHACM Hub: components that run on the multi-cluster control plane. Managed clusters: components that run on the clusters that are managed. For more information about this product, see RHACM documentation and the RHACM "Manage Applications" documentation . Red Hat Ceph Storage Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. It significantly lowers the cost of storing enterprise data and helps organizations manage exponential data growth. 
The software is a robust and modern petabyte-scale storage platform for public or private cloud deployments. For more product information, see Red Hat Ceph Storage . OpenShift Data Foundation OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. It is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack and Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications. OpenShift DR OpenShift DR is a disaster recovery orchestrator for stateful applications across a set of peer OpenShift clusters which are deployed and managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application's state on Persistent Volumes. These include: Protecting an application and its state relationship across OpenShift clusters Failing over an application and its state to a peer cluster Relocate an application and its state to the previously deployed cluster OpenShift DR is split into three components: ODF Multicluster Orchestrator : Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships. OpenShift DR Hub Operator : Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications. OpenShift DR Cluster Operator : Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application. 3.2. Metro-DR deployment workflow This section provides an overview of the steps required to configure and deploy Metro-DR capabilities using the latest versions of Red Hat OpenShift Data Foundation, Red Hat Ceph Storage (RHCS) and Red Hat Advanced Cluster Management for Kubernetes (RHACM) version 2.8 or later, across two distinct OpenShift Container Platform clusters. In addition to two managed clusters, a third OpenShift Container Platform cluster will be required to deploy the Advanced Cluster Management. To configure your infrastructure, perform the below steps in the order given: Ensure requirements across the Hub, Primary and Secondary Openshift Container Platform clusters that are part of the DR solution are met. See Requirements for enabling Metro-DR . Ensure you meet the requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter. See Requirements for deploying Red Hat Ceph Storage . Deploy and configure Red Hat Ceph Storage stretch mode. For instructions on enabling Ceph cluster on two different data centers using stretched mode functionality, see Deploying Red Hat Ceph Storage . Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Installing OpenShift Data Foundation on managed clusters . Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing OpenShift Data Foundation Multicluster Orchestrator on Hub cluster . Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters . Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster . Note The Metro-DR solution can only have one DRpolicy. 
Testing your disaster recovery solution with: Subscription-based application: Create sample applications. See Creating a sample Subscription-based application . Test failover and relocate operations using the sample application between managed clusters. See Subscription-based application failover and relocating subscription-based application . ApplicationSet-based application: Create sample applications. See Creating ApplicationSet-based applications . Test failover and relocate operations using the sample application between managed clusters. See ApplicationSet-based application failover and relocating ApplicationSet-based application . 3.3. Requirements for enabling Metro-DR The prerequisites to installing a disaster recovery solution supported by Red Hat OpenShift Data Foundation are as follows: You must have the following OpenShift clusters that have network reachability between them: Hub cluster where Red Hat Advanced Cluster Management (RHACM) for Kubernetes operator is installed. Primary managed cluster where OpenShift Data Foundation is running. Secondary managed cluster where OpenShift Data Foundation is running. Ensure that RHACM operator and MultiClusterHub is installed on the Hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Important Ensure that application traffic routing and redirection are configured appropriately. On the Hub cluster Navigate to All Clusters Infrastructure Clusters . Import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. After the managed clusters are successfully created or imported, you can see the list of clusters that were imported or created on the console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Warning The Openshift Container Platform managed clusters and the Red Hat Ceph Storage (RHCS) nodes have distance limitations. The network latency between the sites must be below 10 milliseconds round-trip time (RTT). 3.4. Requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter Red Hat Ceph Storage is an open-source enterprise platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data, so you can focus on the applications and workloads that use it. This section provides a basic overview of the Red Hat Ceph Storage deployment. For more complex deployment, refer to the official documentation guide for Red Hat Ceph Storage 6.1 . Note Only Flash media is supported since it runs with min_size=1 when degraded. Use stretch mode only with all-flash OSDs. Using all-flash OSDs minimizes the time needed to recover once connectivity is restored, thus minimizing the potential for data loss. Important Erasure coded pools cannot be used with stretch mode. 3.4.1. Hardware requirements For information on minimum hardware requirements for deploying Red Hat Ceph Storage, see Minimum hardware recommendations for containerized Ceph . Table 3.1. 
Physical server locations and Ceph component layout for Red Hat Ceph Storage cluster deployment: Node name Datacenter Ceph components ceph1 DC1 OSD+MON+MGR ceph2 DC1 OSD+MON ceph3 DC1 OSD+MDS+RGW ceph4 DC2 OSD+MON+MGR ceph5 DC2 OSD+MON ceph6 DC2 OSD+MDS+RGW ceph7 DC3 MON 3.4.2. Software requirements Use the latest software version of Red Hat Ceph Storage 6.1 . For more information on the supported Operating System versions for Red Hat Ceph Storage, see Compatibility Matrix for Red Hat Ceph Storage . 3.4.3. Network configuration requirements The recommended Red Hat Ceph Storage configuration is as follows: You must have two separate networks, one public network and one private network. You must have three different datacenters that support VLANS and subnets for Cephs private and public network for all datacenters. Note You can use different subnets for each of the datacenters. The latencies between the two datacenters running the Red Hat Ceph Storage Object Storage Devices (OSDs) cannot exceed 10 ms RTT. For the arbiter datacenter, this was tested with values as high up to 100 ms RTT to the other two OSD datacenters. Here is an example of a basic network configuration that we have used in this guide: DC1: Ceph public/private network: 10.0.40.0/24 DC2: Ceph public/private network: 10.0.40.0/24 DC3: Ceph public/private network: 10.0.40.0/24 For more information on the required network environment, see Ceph network configuration . 3.5. Deploying Red Hat Ceph Storage 3.5.1. Node pre-deployment steps Before installing the Red Hat Ceph Storage Ceph cluster, perform the following steps to fulfill all the requirements needed. Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool: subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0 Enable access for all the nodes in the Ceph cluster for the following repositories: rhel9-for-x86_64-baseos-rpms rhel9-for-x86_64-appstream-rpms subscription-manager repos --disable="*" --enable="rhel9-for-x86_64-baseos-rpms" --enable="rhel9-for-x86_64-appstream-rpms" Update the operating system RPMs to the latest version and reboot if needed: dnf update -y reboot Select a node from the cluster to be your bootstrap node. ceph1 is our bootstrap node in this example going forward. Only on the bootstrap node ceph1 , enable the ansible-2.9-for-rhel-9-x86_64-rpms and rhceph-6-tools-for-rhel-9-x86_64-rpms repositories: subscription-manager repos --enable="ansible-2.9-for-rhel-9-x86_64-rpms" --enable="rhceph-6-tools-for-rhel-9-x86_64-rpms" Configure the hostname using the bare/short hostname in all the hosts. hostnamectl set-hostname <short_name> Verify the hostname configuration for deploying Red Hat Ceph Storage with cephadm. USD hostname Example output: Modify /etc/hosts file and add the fqdn entry to the 127.0.0.1 IP by setting the DOMAIN variable with our DNS domain name. Check the long hostname with the fqdn using the hostname -f option. USD hostname -f Example output: Note To know more about why these changes are required, see Fully Qualified Domain Names vs Bare Host Names . Run the following steps on the bootstrap node. In our example, the bootstrap node is ceph1 . Install the cephadm-ansible RPM package: USD sudo dnf install -y cephadm-ansible Important To run the ansible playbooks, you must have ssh passwordless access to all the nodes that are configured to the Red Hat Ceph Storage cluster. 
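The guide assumes that passwordless SSH access is already in place. A minimal sketch of one way to set it up from the bootstrap node is shown below; this is an illustration rather than a step from this procedure, and it assumes the deployment-user account already exists on every node and that the ~/.ssh/ceph.pem key referenced in the next step is acceptable in your environment.

# Run on the bootstrap node (ceph1) as the deployment user.
# Assumption: deployment-user exists on every node and can authenticate once with a password.
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/ceph.pem   # creates ceph.pem and ceph.pem.pub
for host in ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7; do
  ssh-copy-id -i ~/.ssh/ceph.pem.pub deployment-user@${host}
done
# Confirm that key-based login works without a password prompt.
ssh -i ~/.ssh/ceph.pem deployment-user@ceph2 hostname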
Ensure that the configured user (for example, deployment-user ) has root privileges to invoke the sudo command without needing a password. To use a custom key, configure the selected user (for example, deployment-user ) ssh config file to specify the id/key that will be used for connecting to the nodes via ssh: cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF Build the ansible inventory cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF Note Here, the Hosts ( Ceph1 and Ceph4 ) belonging to two different data centers are configured as part of the [admin] group on the inventory file and are tagged as _admin by cephadm . Each of these admin nodes receive the admin ceph keyring during the bootstrap process so that when one data center is down, we can check using the other available admin node. Verify that ansible can access all nodes using the ping module before running the pre-flight playbook. USD ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b Example output: Navigate to the /usr/share/cephadm-ansible directory. Run ansible-playbook with relative file paths. USD ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" The preflight playbook Ansible playbook configures the RHCS dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible . For additional information, see Running the preflight playbook 3.5.2. Cluster bootstrapping and service deployment with cephadm utility The cephadm utility installs and starts a single Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node where the cephadm bootstrap command is run. In this guide we are going to bootstrap the cluster and deploy all the needed Red Hat Ceph Storage services in one step using a cluster specification yaml file. If you find issues during the deployment, it may be easier to troubleshoot the errors by dividing the deployment into two steps: Bootstrap Service deployment Note For additional information on the bootstrapping process, see Bootstrapping a new storage cluster . Procedure Create json file to authenticate against the container registry using a json file as follows: USD cat <<EOF > /root/registry.json { "url":"registry.redhat.io", "username":"User", "password":"Pass" } EOF Create a cluster-spec.yaml that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run following table 3.1. 
cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: "mon" --- service_type: mds service_id: cephfs placement: label: "mds" --- service_type: mgr service_name: mgr placement: label: "mgr" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: "osd" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: "rgw" spec: rgw_frontend_port: 8080 EOF Retrieve the IP for the NIC with the Red Hat Ceph Storage public network configured from the bootstrap node. After substituting 10.0.40.0 with the subnet that you have defined in your ceph public network, execute the following command. USD ip a | grep 10.0.40 Example output: Run the cephadm bootstrap command as the root user on the node that will be the initial Monitor node in the cluster. The IP_ADDRESS option is the node's IP address that you are using to run the cephadm bootstrap command. Note If you have configured a different user instead of root for passwordless SSH access, then use the --ssh-user= flag with the cepadm bootstrap command. If you are using non default/id_rsa ssh key names, then use --ssh-private-key and --ssh-public-key options with cephadm command. USD cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json Important If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line. Once the bootstrap finishes, you will see the following output from the cephadm bootstrap command: You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/ Verify the status of Red Hat Ceph Storage cluster deployment using the Ceph CLI client from ceph1: USD ceph -s Example output: Note It may take several minutes for all the services to start. It is normal to get a global recovery event while you do not have any OSDs configured. You can use ceph orch ps and ceph orch ls to further check the status of the services. Verify if all the nodes are part of the cephadm cluster. USD ceph orch host ls Example output: Note You can run Ceph commands directly from the host because ceph1 was configured in the cephadm-ansible inventory as part of the [admin] group. The Ceph admin keys were copied to the host during the cephadm bootstrap process. Check the current placement of the Ceph monitor services on the datacenters. 
USD ceph orch ps | grep mon | awk '{print USD1 " " USD2}' Example output: Check the current placement of the Ceph manager services on the datacenters. Example output: Check the ceph osd crush map layout to ensure that each host has one OSD configured and its status is UP . Also, double-check that each node is under the right datacenter bucket as specified in Table 3.1. USD ceph osd tree Example output: Create and enable a new RBD block pool. Note The number 32 at the end of the command is the number of PGs assigned to this pool. The number of PGs can vary depending on several factors like the number of OSDs in the cluster, expected % used of the pool, etc. You can use the following calculator to determine the number of PGs needed: Ceph Placement Groups (PGs) per Pool Calculator . Verify that the RBD pool has been created. Example output: Verify that MDS services are active and have located one service on each datacenter. Example output: Create the CephFS volume. USD ceph fs volume create cephfs Note The ceph fs volume create command also creates the needed data and meta CephFS pools. For more information, see Configuring and Mounting Ceph File Systems . Check the Ceph status to verify how the MDS daemons have been deployed. Ensure that the state is active where ceph6 is the primary MDS for this filesystem and ceph3 is the secondary MDS. USD ceph fs status Example output: Verify that RGW services are active. USD ceph orch ps | grep rgw Example output: 3.5.3. Configuring Red Hat Ceph Storage stretch mode Once the Red Hat Ceph Storage cluster is fully deployed using cephadm , use the following procedure to configure the stretch cluster mode. The new stretch mode is designed to handle the 2-site case. Procedure Check the current election strategy being used by the monitors with the ceph mon dump command. By default in a Ceph cluster, the election strategy is set to classic. ceph mon dump | grep election_strategy Example output: Change the monitor election to connectivity. ceph mon set election_strategy connectivity Run the ceph mon dump command again to verify the election_strategy value. USD ceph mon dump | grep election_strategy Example output: To know more about the different election strategies, see Configuring monitor election strategy . Set the location for all our Ceph monitors: ceph mon set_location ceph1 datacenter=DC1 ceph mon set_location ceph2 datacenter=DC1 ceph mon set_location ceph4 datacenter=DC2 ceph mon set_location ceph5 datacenter=DC2 ceph mon set_location ceph7 datacenter=DC3 Verify that each monitor has its appropriate location. USD ceph mon dump Example output: Create a CRUSH rule that makes use of this OSD crush topology by installing the ceph-base RPM package in order to use the crushtool command: USD dnf -y install ceph-base To know more about CRUSH ruleset, see Ceph CRUSH ruleset . Get the compiled CRUSH map from the cluster: USD ceph osd getcrushmap > /etc/ceph/crushmap.bin Decompile the CRUSH map and convert it to a text file in order to be able to edit it: USD crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt Add the following rule to the CRUSH map by editing the text file /etc/ceph/crushmap.txt at the end of the file. USD vim /etc/ceph/crushmap.txt This example is applicable for active applications in both OpenShift Container Platform clusters. Note The rule id has to be unique. In the example, we only have one other CRUSH rule, with id 0, hence we are using id 1. If your deployment has more rules created, then use the free id.
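The rule text itself is carried in the command listing rather than inline here. Based on the field descriptions that follow, a reconstruction of the stretch_rule entry appended to /etc/ceph/crushmap.txt would look like the sketch below; treat it as an approximation and confirm it against the rule shipped with your version of this guide before compiling the map.

# Assumed reconstruction of the stretch_rule described in the field list that follows
rule stretch_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}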
The CRUSH rule declared contains the following information: Rule name Description: A unique whole name for identifying the rule. Value: stretch_rule id Description: A unique whole number for identifying the rule. Value: 1 type Description: Describes a rule for either a storage drive replicated or erasure-coded. Value: replicated min_size Description: If a pool makes fewer replicas than this number, CRUSH will not select this rule. Value: 1 max_size Description: If a pool makes more replicas than this number, CRUSH will not select this rule. Value: 10 step take default Description: Takes the root bucket called default , and begins iterating down the tree. step choose firstn 0 type datacenter Description: Selects the datacenter bucket, and goes into its subtrees. step chooseleaf firstn 2 type host Description: Selects the number of buckets of the given type. In this case, it is two different hosts located in the datacenter it entered at the level. step emit Description: Outputs the current value and empties the stack. Typically used at the end of a rule, but may also be used to pick from different trees in the same rule. Compile the new CRUSH map from the file /etc/ceph/crushmap.txt and convert it to a binary file called /etc/ceph/crushmap2.bin : USD crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin Inject the new crushmap we created back into the cluster: USD ceph osd setcrushmap -i /etc/ceph/crushmap2.bin Example output: Note The number 17 is a counter and it will increase (18,19, and so on) depending on the changes you make to the crush map. Verify that the stretched rule created is now available for use. ceph osd crush rule ls Example output: Enable the stretch cluster mode. USD ceph mon enable_stretch_mode ceph7 stretch_rule datacenter In this example, ceph7 is the arbiter node, stretch_rule is the crush rule we created in the step and datacenter is the dividing bucket. Verify all our pools are using the stretch_rule CRUSH rule we have created in our Ceph cluster: USD for pool in USD(rados lspools);do echo -n "Pool: USD{pool}; ";ceph osd pool get USD{pool} crush_rule;done Example output: This indicates that a working Red Hat Ceph Storage stretched cluster with arbiter mode is now available. 3.6. Installing OpenShift Data Foundation on managed clusters To configure storage replication between the two OpenShift Container Platform clusters, OpenShift Data Foundation operator must be installed first on each managed cluster. Prerequisites Ensure that you have met the hardware requirements for OpenShift Data Foundation external deployments. For a detailed description of the hardware requirements, see External mode requirements . Procedure Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters. After installing the operator, create a StorageSystem using the option Full deployment type and Connect with external storage platform where your Backing storage type is Red Hat Ceph Storage . For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode . Use the following flags with the ceph-external-cluster-details-exporter.py script. At a minimum, you must use the following three flags with the ceph-external-cluster-details-exporter.py script : --rbd-data-pool-name With the name of the RBD pool that was created during RHCS deployment for OpenShift Container Platform. For example, the pool can be called rbdpool . --rgw-endpoint Provide the endpoint in the format <ip_address>:<port> . 
It is the RGW IP of the RGW daemon running on the same site as the OpenShift Container Platform cluster that you are configuring. --run-as-user With a different client name for each site. The following flags are optional if default values were used during the RHCS deployment: --cephfs-filesystem-name With the name of the CephFS filesystem we created during RHCS deployment for OpenShift Container Platform, the default filesystem name is cephfs . --cephfs-data-pool-name With the name of the CephFS data pool we created during RHCS deployment for OpenShift Container Platform, the default pool is called cephfs.data . --cephfs-metadata-pool-name With the name of the CephFS metadata pool we created during RHCS deployment for OpenShift Container Platform, the default pool is called cephfs.meta . Run the following command on the bootstrap node ceph1 to get the IP for the RGW endpoints in datacenter1 and datacenter2: Example output: Example output: Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the first OpenShift Container Platform managed cluster cluster1 on the bootstrap node ceph1 . Note Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment. Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the second OpenShift Container Platform managed cluster cluster2 on the bootstrap node ceph1 . Note Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment. Save the two files generated in the bootstrap cluster (ceph1) ocp-cluster1.json and ocp-cluster2.json to your local machine. Use the contents of file ocp-cluster1.json on the OpenShift Container Platform console on cluster1 where external OpenShift Data Foundation is being deployed. Use the contents of file ocp-cluster2.json on the OpenShift Container Platform console on cluster2 where external OpenShift Data Foundation is being deployed. Review the settings and then select Create StorageSystem . Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command: For the Multicloud Gateway (MCG): Wait for the status result to be Ready for both queries on the Primary managed cluster and the Secondary managed cluster . On the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. 3.7. Installing OpenShift Data Foundation Multicluster Orchestrator operator OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform's OperatorHub on the Hub cluster. Procedure On the Hub cluster , navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator . Click the ODF Multicluster Orchestrator tile. Keep all default settings and click Install . Ensure that the operator resources are installed in the openshift-operators project and available to all namespaces. Note The ODF Multicluster Orchestrator also installs the OpenShift DR Hub Operator on the RHACM hub cluster as a dependency. Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in the openshift-operators namespace. Example output: 3.8.
Configuring SSL access across clusters Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. Procedure Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt . Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt . Create a new ConfigMap file to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml . Note There could be more or less than three certificates for each cluster as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before. Create the ConfigMap on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: Patch default proxy resource on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: 3.9. Creating Disaster Recovery Policy on Hub cluster Openshift Disaster Recovery Policy (DRPolicy) resource specifies OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster scoped resource that users can apply to applications that require Disaster Recovery solution. The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console . Prerequisites Ensure that there is a minimum set of two managed clusters. Procedure On the OpenShift console , navigate to All Clusters Data Services Data policies . Click Create DRPolicy . Enter Policy name . Ensure that each DRPolicy has a unique name (for example: ocp4perf1-ocp4perf2 ). Select two clusters from the list of managed clusters to which this new policy will be associated with. Replication policy is automatically set to sync based on the OpenShift clusters selected. Click Create . Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with your unique name. Example output: When a DRPolicy is created, along with it, two DRCluster resources are also created. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded . Note Editing of SchedulingInterval , ReplicationClassSelector , VolumeSnapshotClassSelector and DRClusters field values are not supported in the DRPolicy. Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster . Get the names of the DRClusters on the Hub cluster. Example output: Check S3 access to each bucket created on each managed cluster. Use the DRCluster validation command, where <drcluster_name> is replaced with your unique name. Note Editing of Region and S3ProfileName field values are non supported in DRClusters. Example output: Note Make sure to run commands for both DRClusters on the Hub cluster . Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster . 
Example output: You can also verify that OpenShift DR Cluster Operator is installed successfully on the OperatorHub of each managed cluster. Verify that the secret is propagated correctly on the Primary managed cluster and the Secondary managed cluster . Match the output with the s3SecretRef from the Hub cluster : 3.10. Configure DRClusters for fencing automation This configuration is required for enabling fencing prior to application failover. In order to prevent writes to the persistent volume from the cluster which is hit by a disaster, OpenShift DR instructs Red Hat Ceph Storage (RHCS) to fence the nodes of the cluster from the RHCS external storage. This section guides you on how to add the IPs or the IP Ranges for the nodes of the DRCluster. 3.10.1. Add node IP addresses to DRClusters Find the IP addresses for all of the OpenShift nodes in the managed clusters by running this command in the Primary managed cluster and the Secondary managed cluster . Example output: Once you have the IP addresses then the DRCluster resources can be modified for each managed cluster. Find the DRCluster names on the Hub Cluster. Example output: Edit each DRCluster to add your unique IP addresses after replacing <drcluster_name> with your unique name. Example output: Note There could be more than six IP addresses. Modify this DRCluster configuration also for IP addresses on the Secondary managed clusters in the peer DRCluster resource (e.g., ocp4perf2). 3.10.2. Add fencing annotations to DRClusters Add the following annotations to all the DRCluster resources. These annotations include details needed for the NetworkFence resource created later in these instructions (prior to testing application failover). Note Replace <drcluster_name> with your unique name. Example output: Make sure to add these annotations for both DRCluster resources (for example: ocp4perf1 and ocp4perf2 ). 3.11. Create sample application for testing disaster recovery solution OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for Subscription-based and ApplicationSet-based applications that are managed by RHACM. For more details, see Subscriptions and ApplicationSet documentation. The following sections detail how to create an application and apply a DRPolicy to an application. Subscription-based applications OpenShift users that do not have cluster-admin permissions, see the knowledge article on how to assign necessary permissions to an application user for executing disaster recovery actions. ApplicationSet-based applications OpenShift users that do not have cluster-admin permissions cannot create ApplicationSet-based applications. 3.11.1. Subscription-based applications 3.11.1.1. Creating a sample Subscription-based application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate , we need a sample application. Prerequisites Ensure that the Red Hat OpenShift GitOps operator is installed on the Hub cluster. For instructions, see RHACM documentation . When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster. Use the sample application called busybox as an example. Ensure all external routes of the application are configured using either Global Traffic Manager (GTM) or Global Server Load Balancing (GLSB) service for traffic redirection when the application fails over or is relocated. 
As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions that belong together, refer to a single Placement Rule to DR protect them as a group. Further create them as a single application for a logical grouping of the subscriptions for future DR actions like failover and relocate. Note If unrelated subscriptions refer to the same Placement Rule for placement actions, they are also DR protected as the DR workflow controls all subscriptions that references the Placement Rule. Procedure On the Hub cluster, navigate to Applications and click Create application . Select type as Subscription . Enter your application Name (for example, busybox ) and Namespace (for example, busybox-sample ). In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples where the Branch is release-4.14 and Path is busybox-odr-metro . Scroll down in the form until you see Deploy application resources on clusters with all specified labels . Select the global Cluster sets or the one that includes the correct managed clusters for your environment. Add a label <name> with its value set to the managed cluster name. Click Create which is at the top right hand corner. On the follow-on screen go to the Topology tab. You should see that there are all Green checkmarks on the application topology. Note To get more information, click on any of the topology elements and a window will appear on the right of the topology view. Validating the sample application deployment. Now that the busybox application has been deployed to your preferred Cluster, the deployment can be validated. Log in to your managed cluster where busybox was deployed by RHACM. Example output: 3.11.1.2. Apply Data policy to sample application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. You can also use the Add application resource option to add multiple resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. Click View more details to view the status of ongoing activities with the policy in use with the application. After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.2. ApplicationSet-based applications 3.11.2.1. 
Creating ApplicationSet-based applications Prerequisite Ensure that the Red Hat OpenShift GitOps operator is installed on the Hub cluster. For instructions, see RHACM documentation . Ensure that both Primary and Secondary managed clusters are registered to GitOps. For registration instructions, see Registering managed clusters to GitOps . Then check if the Placement used by GitOpsCluster resource to register both managed clusters, has the tolerations to deal with cluster unavailability. You can verify if the following tolerations are added to the Placement using the command oc get placement <placement-name> -n openshift-gitops -o yaml . In case the tolerations are not added, see Configuring application placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps . Procedure On the Hub cluster, navigate to All Clusters Applications and click Create application . Select type as Application set . In General step 1, enter your Application set name . Select Argo server openshift-gitops and Requeue time as 180 seconds. Click . In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples Select Revision as release-4.14 Choose Path as busybox-odr-metro . Enter Remote namespace value. (example, busybox-sample) and click . Select Sync policy settings and click . You can choose one or more options. Add a label <name> with its value set to the managed cluster name. Click . Review the setting details and click Submit . 3.11.2.2. Apply Data policy to sample ApplicationSet-based application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.3. Deleting sample application You can delete the sample application busybox using the RHACM console. Note The instructions to delete the sample application should not be executed until the failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters. Procedure On the RHACM console, navigate to Applications . Search for the sample application to be deleted (for example, busybox ). Click the Action Menu (...) to the application you want to delete. 
Click Delete application . When the Delete application is selected a new screen will appear asking if the application related resources should also be deleted. Select Remove application related resources checkbox to delete the Subscription and PlacementRule. Click Delete . This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on). In addition to the resources deleted using the RHACM console, delete the DRPlacementControl if it is not auto-deleted after deleting the busybox application. Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample . For ApplicationSet applications, select the project as openshift-gitops . Click OpenShift DR Hub Operator and then click the DRPlacementControl tab. Click the Action Menu (...) to the busybox application DRPlacementControl that you want to delete. Click Delete DRPlacementControl . Click Delete . Note This process can be used to delete any application with a DRPlacementControl resource. 3.12. Subscription-based application failover between managed clusters Perform a failover when a managed cluster becomes unavailable, due to any reason. This failover method is application-based. Prerequisites When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is a unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . After the Failover application modal is shown, select policy and target cluster to which the associated application will failover in case of a disaster. Click the Select subscription group dropdown to verify the default selection or modify this setting. By default, the subscription group that replicates for the application resources is selected. Check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Proceed to step 8. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . All the system workloads and their available resources are now transferred to the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. 
In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.13. ApplicationSet-based application failover between managed clusters Perform a failover when a managed cluster becomes unavailable, due to any reason. This failover method is application-based. Prerequisites When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is a unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . When the Failover application modal is shown, verify the details presented are correct and check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Click Initiate . All the system workloads and their available resources are now transferred to the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the ongoing activities associated with the policy in use with the application. 3.14. Relocating Subscription-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Perform a relocation once the failed cluster is available and the application resources are cleaned up on the failed cluster. Prerequisite When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . 
A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting. Check the status of the Relocation readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for relocation to start. Proceed to step 8. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . All the system workloads and their available resources are now transferred to the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.15. Relocating an ApplicationSet-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. 
Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whichever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the next step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve the issue before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage → Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary managed cluster, replacing <drcluster_name> with a unique name. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of the application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select the policy and the target cluster to which the associated application will relocate in case of a disaster. Click Initiate . All the system workloads and their available resources are now transferred to the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications → Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the relocation status associated with the policy in use with the application.
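A quick way to confirm the effect of fencing or unfencing on both managed clusters is to query the DRCluster resources from the Hub cluster. The following is a minimal sketch only; it assumes the example DRCluster names ocp4perf1 and ocp4perf2 shown in the outputs above and reads the spec.clusterFence and status.phase fields used throughout this procedure.

# Hypothetical helper, run from the Hub cluster: print the requested fencing
# state and the reported phase for each example DRCluster.
for dr in ocp4perf1 ocp4perf2; do
  oc get drcluster.ramendr.openshift.io "${dr}" \
    -o jsonpath='{.metadata.name}{"\t"}{.spec.clusterFence}{"\t"}{.status.phase}{"\n"}'
done

If the two values disagree for a cluster (for example, clusterFence is Unfenced but the phase still reports Fenced), wait and re-run the check before proceeding with the relocate steps above.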
[ "subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0", "subscription-manager repos --disable=\"*\" --enable=\"rhel9-for-x86_64-baseos-rpms\" --enable=\"rhel9-for-x86_64-appstream-rpms\"", "dnf update -y reboot", "subscription-manager repos --enable=\"ansible-2.9-for-rhel-9-x86_64-rpms\" --enable=\"rhceph-6-tools-for-rhel-9-x86_64-rpms\"", "hostnamectl set-hostname <short_name>", "hostname", "ceph1", "DOMAIN=\"example.domain.com\" cat <<EOF >/etc/hosts 127.0.0.1 USD(hostname).USD{DOMAIN} USD(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 USD(hostname).USD{DOMAIN} USD(hostname) localhost6 localhost6.localdomain6 EOF", "hostname -f", "ceph1.example.domain.com", "sudo dnf install -y cephadm-ansible", "cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF", "cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF", "ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b", "ceph6 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph4 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph3 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph2 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph5 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph1 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph7 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" }", "ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "cat <<EOF > /root/registry.json { \"url\":\"registry.redhat.io\", \"username\":\"User\", \"password\":\"Pass\" } EOF", "cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_type: mds service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: 
rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080 EOF", "ip a | grep 10.0.40", "10.0.40.78", "cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json", "You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/", "ceph -s", "cluster: id: 3a801754-e01f-11ec-b7ab-005056838602 health: HEALTH_OK services: mon: 5 daemons, quorum ceph1,ceph2,ceph4,ceph5,ceph7 (age 4m) mgr: ceph1.khuuot(active, since 5m), standbys: ceph4.zotfsp osd: 12 osds: 12 up (since 3m), 12 in (since 4m) rgw: 2 daemons active (2 hosts, 1 zones) data: pools: 5 pools, 107 pgs objects: 191 objects, 5.3 KiB usage: 105 MiB used, 600 GiB / 600 GiB avail 105 active+clean", "ceph orch host ls", "HOST ADDR LABELS STATUS ceph1 10.0.40.78 _admin osd mon mgr ceph2 10.0.40.35 osd mon ceph3 10.0.40.24 osd mds rgw ceph4 10.0.40.185 osd mon mgr ceph5 10.0.40.88 osd mon ceph6 10.0.40.66 osd mds rgw ceph7 10.0.40.221 mon", "ceph orch ps | grep mon | awk '{print USD1 \" \" USD2}'", "mon.ceph1 ceph1 mon.ceph2 ceph2 mon.ceph4 ceph4 mon.ceph5 ceph5 mon.ceph7 ceph7", "ceph orch ps | grep mgr | awk '{print USD1 \" \" USD2}'", "mgr.ceph2.ycgwyz ceph2 mgr.ceph5.kremtt ceph5", "ceph osd tree", "ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.87900 root default -16 0.43950 datacenter DC1 -11 0.14650 host ceph1 2 ssd 0.14650 osd.2 up 1.00000 1.00000 -3 0.14650 host ceph2 3 ssd 0.14650 osd.3 up 1.00000 1.00000 -13 0.14650 host ceph3 4 ssd 0.14650 osd.4 up 1.00000 1.00000 -17 0.43950 datacenter DC2 -5 0.14650 host ceph4 0 ssd 0.14650 osd.0 up 1.00000 1.00000 -9 0.14650 host ceph5 1 ssd 0.14650 osd.1 up 1.00000 1.00000 -7 0.14650 host ceph6 5 ssd 0.14650 osd.5 up 1.00000 1.00000", "ceph osd pool create 32 32 ceph osd pool application enable rbdpool rbd", "ceph osd lspools | grep rbdpool", "3 rbdpool", "ceph orch ps | grep mds", "mds.cephfs.ceph3.cjpbqo ceph3 running (17m) 117s ago 17m 16.1M - 16.2.9 mds.cephfs.ceph6.lqmgqt ceph6 running (17m) 117s ago 17m 16.1M - 16.2.9", "ceph fs volume create cephfs", "ceph fs status", "cephfs - 0 clients ====== RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs.ceph6.ggjywj Reqs: 0 /s 10 13 12 0 POOL TYPE USED AVAIL cephfs.cephfs.meta metadata 96.0k 284G cephfs.cephfs.data data 0 284G STANDBY MDS cephfs.ceph3.ogcqkl", "ceph orch ps | grep rgw", "rgw.objectgw.ceph3.kkmxgb ceph3 *:8080 running (7m) 3m ago 7m 52.7M - 16.2.9 rgw.objectgw.ceph6.xmnpah ceph6 *:8080 running (7m) 3m ago 7m 53.3M - 16.2.9", "ceph mon dump | grep election_strategy", "dumped monmap epoch 9 election_strategy: 1", "ceph mon set election_strategy connectivity", "ceph mon dump | grep election_strategy", "dumped monmap epoch 10 election_strategy: 3", "ceph mon set_location ceph1 datacenter=DC1 ceph mon set_location ceph2 datacenter=DC1 ceph mon set_location ceph4 datacenter=DC2 ceph mon set_location ceph5 datacenter=DC2 ceph mon set_location ceph7 datacenter=DC3", "ceph mon dump", "epoch 17 fsid dd77f050-9afe-11ec-a56c-029f8148ea14 last_changed 2022-03-04T07:17:26.913330+0000 created 2022-03-03T14:33:22.957190+0000 min_mon_release 16 (pacific) election_strategy: 3 0: [v2:10.0.143.78:3300/0,v1:10.0.143.78:6789/0] mon.ceph1; crush_location 
{datacenter=DC1} 1: [v2:10.0.155.185:3300/0,v1:10.0.155.185:6789/0] mon.ceph4; crush_location {datacenter=DC2} 2: [v2:10.0.139.88:3300/0,v1:10.0.139.88:6789/0] mon.ceph5; crush_location {datacenter=DC2} 3: [v2:10.0.150.221:3300/0,v1:10.0.150.221:6789/0] mon.ceph7; crush_location {datacenter=DC3} 4: [v2:10.0.155.35:3300/0,v1:10.0.155.35:6789/0] mon.ceph2; crush_location {datacenter=DC1}", "dnf -y install ceph-base", "ceph osd getcrushmap > /etc/ceph/crushmap.bin", "crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt", "vim /etc/ceph/crushmap.txt", "rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit } end crush map", "crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin", "ceph osd setcrushmap -i /etc/ceph/crushmap2.bin", "17", "ceph osd crush rule ls", "replicated_rule stretch_rule", "ceph mon enable_stretch_mode ceph7 stretch_rule datacenter", "for pool in USD(rados lspools);do echo -n \"Pool: USD{pool}; \";ceph osd pool get USD{pool} crush_rule;done", "Pool: device_health_metrics; crush_rule: stretch_rule Pool: cephfs.cephfs.meta; crush_rule: stretch_rule Pool: cephfs.cephfs.data; crush_rule: stretch_rule Pool: .rgw.root; crush_rule: stretch_rule Pool: default.rgw.log; crush_rule: stretch_rule Pool: default.rgw.control; crush_rule: stretch_rule Pool: default.rgw.meta; crush_rule: stretch_rule Pool: rbdpool; crush_rule: stretch_rule", "ceph orch ps | grep rgw.objectgw", "rgw.objectgw.ceph3.mecpzm ceph3 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp rgw.objectgw.ceph6.mecpzm ceph6 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp", "host ceph3.example.com host ceph6.example.com", "ceph3.example.com has address 10.0.40.24 ceph6.example.com has address 10.0.40.66", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --<rgw-endpoint> XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster1 > ocp-cluster1.json", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster2 > ocp-cluster2.json", "oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{\"\\n\"}'", "oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{\"\\n\"}'", "oc get pods -n openshift-operators", "NAME READY STATUS RESTARTS AGE odf-multicluster-console-6845b795b9-blxrn 1/1 Running 0 4d20h odfmo-controller-manager-f9d9dfb59-jbrsd 1/1 Running 0 4d20h ramen-hub-operator-6fb887f885-fss4w 2/2 Running 0 4d20h", "oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > primary.crt", "oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > secondary.crt", "apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- <copy contents of cert1 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 primary.crt here> -----END CERTIFICATE---- -----BEGIN CERTIFICATE----- <copy contents of cert1 from secondary.crt here> -----END 
CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 from secondary.crt here> -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config", "oc create -f cm-clusters-crt.yaml", "configmap/user-ca-bundle created", "oc patch proxy cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"user-ca-bundle\"}}}'", "proxy.config.openshift.io/cluster patched", "oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'", "Succeeded", "oc get drclusters", "NAME AGE ocp4perf1 4m42s ocp4perf2 4m42s", "oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{\"\\n\"}'", "Succeeded", "oc get csv,pod -n openshift-dr-system", "NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.14.0 Openshift DR Cluster Operator 4.14.0 Succeeded clusterserviceversion.operators.coreos.com/volsync-product.v0.8.0 VolSync 0.8.0 Succeeded NAME READY STATUS RESTARTS AGE pod/ramen-dr-cluster-operator-6467cf5d4c-cc8kz 2/2 Running 0 3d12h", "get secrets -n openshift-dr-system | grep Opaque", "get cm -n openshift-operators ramen-hub-operator-config -oyaml", "oc get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type==\"ExternalIP\")].address}{\"\\n\"}{end}'", "10.70.56.118 10.70.56.193 10.70.56.154 10.70.56.242 10.70.56.136 10.70.56.99", "oc get drcluster", "NAME AGE ocp4perf1 5m35s ocp4perf2 5m35s", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: s3ProfileName: s3profile-<drcluster_name>-ocs-external-storagecluster ## Add this section cidrs: - <IP_Address1>/32 - <IP_Address2>/32 - <IP_Address3>/32 - <IP_Address4>/32 - <IP_Address5>/32 - <IP_Address6>/32 [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: ## Add this section annotations: drcluster.ramendr.openshift.io/storage-clusterid: openshift-storage drcluster.ramendr.openshift.io/storage-driver: openshift-storage.rbd.csi.ceph.com drcluster.ramendr.openshift.io/storage-secret-name: rook-csi-rbd-provisioner drcluster.ramendr.openshift.io/storage-secret-namespace: openshift-storage [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get pods,pvc -n busybox-sample", "NAME READY STATUS RESTARTS AGE pod/busybox-67bf494b9-zl5tr 1/1 Running 0 77s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-c732e5fe-daaf-4c4d-99dd-462e04c18412 5Gi RWO ocs-storagecluster-ceph-rbd 77s", "tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] 
[...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Fenced", "ceph osd blocklist ls", "cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Fenced", "ceph osd blocklist ls", "cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "get pods -A | egrep -v 'Running|Completed'", "NAMESPACE NAME READY STATUS RESTARTS AGE", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Unfenced", "ceph osd blocklist ls", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "get pods -A | egrep -v 'Running|Completed'", "NAMESPACE NAME READY STATUS RESTARTS AGE", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Unfenced", "ceph osd blocklist ls" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/metro-dr-solution
12.3. Translator Properties
12.3. Translator Properties Translators can have a number of configurable properties. These are divided among the following categories: Execution Properties - these properties determine aspects of how data is retrieved. A list of properties common to all translators is provided in Section 12.5, "Base Execution Properties" . Note The execution properties for a translator typically have reasonable defaults. For specific translator types, base execution properties are already tuned to match the source. In most cases the user will not need to adjust their values. Importer Properties - these properties determine what metadata is read for import. There are no common importer properties. Note The import capabilities of translators are currently used only by dynamic VDBs and not by Teiid Designer. See Section 6.6, "Dynamic VDBs" .
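As an illustration only, the following sketch shows how a dynamic VDB might override an execution property on a translator and pass an importer property to a source model. The VDB, model, source, and JNDI names are hypothetical placeholders, and the two properties shown may not apply to your translator type; consult the translator-specific chapters for the properties that are actually supported.

<vdb name="ExampleVDB" version="1">
    <model name="Accounts">
        <!-- Importer properties are set on the model with the "importer." prefix
             and control what metadata is read during import. -->
        <property name="importer.useFullSchemaName" value="false"/>
        <source name="accounts-source" translator-name="mysql-override"
                connection-jndi-name="java:/accountsDS"/>
    </model>
    <!-- A translator override that adjusts a single execution property;
         any property that is not listed keeps its tuned default. -->
    <translator name="mysql-override" type="mysql5">
        <property name="SupportsNativeQueries" value="true"/>
    </translator>
</vdb>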
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/translator_properties
Part II. Storage Administration
Part II. Storage Administration The Storage Administration section starts with storage considerations for Red Hat Enterprise Linux 7. Instructions regarding partitions, logical volume management, and swap partitions follow this. Disk Quotas and RAID systems are next, followed by the functions of the mount command, volume_key, and ACLs. SSD tuning, write barriers, I/O limits, and diskless systems follow this. The large chapter on Online Storage is next, and finally device mapper multipathing and virtual storage finish the part.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/part-storage-admin
Chapter 7. Connecting to external services
Chapter 7. Connecting to external services You can connect a router to an external service such as a message broker. The services may be running in the same OpenShift cluster as the router network, or running outside of OpenShift. Prerequisites You must have access to a message broker. Procedure This procedure describes how to connect a router to a broker and configure a link route to connect messaging clients to it. Start editing the Interconnect Custom Resource YAML file that describes the router deployment that you want to connect to a broker. In the spec section, configure the connection and link route. Sample router-mesh.yaml file apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: router-mesh spec: ... connectors: 1 - name: my-broker host: broker port: 5672 routeContainer: true linkRoutes: 2 - prefix: q1 direction: in connection: my-broker - prefix: q1 direction: out connection: my-broker 1 The connection to be used to connect this router to the message broker. The Operator will configure this connection from every router defined in this router deployment to the broker. If you only want a single connection between the router network and the broker, then configure a listener instead of a connector and have the broker establish the connection. 2 The link route configuration. It defines the incoming and outgoing links and connection to be used to connect messaging applications to the message broker. Verify that the router has established the link route to the message broker. Additional resources For more information about link routes, see Creating link routes .
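If you need to route additional queues over the same broker connection, extend the linkRoutes list rather than defining another connector. The following fragment is a sketch only; it reuses the my-broker connection from the sample above and assumes a second, hypothetical queue prefix named q2.

spec:
  ...
  linkRoutes:
    # Existing q1 entries from the sample above, plus a second prefix
    # that is link-routed over the same my-broker connection.
    - prefix: q2
      direction: in
      connection: my-broker
    - prefix: q2
      direction: out
      connection: my-broker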
[ "oc edit -f router-mesh.yaml", "apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: router-mesh spec: connectors: 1 - name: my-broker host: broker port: 5672 routeContainer: true linkRoutes: 2 - prefix: q1 direction: in connection: my-broker - prefix: q1 direction: out connection: my-broker", "oc exec router-mesh-fb6bc5797-crvb6 -it -- qdstat --linkroutes Link Routes address dir distrib status ==================================== q1 in linkBalanced active q1 out linkBalanced active" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/deploying_amq_interconnect_on_openshift/connecting-external-services-router-ocp
Chapter 7. Installing a cluster on GCP into an existing VPC
Chapter 7. Installing a cluster on GCP into an existing VPC In OpenShift Container Platform version 4.16, you can install a cluster into an existing Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 7.2. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in Google Cloud Platform (GCP). By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking for the subnets. 7.2.1. Requirements for using your VPC The union of the VPC CIDR block and the machine network CIDR must be non-empty. The subnets must be within the machine network. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide one subnet for control-plane machines and one subnet for compute machines. The subnet's CIDRs belong to the machine CIDR that you specified. 7.2.3. Division of permissions Some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. 7.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 7.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 
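As a point of reference, the following is a minimal compute machine pool sketch (not a complete install-config.yaml ) that uses n2-standard-4 , the machine type used in the sample configuration later in this chapter. With 4 vCPUs and 16 GB of memory, it comfortably exceeds the 2 vCPU and 8 GB compute minimums listed in Table 7.1.

compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      type: n2-standard-4  # 4 vCPU, 16 GB: above the compute minimums in Table 7.1
  replicas: 3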
Additional resources Optimizing storage 7.6.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 7.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D Tau T2D 7.6.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 7.2. Machine series for 64-bit ARM machines Tau T2A 7.6.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 7.6.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 7.6.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. 
To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 7.6.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 1 15 17 18 24 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 7.6.8. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. 
Prerequisites You created the install-config.yaml and complete any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 7.6.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. 
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 7.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 7.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 7.8.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 7.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 7.4. 
Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.8.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 7.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 7.5. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 7.12. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
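As a quick recap of sections 7.9 and 7.10, the post-install verification can be reduced to a few commands. This is a minimal sketch, assuming the default <installation_directory> layout described above; the oc get clusteroperators and oc get nodes checks are extra sanity checks, not part of the documented procedure.

# Log in as system:admin with the generated kubeconfig
export KUBECONFIG=<installation_directory>/auth/kubeconfig

# Confirm the identity and take a first look at cluster health
oc whoami
oc get clusteroperators
oc get nodes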
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 
26", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 
1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
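Strung together, the manual-credentials flow from sections 7.8.1 through 7.8.2.3 looks roughly like the following. This is only a sketch built from the commands shown above, run from the directory that contains the openshift-install and ccoctl binaries; the credreqs and ccoctl-output directory names are hypothetical placeholders, and the ccoctl binary name depends on the <rhel_version> you extracted.

# Extract the CredentialsRequest manifests for this cluster configuration
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=credreqs

# Create the GCP service accounts and the corresponding secret manifests
./ccoctl.rhel9 gcp create-all \
  --name=<name> \
  --region=<gcp_region> \
  --project=<gcp_project_id> \
  --credentials-requests-dir=credreqs \
  --output-dir=ccoctl-output

# Generate the installation manifests and copy in the ccoctl output
openshift-install create manifests --dir <installation_directory>
cp ccoctl-output/manifests/* <installation_directory>/manifests/
cp -a ccoctl-output/tls <installation_directory>/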
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/installing-gcp-vpc
Chapter 1. Testing Camel K integration locally
Chapter 1. Testing Camel K integration locally This chapter provides details on how to use Camel jBang to locally test a Camel K integration. Section 1.1, "Using Camel jBang to locally test a Camel K integration" 1.1. Using Camel jBang to locally test a Camel K integration Testing is one of the main operations performed repeatedly while building any application. With the advent of Camel JBang , we have a unified place that can be used to perform testing and fine tuning locally before moving to a higher environment. Testing or fine tuning an integration directly connected to a Cloud Native environment is a bit cumbersome. You must be connected to the cluster, or alternatively, you need a local Kubernetes cluster running on your machine (Minikube, Kind, etc.). Most of the time, the aspects inherent to cluster fine tuning arrive late in the development. Therefore, it is good to have a lighter way of testing our applications locally and then move to a deployment stage where we can apply the tuning that is typical of a cloud native environment. kamel local was the command used to test an Integration locally in the past. However, it overlaps with the effort done by the Camel community to have a single CLI that can be used to test any Camel application locally, independently of where it is going to be deployed. 1.1.1. Camel JBang installation Firstly, we need to install and get familiar with the jbang and camel CLIs. You can follow the official documentation about Camel JBang to install the CLIs to your local environment. After this, we can see how to test an Integration for Camel K with Camel JBang. 1.1.2. Simple application development The first application we develop is a simple one, and it defines the process you must follow when testing any Integration that is eventually deployed in Kubernetes via Camel K. First, verify the target version of Camel in your Camel K installation. With this information we can ensure that we test locally against the same version that we will later deploy in a cluster. The commands above are useful to find out which Camel version is used by the runtime in your cluster Camel K installation. Our target is Camel version 3.18.3. The easiest way to initialize a Camel route is to run the camel init command: At this stage, we can edit the file with the logic we need for our integration, or simply run it: A local Java process starts with a Camel application running. No need to create a Maven project, all the boilerplate is on Camel JBang! However, you may notice that the Camel version used is different from the one we want to target. This is because your Camel JBang is using a different version of Camel. No worries, we can re-run this application specifying the Camel version we want to run: Note Camel JBang uses a default Camel version and if you want you can use the -Dcamel.jbang.version option to explicitly set a Camel version, overwriting the default. The next step is to run it in a Kubernetes cluster where Camel K is installed. Let us use the Camel K plugin for Camel JBang here instead of the kamel CLI. This way, you can use the same JBang tooling to both run the Camel K integration locally and on the K8s cluster with the operator. The JBang plugin documentation can be found here: Camel JBang Kubernetes . You see that the Camel K operator takes care of doing the necessary transformation and building the Integration and related resources according to the expected lifecycle. Once this is live, you can follow up with the operations you usually do on a deployed Integration.
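To make the loop above concrete, here is a condensed sketch of the commands involved; the file name and the target Camel version (3.18.3) are the ones used elsewhere in this chapter.

# Scaffold a route and run it locally with the default Camel version
camel init HelloJBang.java
camel run HelloJBang.java

# Re-run it pinned to the Camel version used by the cluster's Camel K runtime
jbang run -Dcamel.jbang.version=3.18.3 camel@apache/camel run HelloJBang.java

# When it looks good, run the same file on the cluster through the Camel K JBang plugin
camel k run HelloJBang.java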
The benefit of this process is that you need not worry about the remote cluster until you are satisfied with the Integration you have tuned locally. 1.1.3. Fine tuning for Cloud Once your Integration is ready, you must take care of the kind of tuning that is related to cluster deployment. This way, you need not worry about deployment details at an early stage of the development. Or you can even have a separation of roles in your company, where the domain expert develops the integration locally and the cluster expert does the deployment at a later stage. Let us see an example of how to develop an integration that will later need some fine tuning in the cluster. import org.apache.camel.builder.RouteBuilder; public class MyJBangRoute extends RouteBuilder { @Override public void configure() throws Exception { from("file:/tmp/input") .convertBodyTo(String.class) .log("Processing file USD{headers.CamelFileName} with content: USD{body}") /* .filter(simple("USD{body} !contains 'checked'")) .log("WARN not checked: USD{body}") .to("file:/tmp/discarded") .end() .to("file:/tmp/output"); */ .choice() .when(simple("USD{body} !contains 'checked'")) .log("WARN not checked!") .to("file:/tmp/discarded") .otherwise() .to("file:/tmp/output") .end(); } } There is a process that is in charge of writing files into a directory. You must filter those files based on their content. We have left the code comments in on purpose, because they show the way we developed iteratively. We tested something locally with Camel JBang until we came to the final version of the integration. We had tested the Filter EIP, but while testing we realized we needed a Content Based Router EIP instead. This must sound like a familiar process, as it happens probably every time we develop something. Now that we are ready, we run a last round of testing locally via Camel JBang: We have tested adding files to the input directory. Ready to promote to my development cluster! Use the Camel K JBang plugin here to run the integration on K8s so you do not need to switch tooling. Run the following command: The Integration started correctly, but we are using a file system that is local to the Pod where the Integration is running. 1.1.3.1. Kubernetes fine tuning Now, let us configure our application for the cloud. Cloud Native development must take into consideration a series of challenges that are implicit in the way this new paradigm works (as a reference, see the 12 factors ). Kubernetes can sometimes be a bit difficult to fine tune: many resources to edit and check. Camel K provides a user friendly way to apply most of the tuning your application needs directly in the kamel run command (or in the modeline ). You must get familiar with Camel K Traits . In this case we want to use certain volumes we have available in our cluster. We can use the --volume option (syntactic sugar for the mount trait ) and enable them easily. We can read and write on those volumes from some other Pod : it depends on the architecture of our Integration process. You must iterate this tuning as well, but at least, now that the internals of the route have been polished locally, you can focus on deployment aspects only. And, once you are ready with this, take advantage of kamel promote to move your Integration through various stages of development . 1.1.4. How to test Kamelet locally? Another benefit of Camel JBang is the ability to test a Kamelet locally. Until now, the easiest way to test a Kamelet was to upload it to a Kubernetes cluster and to run some Integration using it via Camel K.
Let us develop a simple Kamelet for this scope. It is a Coffee source we are using to generate random coffee events. apiVersion: camel.apache.org/v1 kind: Kamelet metadata: name: coffee-source annotations: camel.apache.org/kamelet.support.level: "Stable" camel.apache.org/catalog.version: "4.7.0-SNAPSHOT" camel.apache.org/kamelet.icon: "data:image/svg+xml;base64,..." camel.apache.org/provider: "Apache Software Foundation" camel.apache.org/kamelet.group: "Coffees" camel.apache.org/kamelet.namespace: "Dataset" labels: camel.apache.org/kamelet.type: "source" spec: definition: title: "Coffee Source" description: "Produces periodic events about coffees!" type: object properties: period: title: Period description: The time interval between two events type: integer default: 5000 types: out: mediaType: application/json dependencies: - "camel:timer" - "camel:http" - "camel:kamelet" template: from: uri: "timer:coffee" parameters: period: "{{period}}" steps: - to: https://random-data-api.com/api/coffee/random_coffee - removeHeaders: pattern: '*' - to: "kamelet:sink" To test it, we can use a simple Integration to log its content: - from: uri: "kamelet:coffee-source?period=5000" steps: - log: "USD{body}" Now we can run: This is a boost while you are programming a Kamelet, because you get quick feedback without the need for a cluster. Once ready, you can continue your development as usual, uploading the Kamelet to the cluster and using it in your Camel K integrations.
[ "kamel version -a -v | grep Runtime Runtime Version: 3.8.1 kubectl get camelcatalog camel-catalog-3.8.1 -o yaml | grep camel\\.version camel.version: 3.8.1", "camel init HelloJBang.java", "camel run HelloJBang.java 2022-11-23 12:11:05.407 INFO 52841 --- [ main] org.apache.camel.main.MainSupport : Apache Camel (JBang) 3.18.1 is starting 2022-11-23 12:11:05.470 INFO 52841 --- [ main] org.apache.camel.main.MainSupport : Using Java 11.0.17 with PID 52841. Started by squake in /home/squake/workspace/jbang/camel-blog 2022-11-23 12:11:07.537 INFO 52841 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) is starting 2022-11-23 12:11:07.675 INFO 52841 --- [ main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:1) 2022-11-23 12:11:07.676 INFO 52841 --- [ main] e.camel.impl.engine.AbstractCamelContext : Started java (timer://java) 2022-11-23 12:11:07.676 INFO 52841 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) started in 397ms (build:118ms init:140ms start:139ms JVM-uptime:3s) 2022-11-23 12:11:08.705 INFO 52841 --- [ - timer://java] HelloJBang.java:14 : Hello Camel from java 2022-11-23 12:11:09.676 INFO 52841 --- [ - timer://java] HelloJBang.java:14 : Hello Camel from java", "jbang run -Dcamel.jbang.version=3.18.3 camel@apache/camel run HelloJBang.java [1] 2022-11-23 11:13:02,825 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.18.3 (camel-1) started in 70ms (build:0ms init:61ms start:9ms)", "import org.apache.camel.builder.RouteBuilder; public class MyJBangRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"file:/tmp/input\") .convertBodyTo(String.class) .log(\"Processing file USD{headers.CamelFileName} with content: USD{body}\") /* .filter(simple(\"USD{body} !contains 'checked'\")) .log(\"WARN not checked: USD{body}\") .to(\"file:/tmp/discarded\") .end() .to(\"file:/tmp/output\"); */ .choice() .when(simple(\"USD{body} !contains 'checked'\")) .log(\"WARN not checked!\") .to(\"file:/tmp/discarded\") .otherwise() .to(\"file:/tmp/output\") .end(); } }", "jbang run -Dcamel.jbang.version=3.18.3 camel@apache/camel run MyJBangRoute.java 2022-11-23 12:19:11.516 INFO 55909 --- [ main] org.apache.camel.main.MainSupport : Apache Camel (JBang) 3.18.3 is starting 2022-11-23 12:19:11.592 INFO 55909 --- [ main] org.apache.camel.main.MainSupport : Using Java 11.0.17 with PID 55909. Started by squake in /home/squake/workspace/jbang/camel-blog 2022-11-23 12:19:14.020 INFO 55909 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.3 (CamelJBang) is starting 2022-11-23 12:19:14.220 INFO 55909 --- [ main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:1) 2022-11-23 12:19:14.220 INFO 55909 --- [ main] e.camel.impl.engine.AbstractCamelContext : Started route1 (file:///tmp/input) 2022-11-23 12:19:14.220 INFO 55909 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.3 (CamelJBang) started in 677ms (build:133ms init:344ms start:200ms JVM-uptime:3s) 2022-11-23 12:19:27.757 INFO 55909 --- [le:///tmp/input] MyJBangRoute.java:11 : Processing file file_1669202367381 with content: some entry 2022-11-23 12:19:27.758 INFO 55909 --- [le:///tmp/input] MyJBangRoute:21 : WARN not checked! 
2022-11-23 12:19:32.276 INFO 55909 --- [le:///tmp/input] MyJBangRoute.java:11 : Processing file file_1669202372252 with content: some entry checked", "camel k run MyJBangRoute.java", "kamel run MyJBangRoute.java --volume my-pv-claim-input:/tmp/input --volume my-pv-claim-output:/tmp/output --volume my-pv-claim-discarded:/tmp/discarded --dev [1] 2022-11-23 11:39:26,281 INFO [route1] (Camel (camel-1) thread #1 - file:///tmp/input) Processing file file_1669203565971 with content: some entry [1] [1] 2022-11-23 11:39:26,303 INFO [route1] (Camel (camel-1) thread #1 - file:///tmp/input) WARN not checked! [1] 2022-11-23 11:39:32,322 INFO [route1] (Camel (camel-1) thread #1 - file:///tmp/input) Processing file file_1669203571981 with content: some entry checked", "apiVersion: camel.apache.org/v1 kind: Kamelet metadata: name: coffee-source annotations: camel.apache.org/kamelet.support.level: \"Stable\" camel.apache.org/catalog.version: \"4.7.0-SNAPSHOT\" camel.apache.org/kamelet.icon: \"data:image/svg+xml;base64,...\" camel.apache.org/provider: \"Apache Software Foundation\" camel.apache.org/kamelet.group: \"Coffees\" camel.apache.org/kamelet.namespace: \"Dataset\" labels: camel.apache.org/kamelet.type: \"source\" spec: definition: title: \"Coffee Source\" description: \"Produces periodic events about coffees!\" type: object properties: period: title: Period description: The time interval between two events type: integer default: 5000 types: out: mediaType: application/json dependencies: - \"camel:timer\" - \"camel:http\" - \"camel:kamelet\" template: from: uri: \"timer:coffee\" parameters: period: \"{{period}}\" steps: - to: https://random-data-api.com/api/coffee/random_coffee - removeHeaders: pattern: '*' - to: \"kamelet:sink\"", "- from: uri: \"kamelet:coffee-source?period=5000\" steps: - log: \"USD{body}\"", "camel run --local-kamelet-dir=</path/to/local/kamelets/dir> coffee-integration.yaml 2022-11-24 11:27:29.634 INFO 39527 --- [ main] org.apache.camel.main.MainSupport : Apache Camel (JBang) 3.18.1 is starting 2022-11-24 11:27:29.706 INFO 39527 --- [ main] org.apache.camel.main.MainSupport : Using Java 11.0.17 with PID 39527. Started by squake in /home/squake/workspace/jbang/camel-blog 2022-11-24 11:27:31.391 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) is starting 2022-11-24 11:27:31.590 INFO 39527 --- [ main] org.apache.camel.main.BaseMainSupport : Property-placeholders summary 2022-11-24 11:27:31.590 INFO 39527 --- [ main] org.apache.camel.main.BaseMainSupport : [coffee-source.kamelet.yaml] period=5000 2022-11-24 11:27:31.590 INFO 39527 --- [ main] org.apache.camel.main.BaseMainSupport : [coffee-source.kamelet.yaml] templateId=coffee-source 2022-11-24 11:27:31.591 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:2) 2022-11-24 11:27:31.591 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Started route1 (kamelet://coffee-source) 2022-11-24 11:27:31.591 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Started coffee-source-1 (timer://coffee) 2022-11-24 11:27:31.591 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) started in 1s143ms (build:125ms init:819ms start:199ms JVM-uptime:2s) 2022-11-24 11:27:33.297 INFO 39527 --- [ - timer://coffee] coffee-integration.yaml:4 : {\"id\":3648,\"uid\":\"712d4f54-3314-4129-844e-9915002ecbb7\",\"blend_name\":\"Winter Cowboy\",\"origin\":\"Lekempti, Ethiopia\",\"variety\":\"Agaro\",\"notes\":\"delicate, juicy, sundried tomato, fresh bread, lemonade\",\"intensifier\":\"juicy\"}" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/testing_guide_camel_k/testing-camel-k-integration
Chapter 5. Example Script
Chapter 5. Example Script
[ "#!/usr/bin/env python
import sys
import requests
from datetime import datetime, timedelta

API_HOST = 'https://access.redhat.com/hydra/rest/securitydata'

PROXIES = {}
# uncomment lines below to specify proxy server
# HTTPS_PROXY = \"http://yourproxy.example.com:8000\"
# PROXIES = { \"https\" : HTTPS_PROXY }


def get_data(query):
    full_query = API_HOST + query
    r = requests.get(full_query, proxies=PROXIES)

    if r.status_code != 200:
        print('ERROR: Invalid request; returned {} for the following '
              'query:\\n{}'.format(r.status_code, full_query))
        sys.exit(1)

    if not r.json():
        print('No data returned with the following query:')
        print(full_query)
        sys.exit(0)

    return r.json()


# Get a list of issues and their impacts for RHSA-2022:1988
endpoint = '/cve.json'
params = 'advisory=RHSA-2022:1988'

data = get_data(endpoint + '?' + params)

for cve in data:
    print(cve['CVE'], cve['severity'])

print('-----')

# Get a list of kernel advisories for the last 30 days and display
# the packages that they provided.
endpoint = '/csaf.json'
date = datetime.now() - timedelta(days=30)
params = 'package=kernel&after=' + str(date.date())

data = get_data(endpoint + '?' + params)

kernel_advisories = []
for advisory in data:
    print(advisory['RHSA'], advisory['severity'], advisory['released_on'])
    print('-', '\\n- '.join(advisory['released_packages']))
    kernel_advisories.append(advisory['RHSA'])

print('-----')

# From the list of advisories saved in the previous example (as
# `kernel_advisories`), get a list of affected products for each advisory.
endpoint = '/csaf/'

for advisory in kernel_advisories:
    data = get_data(endpoint + advisory + '.json')
    print(advisory)
    for product_branch in data['product_tree']['branches']:
        for inner_branch in product_branch['branches'][0]['branches']:
            print('-', inner_branch['name'])" ]
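The same queries that the script performs can be reproduced from the command line for a quick check before scripting. A sketch with curl, using the endpoints shown above; the example date is arbitrary, and the jq filters are an optional extra that assumes jq is installed.

# CVEs fixed by a specific advisory, with their impact
curl -s 'https://access.redhat.com/hydra/rest/securitydata/cve.json?advisory=RHSA-2022:1988' | jq '.[] | {CVE, severity}'

# Kernel advisories released after a given date
curl -s 'https://access.redhat.com/hydra/rest/securitydata/csaf.json?package=kernel&after=2022-05-01' | jq '.[].RHSA'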
https://docs.redhat.com/en/documentation/red_hat_security_data_api/1.0/html/red_hat_security_data_api/example_script
20.2.5. Using an FCP-attached SCSI DVD Drive
20.2.5. Using an FCP-attached SCSI DVD Drive This requires a SCSI DVD drive attached to an FCP-to-SCSI bridge, which is in turn connected to an FCP adapter in your System z machine. The FCP adapter has to be configured and available in your LPAR. Insert your Red Hat Enterprise Linux for System z DVD into the DVD drive. Double-click Load . In the dialog box that follows, select SCSI as the Load type . As Load address fill in the device number of the FCP channel connected with the FCP-to-SCSI bridge. As World wide port name fill in the WWPN of the FCP-to-SCSI bridge as a 16-digit hexadecimal number. As Logical unit number fill in the LUN of the DVD drive as a 16-digit hexadecimal number. As Boot program selector fill in the number 1 to select the boot entry on the Red Hat Enterprise Linux for System z DVD. Leave the Boot record logical block address as 0 and the Operating system specific load parameters empty. Click the OK button.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-s390-steps-boot-Installing_in_an_LPAR-SCSI-DVD
Workloads APIs
Workloads APIs OpenShift Container Platform 4.17 Reference guide for workloads APIs Red Hat OpenShift Documentation Team
[ "\"postCommit\": { \"script\": \"rake test --verbose\", }", "The above is a convenient form which is equivalent to:", "\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }", "\"postCommit\": { \"command\": [\"rake\", \"test\", \"--verbose\"] }", "Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.", "\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }", "This form is only useful if the image entrypoint can handle arguments.", "\"postCommit\": { \"script\": \"rake test USD1\", \"args\": [\"--verbose\"] }", "This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be \"/bin/sh\" and USD1, USD2, etc, are the positional arguments from Args.", "\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }", "This form is equivalent to appending the arguments to the Command slice." ]
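Rather than editing the postCommit stanza by hand, the same hook forms can usually be set with the oc set build-hook helper in the OpenShift CLI. A sketch, where mybc is a hypothetical BuildConfig name:

# Shell script form
oc set build-hook bc/mybc --post-commit --script="rake test --verbose"

# Command form, overriding the image entrypoint
oc set build-hook bc/mybc --post-commit --command -- rake test --verbose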
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/workloads_apis/index
Chapter 5. Preparing for data loss with IdM backups
Chapter 5. Preparing for data loss with IdM backups IdM provides the ipa-backup utility to backup IdM data, and the ipa-restore utility to restore servers and data from those backups. Note Red Hat recommends running backups as often as necessary on a hidden replica with all server roles installed, especially the Certificate Authority (CA) role if the environment uses the integrated IdM CA. See Installing an IdM hidden replica . 5.1. IdM backup types With the ipa-backup utility, you can create two types of backups: Full-server backup Contains all server configuration files related to IdM, and LDAP data in LDAP Data Interchange Format (LDIF) files IdM services must be offline . Suitable for rebuilding an IdM deployment from scratch. Data-only backup Contains LDAP data in LDIF files and the replication changelog IdM services can be online or offline . Suitable for restoring IdM data to a state in the past 5.2. Naming conventions for IdM backup files By default, IdM stores backups as .tar archives in subdirectories of the /var/lib/ipa/backup/ directory. The archives and subdirectories follow these naming conventions: Full-server backup An archive named ipa-full.tar in a directory named ipa-full- <YEAR-MM-DD-HH-MM-SS> , with the time specified in GMT time. Data-only backup An archive named ipa-data.tar in a directory named ipa-data- <YEAR-MM-DD-HH-MM-SS> , with the time specified in GMT time. Note Uninstalling an IdM server does not automatically remove any backup files. 5.3. Considerations when creating a backup The important behaviors and limitations of the ipa-backup command include the following: By default, the ipa-backup utility runs in offline mode, which stops all IdM services. The utility automatically restarts IdM services after the backup is finished. A full-server backup must always run with IdM services offline, but a data-only backup can be performed with services online. By default, the ipa-backup utility creates backups on the file system containing the /var/lib/ipa/backup/ directory. Red Hat recommends creating backups regularly on a file system separate from the production filesystem used by IdM, and archiving the backups to a fixed medium, such as tape or optical storage. Consider performing backups on hidden replicas . IdM services can be shut down on hidden replicas without affecting IdM clients. The ipa-backup utility checks if all of the services used in your IdM cluster, such as a Certificate Authority (CA), Domain Name System (DNS), and Key Recovery Agent (KRA), are installed on the server where you are running the backup. If the server does not have all these services installed, the ipa-backup utility exits with a warning, because backups taken on that host would not be sufficient for a full cluster restoration. For example, if your IdM deployment uses an integrated Certificate Authority (CA), a backup run on a non-CA replica will not capture CA data. Red Hat recommends verifying that the replica where you perform an ipa-backup has all of the IdM services used in the cluster installed. You can bypass the IdM server role check with the ipa-backup --disable-role-check command, but the resulting backup will not contain all the data necessary to restore IdM fully. 5.4. Creating an IdM backup Create a full-server and data-only backup in offline and online modes using the ipa-backup command. Prerequisites You must have root privileges to run the ipa-backup utility. 
Procedure To create a full-server backup in offline mode, use the ipa-backup utility without additional options. To create an offline data-only backup, specify the --data option. To create a full-server backup that includes IdM log files, use the --logs option. To create a data-only backup while IdM services are running, specify both --data and --online options. Note If the backup fails due to insufficient space in the /tmp directory, use the TMPDIR environment variable to change the destination for temporary files created by the backup process: Verification Ensure the backup directory contains an archive with the backup. Additional resources ipa-backup command fails to finish (Red Hat Knowledgebase) 5.5. Creating a GPG2-encrypted IdM backup You can create encrypted backups using GNU Privacy Guard (GPG) encryption. The following procedure creates an IdM backup and encrypts it using a GPG2 key. Prerequisites You have created a GPG2 key. See Creating a GPG2 key . Procedure Create a GPG-encrypted backup by specifying the --gpg option. Verification Ensure that the backup directory contains an encrypted archive with a .gpg file extension. Additional resources Creating a backup . 5.6. Creating a GPG2 key The following procedure describes how to generate a GPG2 key to use with encryption utilities. Prerequisites You need root privileges. Procedure Install and configure the pinentry utility. Create a key-input file used for generating a GPG keypair with your preferred details. For example: Optional: By default, GPG2 stores its keyring in the ~/.gnupg directory. To use a custom keyring location, set the GNUPGHOME environment variable to a directory that is only accessible by root. Generate a new GPG2 key based on the contents of the key-input file. Enter a passphrase to protect the GPG2 key. You use this passphrase to access the private key for decryption. Confirm the correct passphrase by entering it again. Verify that the new GPG2 key was created successfully. Verification List the GPG keys on the server. Additional resources GNU Privacy Guard
[ "ll /var/lib/ipa/backup/ ipa-full -2021-01-29-12-11-46 total 3056 -rw-r--r--. 1 root root 158 Jan 29 12:11 header -rw-r--r--. 1 root root 3121511 Jan 29 12:11 ipa-full.tar", "ll /var/lib/ipa/backup/ ipa-data -2021-01-29-12-14-23 total 1072 -rw-r--r--. 1 root root 158 Jan 29 12:14 header -rw-r--r--. 1 root root 1090388 Jan 29 12:14 ipa-data.tar", "ipa-backup Preparing backup on server.example.com Stopping IPA services Backing up ipaca in EXAMPLE-COM to LDIF Backing up userRoot in EXAMPLE-COM to LDIF Backing up EXAMPLE-COM Backing up files Starting IPA service Backed up to /var/lib/ipa/backup/ipa-full-2020-01-14-11-26-06 The ipa-backup command was successful", "ipa-backup --data", "ipa-backup --logs", "ipa-backup --data --online", "TMPDIR=/new/location ipa-backup", "ls /var/lib/ipa/backup/ipa-full-2020-01-14-11-26-06 header ipa-full.tar", "ipa-backup --gpg Preparing backup on server.example.com Stopping IPA services Backing up ipaca in EXAMPLE-COM to LDIF Backing up userRoot in EXAMPLE-COM to LDIF Backing up EXAMPLE-COM Backing up files Starting IPA service Encrypting /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00/ipa-full.tar Backed up to /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00 The ipa-backup command was successful", "ls /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00 header ipa-full.tar.gpg", "dnf install pinentry mkdir ~/.gnupg -m 700 echo \"pinentry-program /usr/bin/pinentry-curses\" >> ~/.gnupg/gpg-agent.conf", "cat >key-input <<EOF %echo Generating a standard key Key-Type: RSA Key-Length: 2048 Name-Real: GPG User Name-Comment: first key Name-Email: [email protected] Expire-Date: 0 %commit %echo Finished creating standard key EOF", "export GNUPGHOME= /root/backup mkdir -p USDGNUPGHOME -m 700", "gpg2 --batch --gen-key key-input", "┌──────────────────────────────────────────────────────┐ │ Please enter the passphrase to │ │ protect your new key │ │ │ │ Passphrase: <passphrase> │ │ │ │ <OK> <Cancel> │ └──────────────────────────────────────────────────────┘", "┌──────────────────────────────────────────────────────┐ │ Please re-enter this passphrase │ │ │ │ Passphrase: <passphrase> │ │ │ │ <OK> <Cancel> │ └──────────────────────────────────────────────────────┘", "gpg: keybox '/root/backup/pubring.kbx' created gpg: Generating a standard key gpg: /root/backup/trustdb.gpg: trustdb created gpg: key BF28FFA302EF4557 marked as ultimately trusted gpg: directory '/root/backup/openpgp-revocs.d' created gpg: revocation certificate stored as '/root/backup/openpgp-revocs.d/8F6FCF10C80359D5A05AED67BF28FFA302EF4557.rev' gpg: Finished creating standard key", "gpg2 --list-secret-keys gpg: checking the trustdb gpg: marginals needed: 3 completes needed: 1 trust model: pgp gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u / root /backup/pubring.kbx ------------------------ sec rsa2048 2020-01-13 [SCEA] 8F6FCF10C80359D5A05AED67BF28FFA302EF4557 uid [ultimate] GPG User (first key) <[email protected]>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/preparing_for_disaster_recovery_with_identity_management/preparing-for-data-loss-with-idm-backups_preparing-for-disaster-recovery
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_vm_environments/providing-feedback
Administration guide for Red Hat Developer Hub
Administration guide for Red Hat Developer Hub Red Hat Developer Hub 1.3 Red Hat Customer Content Services
[ "kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: app-config-rhdh.yaml: | app: title: Red Hat Developer Hub", "... other Red Hat Developer Hub Helm Chart configurations upstream: backstage: extraAppConfig: - configMapRef: app-config-rhdh filename: app-config-rhdh.yaml ... other Red Hat Developer Hub Helm Chart configurations", "kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | app: title: Red Hat Developer Hub baseUrl: <RHDH_URL> 1 backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: \"USD{BACKEND_SECRET}\" 2 baseUrl: <RHDH_URL> 3 cors: origin: <RHDH_URL> 4", "node -p 'require(\"crypto\").randomBytes(24).toString(\"base64\")'", "apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: developer-hub spec: application: appConfig: mountPath: /opt/app-root/src configMaps: - name: app-config-rhdh extraEnvs: secrets: - name: secrets-rhdh extraFiles: mountPath: /opt/app-root/src replicas: 1 route: enabled: true database: enableLocalDb: true", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <crt-secret> 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # EOF", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <cred-secret> 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: \"<db-port>\" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: <backstage-instance-name> spec: database: enableLocalDb: false 1 application: extraFiles: mountPath: <path> # e g /opt/app-root/src secrets: - name: <crt-secret> 2 key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in <crt-secret> Secret extraEnvs: secrets: - name: <cred-secret> 3 #", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <crt-secret> 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # EOF", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <cred-secret> 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: \"<db-port>\" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. 
/opt/app-root/src/postgres-crt.pem 4 EOF", "upstream: postgresql: enabled: false # disable PostgreSQL instance creation 1 auth: existingSecret: <cred-secret> # inject credentials secret to Backstage 2 backstage: appConfig: backend: database: connection: # configure Backstage DB connection parameters host: USD{POSTGRES_HOST} port: USD{POSTGRES_PORT} user: USD{POSTGRES_USER} password: USD{POSTGRES_PASSWORD} ssl: rejectUnauthorized: true, ca: USDfile: /opt/app-root/src/postgres-ca.pem key: USDfile: /opt/app-root/src/postgres-key.key cert: USDfile: /opt/app-root/src/postgres-crt.pem extraEnvVarsSecrets: - <cred-secret> # inject credentials secret to Backstage 3 extraEnvVars: - name: BACKEND_SECRET valueFrom: secretKeyRef: key: backend-secret name: '{{ include \"janus-idp.backend-secret-name\" USD }}' extraVolumeMounts: - mountPath: /opt/app-root/src/dynamic-plugins-root name: dynamic-plugins-root - mountPath: /opt/app-root/src/postgres-crt.pem name: postgres-crt # inject TLS certificate to Backstage cont. 4 subPath: postgres-crt.pem - mountPath: /opt/app-root/src/postgres-ca.pem name: postgres-ca # inject CA certificate to Backstage cont. 5 subPath: postgres-ca.pem - mountPath: /opt/app-root/src/postgres-key.key name: postgres-key # inject TLS private key to Backstage cont. 6 subPath: postgres-key.key extraVolumes: - ephemeral: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: dynamic-plugins-root - configMap: defaultMode: 420 name: dynamic-plugins optional: true name: dynamic-plugins - name: dynamic-plugins-npmrc secret: defaultMode: 420 optional: true secretName: dynamic-plugins-npmrc - name: postgres-crt secret: secretName: <crt-secret> 7 #", "helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.3.5", "port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port>", "port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432", "#!/bin/bash to_host=<db-service-host> 1 to_port=5432 2 to_user=postgres 3 from_host=127.0.0.1 4 from_port=15432 5 from_user=postgres 6 allDB=(\"backstage_plugin_app\" \"backstage_plugin_auth\" \"backstage_plugin_catalog\" \"backstage_plugin_permission\" \"backstage_plugin_scaffolder\" \"backstage_plugin_search\") 7 for db in USD{!allDB[@]}; do db=USD{allDB[USDdb]} echo Copying database: USDdb PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -c \"create database USDdb;\" pg_dump -h USDfrom_host -p USDfrom_port -U USDfrom_user -d USDdb | PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -d USDdb done", "/bin/bash TO_PSW=<destination-db-password> /path/to/db_copy.sh 1", "spec: database: enableLocalDb: false application: # extraFiles: secrets: - name: <crt-secret> key: postgres-crt.pem # key name as in <crt-secret> Secret extraEnvs: secrets: - name: <cred-secret>", "-n developer-hub delete pvc <local-psql-pvc-name>", "get pods -n <your-namespace>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - '*' resources: - pods - configmaps - services - deployments - replicasets - horizontalpodautoscalers - ingresses - statefulsets - limitranges - resourcequotas - daemonsets verbs: - get - list - watch #", "kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: 
./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic disabled: false 1 - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes disabled: false 2 #", "kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | # catalog: rules: - allow: [Component, System, API, Resource, Location] providers: kubernetes: openshift: cluster: openshift processor: namespaceOverride: default defaultOwner: guests schedule: frequency: seconds: 30 timeout: seconds: 5 kubernetes: serviceLocatorMethod: type: 'multiTenant' clusterLocatorMethods: - type: 'config' clusters: - url: <target-cluster-api-server-url> 1 name: openshift authProvider: 'serviceAccount' skipTLSVerify: false 2 skipMetricsLookup: true dashboardUrl: <target-cluster-console-url> 3 dashboardApp: openshift serviceAccountToken: USD{K8S_SERVICE_ACCOUNT_TOKEN} 4 caData: USD{K8S_CONFIG_CA_DATA} 5 #", "-n rhdh-operator get pods -w", "upstream: backstage: extraEnvVars: - name: HTTP_PROXY value: '<http_proxy_url>' - name: HTTPS_PROXY value: '<https_proxy_url>' - name: NO_PROXY value: '<no_proxy_settings>'", "upstream: backstage: extraEnvVars: - name: HTTP_PROXY value: 'http://10.10.10.105:3128' - name: HTTPS_PROXY value: 'http://10.10.10.106:3128' - name: NO_PROXY value: 'localhost,example.org'", "Other fields omitted deployment.yaml: |- apiVersion: apps/v1 kind: Deployment spec: template: spec: # Other fields omitted initContainers: - name: install-dynamic-plugins # command omitted env: - name: NPM_CONFIG_USERCONFIG value: /opt/app-root/src/.npmrc.dynamic-plugins - name: HTTP_PROXY value: 'http://10.10.10.105:3128' - name: HTTPS_PROXY value: 'http://10.10.10.106:3128' - name: NO_PROXY value: 'localhost,example.org' # Other fields omitted containers: - name: backstage-backend # Other fields omitted env: - name: APP_CONFIG_backend_listen_port value: \"7007\" - name: HTTP_PROXY value: 'http://10.10.10.105:3128' - name: HTTPS_PROXY value: 'http://10.10.10.106:3128' - name: NO_PROXY value: 'localhost,example.org'", "spec: # Other fields omitted application: extraEnvs: envs: - name: HTTP_PROXY value: 'http://10.10.10.105:3128' - name: HTTPS_PROXY value: 'http://10.10.10.106:3128' - name: NO_PROXY value: 'localhost,example.org'", "ui:options: allowedHosts: - github.com", "apiVersion: scaffolder.backstage.io/v1beta3 kind: Template metadata: name: template-name 1 title: Example template 2 description: An example template for v1beta3 scaffolder. 
3 spec: owner: backstage/techdocs-core 4 type: service 5 parameters: 6 - title: Fill in some steps required: - name properties: name: title: Name type: string description: Unique name of the component owner: title: Owner type: string description: Owner of the component - title: Choose a location required: - repoUrl properties: repoUrl: title: Repository Location type: string steps: 7 - id: fetch-base name: Fetch Base action: fetch:template # output: 8 links: - title: Repository 9 url: USD{{ steps['publish'].output.remoteUrl }} - title: Open in catalog 10 icon: catalog entityRef: USD{{ steps['register'].output.entityRef }}", "catalog: rules: - allow: [Template] 1 locations: - type: url 2 target: https://<repository_url>/example-template.yaml 3", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <rhdh_bucket_claim_name> spec: generateBucketName: <rhdh_bucket_claim_name> storageClassName: openshift-storage.noobaa.io", "upstream: backstage: extraEnvVarsSecrets: - <rhdh_bucket_claim_name> extraEnvVarsCM: - <rhdh_bucket_claim_name>", "global: dynamic: includes: - 'dynamic-plugins.default.yaml' plugins: - disabled: false package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic pluginConfig: techdocs: builder: external generator: runIn: local publisher: awsS3: bucketName: 'USD{BUCKET_NAME}' credentials: accessKeyId: 'USD{AWS_ACCESS_KEY_ID}' secretAccessKey: 'USD{AWS_SECRET_ACCESS_KEY}' endpoint: 'https://USD{BUCKET_HOST}' region: 'USD{BUCKET_REGION}' s3ForcePathStyle: true type: awsS3", "apiVersion: objectbucket.io/v1alpha1 kind: Backstage metadata: name: <name> spec: application: extraEnvs: configMaps: - name: <rhdh_bucket_claim_name> secrets: - name: <rhdh_bucket_claim_name>", "kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - disabled: false package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic pluginConfig: techdocs: builder: external generator: runIn: local publisher: awsS3: bucketName: 'USD{BUCKET_NAME}' credentials: accessKeyId: 'USD{AWS_ACCESS_KEY_ID}' secretAccessKey: 'USD{AWS_SECRET_ACCESS_KEY}' endpoint: 'https://USD{BUCKET_HOST}' region: 'USD{BUCKET_REGION}' s3ForcePathStyle: true type: awsS3", "Prepare REPOSITORY_URL='https://github.com/org/repo' git clone USDREPOSITORY_URL cd repo Install @techdocs/cli, mkdocs and mkdocs plugins npm install -g @techdocs/cli pip install \"mkdocs-techdocs-core==1.*\" Generate techdocs-cli generate --no-docker Publish techdocs-cli publish --publisher-type awsS3 --storage-name <bucket/container> --entity <Namespace/Kind/Name>", "git clone <https://path/to/docs-repository/>", "npm install -g npx", "npm install -g @techdocs/cli", "pip install \"mkdocs-techdocs-core==1.*\"", "npx @techdocs/cli generate --no-docker --source-dir <path_to_repo> --output-dir ./site", "npx @techdocs/cli publish --publisher-type <awsS3|googleGcs> --storage-name <bucket/container> --entity <namespace/kind/name> --directory ./site", "name: Publish TechDocs Site on: push: branches: [main] # You can even set it to run only when TechDocs related files are updated. # paths: # - \"docs/**\" # - \"mkdocs.yml\" jobs: publish-techdocs-site: runs-on: ubuntu-latest # The following secrets are required in your CI environment for publishing files to AWS S3. # e.g. You can use GitHub Organization secrets to set them for all existing and new repositories. 
env: TECHDOCS_S3_BUCKET_NAME: USD{{ secrets.TECHDOCS_S3_BUCKET_NAME }} AWS_ACCESS_KEY_ID: USD{{ secrets.AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: USD{{ secrets.AWS_SECRET_ACCESS_KEY }} AWS_REGION: USD{{ secrets.AWS_REGION }} ENTITY_NAMESPACE: 'default' ENTITY_KIND: 'Component' ENTITY_NAME: 'my-doc-entity' # In a Software template, Scaffolder will replace {{cookiecutter.component_id | jsonify}} # with the correct entity name. This is same as metadata.name in the entity's catalog-info.yaml # ENTITY_NAME: '{{ cookiecutter.component_id | jsonify }}' steps: - name: Checkout code uses: actions/checkout@v3 - uses: actions/setup-node@v3 - uses: actions/setup-python@v4 with: python-version: '3.9' - name: Install techdocs-cli run: sudo npm install -g @techdocs/cli - name: Install mkdocs and mkdocs plugins run: python -m pip install mkdocs-techdocs-core==1.* - name: Generate docs site run: techdocs-cli generate --no-docker --verbose - name: Publish docs site run: techdocs-cli publish --publisher-type awsS3 --storage-name USDTECHDOCS_S3_BUCKET_NAME --entity USDENTITY_NAMESPACE/USDENTITY_KIND/USDENTITY_NAME", "apiVersion: rhdh.redhat.com/v1alpha2 kind: Backstage metadata: name: developer-hub spec: deployment: patch: spec: template:", "apiVersion: rhdh.redhat.com/v1alpha2 kind: Backstage metadata: name: developer-hub spec: deployment: patch: spec: template: metadata: labels: my: true", "apiVersion: rhdh.redhat.com/v1alpha2 kind: Backstage metadata: name: developer-hub spec: deployment: patch: spec: template: spec: containers: - name: backstage-backend volumeMounts: - mountPath: /my/path name: my-volume volumes: - ephemeral: volumeClaimTemplate: spec: storageClassName: \"special\" name: my-volume", "apiVersion: rhdh.redhat.com/v1alpha2 kind: Backstage metadata: name: developer-hub spec: deployment: patch: spec: template: spec: volumes: - USDpatch: replace name: dynamic-plugins-root persistentVolumeClaim: claimName: dynamic-plugins-root", "apiVersion: rhdh.redhat.com/v1alpha2 kind: Backstage metadata: name: developer-hub spec: deployment: patch: spec: template: spec: containers: - name: backstage-backend resources: requests: cpu: 250m", "apiVersion: rhdh.redhat.com/v1alpha2 kind: Backstage metadata: name: developer-hub spec: deployment: patch: spec: template: spec: containers: - name: my-sidecar image: quay.io/my-org/my-sidecar:latest" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html-single/administration_guide_for_red_hat_developer_hub/index
Appendix B. Accessing Red Hat Documentation
Appendix B. Accessing Red Hat Documentation B.1. Product Documentation Red Hat Product Documentation located at https://access.redhat.com/site/documentation/ serves as a central source of information. It is currently translated in 22 languages and for each product, it provides different kinds of books from release and technical notes to installation, user, and reference guides in HTML, PDF, and EPUB formats. The following is a brief list of documents that are directly or indirectly relevant to this book: The Red Hat Enterprise Linux 7 System Administrator's Guide , available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/System_Administrators_Guide/index.html , contains detailed information about various system components, for instance the GRUB 2 boot loader, package management, systemd , or printer configuration. The Red Hat Enterprise Linux 7 Installation Guide , available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/index.html , contains detailed information about installing Red Hat Enterprise Linux 7 and using the Anaconda installer. The Red Hat Enterprise Linux 7 Migration Planning Guide , available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Migration_Planning_Guide/index.html , contains an overview of major changes in behavior and compatibility between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. The Migration Planning Guide also introduces the tools provided by Red Hat to assist with upgrades to Red Hat Enterprise Linux 7. The Red Hat Enterprise Linux 7 Networking Guide , available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Networking_Guide/index.html , contains information about configuration and administration of networking for Red Hat Enterprise Linux 7. The Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide , available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/ , contains information about installing, configuring, and managing Red Hat Enterprise Linux virtualization.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/access-red-hat-documentation
Preface
Preface Use the Troubleshooting Ansible Automation Platform guide to troubleshoot your Ansible Automation Platform installation.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/troubleshooting_ansible_automation_platform/pr01
Chapter 1. Introduction to Control Groups (Cgroups)
Chapter 1. Introduction to Control Groups (Cgroups) 1.1. What are Control Groups The control groups , abbreviated as cgroups in this guide, are a Linux kernel feature that allows you to allocate resources - such as CPU time, system memory, network bandwidth, or combinations of these resources - among hierarchically ordered groups of processes running on a system. By using cgroups, system administrators gain fine-grained control over allocating, prioritizing, denying, managing, and monitoring system resources. Hardware resources can be smartly divided up among applications and users, increasing overall efficiency. Control Groups provide a way to hierarchically group and label processes, and to apply resource limits to them. Traditionally, all processes received similar amounts of system resources that the administrator could modulate with the process niceness value. With this approach, applications that involved a large number of processes received more resources than applications with few processes, regardless of the relative importance of these applications. Red Hat Enterprise Linux 7 moves the resource management settings from the process level to the application level by binding the system of cgroup hierarchies with the systemd unit tree. Therefore, you can manage system resources with systemctl commands, or by modifying systemd unit files. See Chapter 2, Using Control Groups for details. In previous versions of Red Hat Enterprise Linux, system administrators built custom cgroup hierarchies with the use of the cgconfig command from the libcgroup package. This package is now deprecated, and it is not recommended to use it since it can easily create conflicts with the default cgroup hierarchy. However, libcgroup is still available to cover certain specific cases, where systemd is not yet applicable, most notably for using the net-prio subsystem. See Chapter 3, Using libcgroup Tools . The aforementioned tools provide a high-level interface to interact with cgroup controllers (also known as subsystems) in the Linux kernel. The main cgroup controllers for resource management are cpu , memory , and blkio ; see Available Controllers in Red Hat Enterprise Linux 7 for the list of controllers enabled by default. For a detailed description of resource controllers and their configurable parameters, see Controller-Specific Kernel Documentation .
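As a brief, hedged illustration of the systemd-based approach described above, the following commands show how the cpu and memory controller settings of a unit might be adjusted and inspected with systemctl; httpd.service is only an example unit, not one required by this guide:

    # Persistently cap the httpd service through the cpu and memory controllers
    systemctl set-property httpd.service CPUShares=600 MemoryLimit=500M
    # Inspect the resulting unit properties
    systemctl show -p CPUShares -p MemoryLimit httpd.service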
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/chap-introduction_to_control_groups
5.10. Determining Device Mapper Entries with the dmsetup Command
5.10. Determining Device Mapper Entries with the dmsetup Command You can use the dmsetup command to find out which device mapper entries match the multipathed devices. The following command displays all the device mapper devices and their major and minor numbers. The minor numbers determine the name of the dm device. For example, a minor number of 3 corresponds to the multipathed device /dev/dm-3 .
[ "dmsetup ls mpathd (253:4) mpathep1 (253:12) mpathfp1 (253:11) mpathb (253:3) mpathgp1 (253:14) mpathhp1 (253:13) mpatha (253:2) mpathh (253:9) mpathg (253:8) VolGroup00-LogVol01 (253:1) mpathf (253:7) VolGroup00-LogVol00 (253:0) mpathe (253:6) mpathbp1 (253:10) mpathd (253:5)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/dmsetup_queries
8.207. spice-xpi
8.207. spice-xpi 8.207.1. RHEA-2013:1667 - spice-xpi bug fix and enhancement update Updated spice-xpi packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The spice-xpi packages provide the Simple Protocol for Independent Computing Environments (SPICE) extension for Mozilla that allows the SPICE client to be used from a web browser. Bug Fix BZ# 882339 Prior to this update, the spice-xpi browser plug-in did not remove the /tmp/spicec-XXXXXX/spice-foreign socket and the /tmp/spicec-XXXXXX/ directory, so they were still present after the client had exited. This bug has been fixed, and the browser plug-in now removes the above-mentioned file and directory after the client exits. Enhancement BZ# 994613 Proxy support for SPICE connections has been added to the spice-xpi browser plug-in. With this update, spice-xpi is now able to pass the proxy setting to the SPICE client it spawns, for example, when opening a console from the Red Hat Enterprise Virtualization Manager portal. Users of spice-xpi are advised to upgrade to these updated packages, which fix this bug and add this enhancement. After installing the update, Firefox must be restarted for the changes to take effect.
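For reference, applying this erratum on a registered Red Hat Enterprise Linux 6 host is an ordinary package update; the following is a generic yum invocation rather than a command taken from the advisory itself:

    # Install the updated package, then restart Firefox for the change to take effect
    yum update spice-xpi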
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/spice-xpi
5.3. Deployment
5.3. Deployment 389-ds-base component, BZ# 878111 The ns-slapd utility terminates unexpectedly if it cannot rename the dirsrv- <instance> log files in the /var/log/ directory due to incorrect permissions on the directory. cpuspeed component, BZ# 626893 Some HP Proliant servers may report incorrect CPU frequency values in /proc/cpuinfo or /sys/device/system/cpu/*/cpufreq . This is due to the firmware manipulating the CPU frequency without providing any notification to the operating system. To avoid this ensure that the HP Power Regulator option in the BIOS is set to OS Control . An alternative available on more recent systems is to set Collaborative Power Control to Enabled . releng component, BZ# 644778 Some packages in the Optional repositories on RHN have multilib file conflicts. Consequently, these packages cannot have both the primary architecture (for example, x86_64) and secondary architecture (for example, i686) copies of the package installed on the same machine simultaneously. To work around this issue, install only one copy of the conflicting package. grub component, BZ# 695951 On certain UEFI-based systems, you may need to type BOOTX64 rather than bootx64 to boot the installer due to case sensitivity issues. grub component, BZ# 698708 When rebuilding the grub package on the x86_64 architecture, the glibc-static.i686 package must be used. Using the glibc-static.x86_64 package will not meet the build requirements.
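As a hedged sketch of the multilib workaround described above, the following yum commands keep only the primary-architecture copy of a conflicting package; <package_name> is a placeholder, not a specific package from these repositories:

    # Remove the secondary-architecture copy, if it is installed
    yum remove <package_name>.i686
    # Install (or keep) only the primary-architecture copy
    yum install <package_name>.x86_64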
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/deployment_issues
3.11. Enhanced Graphics Power Management
3.11. Enhanced Graphics Power Management Red Hat Enterprise Linux 7 saves power on graphics and display devices by eliminating several sources of unnecessary consumption. LVDS reclocking Low-voltage differential signaling (LVDS) is a system for carrying electronic signals over copper wire. One significant application of the system is to transmit pixel information to liquid crystal display (LCD) screens in notebook computers. All displays have a refresh rate - the rate at which they receive fresh data from a graphics controller and redraw the image on the screen. Typically, the screen receives fresh data sixty times per second (a frequency of 60 Hz). When a screen and graphics controller are linked by LVDS, the LVDS system uses power on every refresh cycle. When idle, the refresh rate of many LCD screens can be dropped to 30 Hz without any noticeable effect (unlike cathode ray tube (CRT) monitors, where a decrease in refresh rate produces a characteristic flicker). The driver for Intel graphics adapters built into the kernel used in Red Hat Enterprise Linux 7 performs this downclocking automatically, and saves around 0.5 W when the screen is idle. Enabling memory self-refresh Synchronous dynamic random access memory (SDRAM) - as used for video memory in graphics adapters - is recharged thousands of times per second so that individual memory cells retain the data that is stored in them. Apart from its main function of managing data as it flows in and out of memory, the memory controller is normally responsible for initiating these refresh cycles. However, SDRAM also has a low-power self-refresh mode. In this mode, the memory uses an internal timer to generate its own refresh cycles, which allows the system to shut down the memory controller without endangering data currently held in memory. The kernel used in Red Hat Enterprise Linux 7 can trigger memory self-refresh in Intel graphics adapters when they are idle, which saves around 0.8 W. GPU clock reduction Typical graphical processing units (GPUs) contain internal clocks that govern various parts of their internal circuitry. The kernel used in Red Hat Enterprise Linux 7 can reduce the frequency of some of the internal clocks in Intel and ATI GPUs. Reducing the number of cycles that GPU components perform in a given time saves the power that they would have consumed in the cycles that they did not have to perform. The kernel automatically reduces the speed of these clocks when the GPU is idle, and increases it when GPU activity increases. Reducing GPU clock cycles can save up to 5 W. GPU powerdown The Intel and ATI graphics drivers in Red Hat Enterprise Linux 7 can detect when no monitor is attached to an adapter and therefore shut down the GPU completely. This feature is especially significant for servers which do not have monitors attached to them regularly.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/enhanced_graphics_power_management
Chapter 3. Important update on odo
Chapter 3. Important update on odo Red Hat does not provide information about odo on the OpenShift Container Platform documentation site. See the documentation maintained by Red Hat and the upstream community for documentation information related to odo . Important For the materials maintained by the upstream community, Red Hat provides support under Cooperative Community Support .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/cli_tools/developer-cli-odo
2.3. Documentation for Linux Gurus
2.3. Documentation for Linux Gurus If you are concerned with the finer points and specifics of the Red Hat Enterprise Linux system, the Reference Guide is a great resource. If you are a long-time Red Hat Enterprise Linux user, you probably already know that one of the best ways to understand a particular program is to read its source code and/or configuration files. A major advantage of Red Hat Enterprise Linux is the availability of the source code for anyone to read. Obviously, not everyone is a programmer, so the source code may not be helpful for you. However, if you have the knowledge and skills necessary to read it, the source code holds all of the answers.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-intro-guru
10.7. Converting Masters and Clones
10.7. Converting Masters and Clones Only one active CA generating CRLs can exist within the same topology. Similarly, only one OCSP receiving CRLs can exist within the same topology. As such, there can be any number of clones, but there can only be a single configured master for CA and OCSP. For KRAs and TKSs, there is no configuration difference between masters and clones, but CAs and OCSPs do have some configuration differences. This means that when a master is taken offline - because of a failure, for maintenance, or to change the function of the subsystem in the PKI - the existing master must be reconfigured to be a clone, and one of the clones promoted to be the master. 10.7.1. Converting CA Clones and Masters Stop the master CA if it is still running. Open the existing master CA configuration directory: Edit the CS.cfg file for the master, and change the CRL and maintenance thread settings so that it is set as a clone: Disable control of the database maintenance thread: Disable monitoring database replication changes: Disable maintenance of the CRL cache: Disable CRL generation: Set the CA to redirect CRL requests to the new master: Stop the cloned CA server. Open the cloned CA's configuration directory. Edit the CS.cfg file to configure the clone as the new master. Delete each line which begins with the ca.crl. prefix. Copy each line beginning with the ca.crl. prefix from the former master CA CS.cfg file into the cloned CA's CS.cfg file. Enable control of the database maintenance thread; the default value for a master CA is 600 . Enable monitoring database replication: Enable maintenance of the CRL cache: Enable CRL generation: Disable the redirect settings for CRL generation requests: Start the new master CA server. 10.7.2. Converting OCSP Clones Stop the OCSP master, if it is still running. Open the existing master OCSP configuration directory. Edit the CS.cfg , and reset the OCSP.Responder.store.defStore.refreshInSec parameter to 21600 : Stop the online cloned OCSP server. Open the cloned OCSP responder's configuration directory. Open the CS.cfg file, and delete the OCSP.Responder.store.defStore.refreshInSec parameter or change its value to any non-zero number: Start the new master OCSP responder server.
[ "cd /var/lib/pki/ instance_name /ca/conf", "ca.certStatusUpdateInterval=0", "ca.listenToCloneModifications=false", "ca.crl. IssuingPointId .enableCRLCache=false", "ca.crl. IssuingPointId .enableCRLUpdates=false", "master.ca.agent.host= new_master_hostname master.ca.agent.port= new_master_port", "pki-server stop instance_name", "cd /etc/ instance_name", "ca.certStatusUpdateInterval=600", "ca.listenToCloneModifications=true", "ca.crl. IssuingPointId .enableCRLCache=true", "ca.crl. IssuingPointId .enableCRLUpdates=true", "master.ca.agent.host= hostname master.ca.agent.port= port number", "pki-server start instance_name", "cd /etc/ instance_name", "OCSP.Responder.store.defStore.refreshInSec=21600", "pki-server stop instance_name", "cd /etc/ instance_name", "OCSP.Responder.store.defStore.refreshInSec=15000", "pki-server start instance_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/converting-masters-and-clones
10.12. Troubleshooting Geo-replication
10.12. Troubleshooting Geo-replication This section describes the most common troubleshooting scenarios related to geo-replication. 10.12.1. Tuning Geo-replication performance with Change Log There are options for the change log that can be configured to give better performance in a geo-replication environment. The rollover-time option sets the rate at which the change log is consumed. The default rollover time is 15 seconds, but it can be configured to a faster rate. A recommended rollover-time for geo-replication is 10-15 seconds. To change the rollover-time option, use the following command: The fsync-interval option determines the frequency at which updates to the change log are written to disk. The default interval is 5, which means that updates to the change log are written synchronously as they occur, and this may negatively impact performance in a geo-replication environment. Configuring fsync-interval to a non-zero value will write updates to disk asynchronously at the specified interval. To change the fsync-interval option, use the following command: 10.12.2. Triggering Explicit Sync on Entries Geo-replication provides an option to explicitly trigger the sync operation of files and directories. A virtual extended attribute glusterfs.geo-rep.trigger-sync is provided to accomplish this. Explicit triggering of sync is supported only for directories and regular files. 10.12.3. Synchronization Is Not Complete Situation The geo-replication status is displayed as Stable , but the data has not been completely synchronized. Solution A full synchronization of the data can be performed by erasing the index and restarting geo-replication. After restarting geo-replication, it will begin a synchronization of the data using checksums. This may be a long and resource-intensive process on large data sets. If the issue persists, contact Red Hat Support. For more information about erasing the index, see Section 11.1, "Configuring Volume Options" . 10.12.4. Issues with File Synchronization Situation The geo-replication status is displayed as Stable , but only directories and symlinks are synchronized. Error messages similar to the following are in the logs: Solution Geo-replication requires rsync v3.0.0 or higher on the host and the remote machines. Verify that you have installed the required version of rsync . 10.12.5. Geo-replication Status is Often Faulty Situation The geo-replication status is often displayed as Faulty , with a backtrace similar to the following: Solution This usually indicates that RPC communication between the master gsyncd module and slave gsyncd module is broken. Make sure that the following prerequisites are met: Key-based SSH authentication is set up properly between the host and remote machines. FUSE is installed on the machines. The geo-replication module mounts Red Hat Gluster Storage volumes using FUSE to sync data. 10.12.6. Intermediate Master is in a Faulty State Situation In a cascading environment, the intermediate master is in a faulty state, and messages similar to the following are in the log: Solution In a cascading configuration, an intermediate master is loyal to its original primary master. The above log message indicates that the geo-replication module has detected that the primary master has changed. If this change was deliberate, delete the volume-id configuration option in the session that was initiated from the intermediate master. 10.12.7.
Remote gsyncd Not Found Situation The master is in a faulty state, and messages similar to the following are in the log: Solution The steps to configure an SSH connection for geo-replication have been updated. Use the steps as described in Section 10.3.4.1, "Setting Up your Environment for Geo-replication Session".
[ "gluster volume set VOLNAME rollover-time 15", "gluster volume set VOLNAME fsync-interval 5", "setfattr -n glusterfs.geo-rep.trigger-sync -v \"1\" <file-path>", "[2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to sync ./some_file`", "012-09-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File \"/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py\", line 152, in twraptf(*aa) File \"/usr/local/libexec/glusterfs/python/syncdaemon/repce.py\", line 118, in listen rid, exc, res = recv(self.inf) File \"/usr/local/libexec/glusterfs/python/syncdaemon/repce.py\", line 42, in recv return pickle.load(inf) EOFError", "raise RuntimeError (\"aborting on uuid change from %s to %s\" % \\ RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f- 4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154", "[2012-04-04 03:41:40.324496] E [resource:169:errfail] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-Troubleshooting_Geo-replication
Streams for Apache Kafka on OpenShift Overview
Streams for Apache Kafka on OpenShift Overview Red Hat Streams for Apache Kafka 2.7 Discover the features and functions of Streams for Apache Kafka 2.7 on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_on_openshift_overview/index
3.5.2. Attaching Users to Groups
3.5.2. Attaching Users to Groups To add an existing user to the named group, use the gpasswd command with the -a option: To remove a user from the named group, use the -d option: To set the list of group members, write the user names after the --members option, dividing them with commas and no spaces:
[ "gpasswd -a username which_group_to_edit", "gpasswd -d username which_group_to_edit", "gpasswd --members username_1 , username_2 which_group_to_edit" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/cl-tools-gpasswd
Appendix A. Further Information
Appendix A. Further Information A.1. SELinux and sVirt Further information on SELinux and sVirt: Main SELinux website: http://www.nsa.gov/research/selinux/index.shtml . SELinux documentation: http://www.nsa.gov/research/selinux/docs.shtml . Main sVirt website: http://selinuxproject.org/page/SVirt . Dan Walsh's blog: http://danwalsh.livejournal.com/ . The unofficial SELinux FAQ: http://www.crypt.gen.nz/selinux/faq.html .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/chap-virtualization_security_guide-further_information
Chapter 2. Planning your undercloud
Chapter 2. Planning your undercloud Before you configure and install director on the undercloud, you must plan your undercloud host to ensure it meets certain requirements. 2.1. Containerized undercloud The undercloud is the node that controls the configuration, installation, and management of your final Red Hat OpenStack Platform (RHOSP) environment, which is called the overcloud. The undercloud itself uses OpenStack Platform components in the form of containers to create a toolset called director. This means that the undercloud pulls a set of container images from a registry source, generates configuration for the containers, and runs each OpenStack Platform service as a container. As a result, the undercloud provides a containerized set of services that you can use as a toolset to create and manage your overcloud. Since both the undercloud and overcloud use containers, both use the same architecture to pull, configure, and run containers. This architecture is based on the OpenStack Orchestration service (heat) for provisioning nodes and uses Ansible to configure services and containers. It is useful to have some familiarity with heat and Ansible to help you troubleshoot issues that you might encounter. 2.2. Preparing your undercloud networking The undercloud requires access to two main networks: The Provisioning or Control Plane network , which is the network that director uses to provision your nodes and access them over SSH when executing Ansible configuration. This network also enables SSH access from the undercloud to overcloud nodes. The undercloud contains DHCP services for introspection and provisioning other nodes on this network, which means that no other DHCP services should exist on this network. The director configures the interface for this network. The External network , which enables access to OpenStack Platform repositories, container image sources, and other servers such as DNS servers or NTP servers. Use this network for standard access to the undercloud from your workstation. You must manually configure an interface on the undercloud to access the external network. The undercloud requires a minimum of 2 x 1 Gbps Network Interface Cards: one for the Provisioning or Control Plane network and one for the External network . When you plan your network, review the following guidelines: Red Hat recommends using one network for provisioning and the control plane and another network for the data plane. The provisioning and control plane network can be configured on top of a Linux bond or on individual interfaces. If you use a Linux bond, configure it as an active-backup bond type. On non-controller nodes, the amount of traffic is relatively low on provisioning and control plane networks, and they do not require high bandwidth or load balancing. On Controllers, the provisioning and control plane networks need additional bandwidth. The reason for increased bandwidth is that Controllers serve many nodes in other roles. More bandwidth is also required when frequent changes are made to the environment. For best performance, Controllers in environments with more than 50 compute nodes, or where more than four bare metal nodes are provisioned simultaneously, should have 4-10 times the bandwidth of the interfaces on the non-controller nodes. The undercloud should have a higher bandwidth connection to the provisioning network when more than 50 overcloud nodes are provisioned. Do not use the same Provisioning or Control Plane NIC as the one that you use to access the director machine from your workstation.
The director installation creates a bridge by using the Provisioning NIC, which drops any remote connections. Use the External NIC for remote connections to the director system. The Provisioning network requires an IP range that fits your environment size. Use the following guidelines to determine the total number of IP addresses to include in this range: Include at least one temporary IP address for each node that connects to the Provisioning network during introspection. Include at least one permanent IP address for each node that connects to the Provisioning network during deployment. Include an extra IP address for the virtual IP of the overcloud high availability cluster on the Provisioning network. Include additional IP addresses within this range for scaling the environment. To prevent a Controller node network card or network switch failure disrupting overcloud services availability, ensure that the keystone admin endpoint is located on a network that uses bonded network cards or networking hardware redundancy. If you move the keystone endpoint to a different network, such as internal_api , ensure that the undercloud can reach the VLAN or subnet. For more information, see the Red Hat Knowledgebase solution How to migrate Keystone Admin Endpoint to internal_api network . 2.3. Determining environment scale Before you install the undercloud, determine the scale of your environment. Include the following factors when you plan your environment: How many nodes do you want to deploy in your overcloud? The undercloud manages each node within an overcloud. Provisioning overcloud nodes consumes resources on the undercloud. You must provide your undercloud with enough resources to adequately provision and control all of your overcloud nodes. How many simultaneous operations do you want the undercloud to perform? Most OpenStack services on the undercloud use a set of workers. Each worker performs an operation specific to that service. Multiple workers provide simultaneous operations. The default number of workers on the undercloud is determined by halving the total CPU thread count on the undercloud. In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value. For example, if your undercloud has a CPU with 16 threads, then the director services spawn 8 workers by default. Director also uses a set of minimum and maximum caps by default: Service Minimum Maximum OpenStack Orchestration (heat) 4 24 All other service 2 12 The undercloud has the following minimum CPU and memory requirements: An 8-thread 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. This provides 4 workers for each undercloud service. A minimum of 24 GB of RAM. To use a larger number of workers, increase the vCPUs and memory of your undercloud using the following recommendations: Minimum: Use 1.5 GB of memory for each thread. For example, a machine with 48 threads requires 72 GB of RAM to provide the minimum coverage for 24 heat workers and 12 workers for other services. Recommended: Use 3 GB of memory for each thread. For example, a machine with 48 threads requires 144 GB of RAM to provide the recommended coverage for 24 heat workers and 12 workers for other services. 2.4. 
Undercloud disk sizing The recommended minimum undercloud disk size is 100 GB of available disk space on the root disk: 20 GB for container images 10 GB to accommodate QCOW2 image conversion and caching during the node provisioning process 70 GB+ for general usage, logging, metrics, and growth 2.5. Virtualization support Red Hat only supports a virtualized undercloud on the following platforms: Platform Notes Kernel-based Virtual Machine (KVM) Hosted by Red Hat Enterprise Linux, as listed on Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM Red Hat Virtualization Hosted by Red Hat Virtualization 4.x, as listed on Certified Red Hat Hypervisors . Microsoft Hyper-V Hosted by versions of Hyper-V as listed on the Red Hat Customer Portal Certification Catalogue . VMware ESX and ESXi Hosted by versions of ESX and ESXi as listed on the Red Hat Customer Portal Certification Catalogue . Important Ensure your hypervisor supports Red Hat Enterprise Linux 9.0 guests. Virtual machine requirements Resource requirements for a virtual undercloud are similar to those of a bare-metal undercloud. Consider the various tuning options when provisioning such as network model, guest CPU capabilities, storage backend, storage format, and caching mode. Network considerations Power management The undercloud virtual machine (VM) requires access to the overcloud nodes' power management devices. This is the IP address set for the pm_addr parameter when registering nodes. Provisioning network The NIC used for the provisioning network, ctlplane , requires the ability to broadcast and serve DHCP requests to the NICs of the overcloud's bare-metal nodes. Create a bridge that connects the VM's NIC to the same network as the bare metal NICs. Allow traffic from an unknown address You must configure your virtual undercloud hypervisor to prevent the hypervisor blocking the undercloud from transmitting traffic from an unknown address. The configuration depends on the platform you are using for your virtual undercloud: Red Hat Enterprise Virtualization: Disable the anti-mac-spoofing parameter. VMware ESX or ESXi: On IPv4 ctlplane network: Allow forged transmits. On IPv6 ctlplane network: Allow forged transmits, MAC address changes, and promiscuous mode operation. For more information about how to configure VMware ESX or ESXi, see Securing vSphere Standard Switches on the VMware docs website. You must power off and on the director VM after you apply these settings. It is not sufficient to only reboot the VM. 2.6. Character encoding configuration Red Hat OpenStack Platform has special character encoding requirements as part of the locale settings: Use UTF-8 encoding on all nodes. Ensure the LANG environment variable is set to en_US.UTF-8 on all nodes. Avoid using non-ASCII characters if you use Red Hat Ansible Tower to automate the creation of Red Hat OpenStack Platform resources. 2.7. Considerations when running the undercloud with a proxy Running the undercloud with a proxy has certain limitations, and Red Hat recommends that you use Red Hat Satellite for registry and package management. However, if your environment uses a proxy, review these considerations to best understand the different configuration methods of integrating parts of Red Hat OpenStack Platform with a proxy and the limitations of each method. System-wide proxy configuration Use this method to configure proxy communication for all network traffic on the undercloud. 
To configure the proxy settings, edit the /etc/environment file and set the following environment variables: http_proxy The proxy that you want to use for standard HTTP requests. https_proxy The proxy that you want to use for HTTPs requests. no_proxy A comma-separated list of domains that you want to exclude from proxy communications. The system-wide proxy method has the following limitations: The maximum length of no_proxy is 1024 characters due to a fixed size buffer in the pam_env PAM module. dnf proxy configuration Use this method to configure dnf to run all traffic through a proxy. To configure the proxy settings, edit the /etc/dnf/dnf.conf file and set the following parameters: proxy The URL of the proxy server. proxy_username The username that you want to use to connect to the proxy server. proxy_password The password that you want to use to connect to the proxy server. proxy_auth_method The authentication method used by the proxy server. For more information about these options, run man dnf.conf . The dnf proxy method has the following limitations: This method provides proxy support only for dnf . The dnf proxy method does not include an option to exclude certain hosts from proxy communication. Red Hat Subscription Manager proxy Use this method to configure Red Hat Subscription Manager to run all traffic through a proxy. To configure the proxy settings, edit the /etc/rhsm/rhsm.conf file and set the following parameters: proxy_hostname Host for the proxy. proxy_scheme The scheme for the proxy when writing out the proxy to repo definitions. proxy_port The port for the proxy. proxy_username The username that you want to use to connect to the proxy server. proxy_password The password to use for connecting to the proxy server. no_proxy A comma-separated list of hostname suffixes for specific hosts that you want to exclude from proxy communication. For more information about these options, run man rhsm.conf . The Red Hat Subscription Manager proxy method has the following limitations: This method provides proxy support only for Red Hat Subscription Manager. The values for the Red Hat Subscription Manager proxy configuration override any values set for the system-wide environment variables. Transparent proxy If your network uses a transparent proxy to manage application layer traffic, you do not need to configure the undercloud itself to interact with the proxy because proxy management occurs automatically. A transparent proxy can help overcome limitations associated with client-based proxy configuration in Red Hat OpenStack Platform. 2.8. Undercloud repositories You run Red Hat OpenStack Platform 17.0 on Red Hat Enterprise Linux 9.0. As a result, you must lock the content from these repositories to the respective Red Hat Enterprise Linux version. Warning Any repositories except the ones specified here are not supported. Unless recommended, do not enable any other products or repositories except the ones listed in the following tables or else you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL). Note Satellite repositories are not listed because RHOSP 17.0 does not support Satellite. Satellite support is planned for a future release. Only Red Hat CDN is supported as a package repository and container registry. Core repositories The following table lists core repositories for installing the undercloud. 
Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. Red Hat OpenStack Platform 17.0 for RHEL 9 (RPMs) openstack-17-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository, which contains packages for Red Hat OpenStack Platform director. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform.
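For reference, the repository IDs in the table above can be enabled from the command line. The following is a minimal sketch, assuming the undercloud host is already registered with subscription-manager and attached to a Red Hat OpenStack Platform entitlement; adjust the release and repository IDs if your versions differ.
# Minimal sketch: lock the RHEL release and enable only the undercloud repositories listed above.
sudo subscription-manager release --set=9.0
sudo subscription-manager repos --disable='*'
sudo subscription-manager repos \
  --enable=rhel-9-for-x86_64-baseos-eus-rpms \
  --enable=rhel-9-for-x86_64-appstream-eus-rpms \
  --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
  --enable=openstack-17-for-rhel-9-x86_64-rpms \
  --enable=fast-datapath-for-rhel-9-x86_64-rpms
sudo dnf repolist enabled   # verify that only the expected repositories are active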
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_planning-your-undercloud
Chapter 7. System filtering and groups
Chapter 7. System filtering and groups Red Hat Insights for Red Hat Enterprise Linux enables you to filter systems in inventory, as well as by individual service. Insights for Red Hat Enterprise Linux also allows you to filter groups of systems by three criteria: Groups running SAP workloads Satellite host groups Custom filters that you define in a YAML file Note As of Spring 2022, inventory, advisor, compliance, vulnerability, patch, and policies enable filtering by groups and tags. Other services will follow. Use the global Filter Results box to filter by SAP workloads, Satellite host groups, or custom filters added to the Insights client configuration file. Prerequisites You have completed the following steps on your system: Logged in with root-level permissions Installed the Insights client 7.1. SAP workloads As Linux becomes the mandatory operating system for SAP ERP workloads in 2025, Red Hat Enterprise Linux and Red Hat Insights for Red Hat Enterprise Linux are working to make Insights for Red Hat Enterprise Linux the management tool of choice for SAP administrators. As part of this ongoing effort, Insights for Red Hat Enterprise Linux automatically tags systems running SAP workloads and by SAP ID (SID), without any customization needed by administrators. To filter those workloads throughout the Insights for Red Hat Enterprise Linux application, use the global Filter Results drop-down menu. 7.2. Satellite host groups Satellite host groups are configured in Satellite and automatically recognized by Insights for Red Hat Enterprise Linux. 7.3. Custom system tagging You can apply custom grouping and tagging to your systems. This enables you to add contextual markers to individual systems, filter by those tags in the Insights for Red Hat Enterprise Linux application, and more easily focus on related systems. This functionality can be especially valuable when deploying Insights for Red Hat Enterprise Linux at scale, with many hundreds or thousands of systems under management. In addition to the ability to add custom tags to several Insights for Red Hat Enterprise Linux services, you can add predefined tags. The advisor service can use these tags to create targeted recommendations for your systems that might require more attention, such as those systems that require a higher level of security. 7.3.1. Filter structure Filters use a namespace=value or key=value paired structure. Namespace. The namespace is the name of the ingestion point, insights-client . This value cannot be changed. The tags.yaml file is abstracted from the namespace, which is injected by the client before upload. Key. You can create the key or use a predefined key from the system. You can use a mix of capitalization, letters, numbers, symbols and whitespace. Value. You can define your own descriptive string value. You can use a mix of capitalization, letters, numbers, symbols and whitespace. 7.3.2. Creating a custom group and the tags.yaml file To create and add tags to /etc/insights-client/tags.yaml , use insights-client with the --group=<name-you-choose> option. This command option performs the following actions: Creates the /etc/insights-client/tags.yaml file Adds the group= key and <name-you-choose> value to tags.yaml Uploads a fresh archive from the system to the Insights for Red Hat Enterprise Linux application, making the new tag immediately visible along with your latest results Prerequisites Root-level access to your system. 
Procedure Run the following command as root, adding your custom group name in place of <name-you-choose> : Optional. To add additional tags, edit the /etc/insights-client/tags.yaml file. Navigate to Inventory > Systems and log in if necessary. Click the Filter by tags drop-down menu. You can also use the search box to enter all or part of the tag's name to automatically show systems with that text in the tags. Scroll up or down the list to locate the tag. Click the tag to filter by it. Verify that your system is among the results on the advisor systems list. Navigate to Inventory > Systems and log in if necessary. Activate the Name filter and begin typing the system name until you see your system, then select it. The tag symbol is a darker color, and the number beside it shows the correct number of tags applied. 7.3.3. Editing tags.yaml to add or change tags After you create the group tag, you can edit the contents of tags.yaml to add or modify tags. The following procedure shows how to edit the /etc/insights-client/tags.yaml file, then verify the tag exists in the Red Hat Insights > RHEL > Inventory . Prerequisites Root-level access to your system. Procedure Open the tag configuration file, tags.yaml , in an editor. Edit the file contents or add additional key=value pairs. Add additional key=value pairs if needed. Use a mix of capitalization, letters, numbers, symbols, and whitespace. The following example shows how to organize tags.yaml when adding more than one tag to a system. Save your changes and close the editor. Generate an upload to Insights for Red Hat Enterprise Linux. Navigate to Inventory > Systems and log in if necessary. In the Filter Results box, click the down arrow and select one of the filters or enter the name of the filter and select it. Note You can search by the tag key or by its value. Find your system among the results. Verify that the filter icon is darkened and shows a number representing the number of filters applied to the system. 7.4. Using predefined system tags to get more accurate Red Hat Insights advisor service recommendations and enhanced security Red Hat Insights advisor service recommendations treat every system equally. However, some systems might require more security than others, or require different networking performance levels. In addition to the ability to add custom tags, Red Hat Insights for Red Hat Enterprise Linux provides predefined tags that the advisor service can use to create targeted recommendations for your systems that might require more attention. To opt in and get the extended security hardening and enhanced detection and remediation capabilities offered by predefined tags, you need to configure the tags. After configuration, the advisor service provides recommendations based on tailored severity levels, and preferred network performance that apply to your systems. To configure the tags, use the /etc/insights-client/tags.yaml file to tag systems with predefined tags in a similar way that you might use it to tag systems in the inventory service. The predefined tags are configured using the same key=value structure used to create custom tags. Details about the Red Hat-predefined tags are in the following table. Table 7.1. List of Supported Predefined Tags Key Value Note security normal (default) / strict With the normal (default) value, the advisor service compares the system's risk profile to a baseline derived from the default configuration of the most recent version of RHEL and from often-used usage patterns. 
This keeps recommendations focused, actionable, and low in numbers. With the strict value, the advisor service considers the system to be security-sensitive, causing specific recommendations to use a stricter baseline, potentially showing recommendations even on fresh up-to-date RHEL installations. network_performance null (default) / latency / throughput The preferred network performance (either latency or throughput according to your business requirement) would affect the severity of an advisor service recommendation to a system. Note The predefined tag keys names are reserved. If you already use the key security , with a value that differs from one of the predefined values, you will not see a change in your recommendations. You will only see a change in recommendations if your existing key=value is the same as one of the predefined keys. For example, if you have a key=value of security: high , your recommendations will not change because of the Red Hat-predefined tags. If you currently have a key=value pair of security: strict , you will see a change in the recommendations for your systems. Additional resources Using system tags to enable extended security hardening recommendations Leverage tags to make Red Hat Insights Advisor recommendations understand your environment better System tags and groups 7.4.1. Configuring predefined tags You can use the Red Hat Insights for Red Hat Enterprise Linux advisor service's predefined tags to adjust the behavior of recommendations for your systems to gain extended security hardening and enhanced detection and remediation capabilities. You can configure the predefined tags by following this procedure. Prerequisites You have root-level access to your system You have Insights client installed You have systems registered within the Insights client You have created the tags.yaml file. For information about creating the tags.yaml file, see Creating a tags.yaml file and adding a custom group . Procedure Using the command line, and your preferred editor, open /etc/insights-client/tags.yaml . (The following example uses Vim.) Edit the /etc/insights-client/tags.yaml file to add the predefined key=value pair for the tags. This example shows how to add security: strict and network_performance: latency tags. Save your changes. Close the editor. Optional: Run the insights-client command to generate an upload to Red Hat Insights for Red Hat Enterprise Linux, or wait until the scheduled Red Hat Insights upload. Confirming that predefined tags are in your production area After generating an upload to Red Hat Insights (or waiting for the scheduled Insights upload), you can find out whether the tags are in the production environment by accessing Red Hat Insights > RHEL > Inventory . Find your system and look for the newly created tags. You see a table that shows: Name Value Tag Source (for example, insights-client). The following image shows an example of what you see in inventory after creating the tag. Example of recommendations after applying a predefined tag The following image of the advisor service shows a system with the network_performance: latency tag configured. The system shows a recommendation with a higher Total Risk level of Important. The system without the network_performance: latency tag has a Total Risk of Moderate. You can make decisions about prioritizing the system with higher Total Risk.
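As a convenience, the tagging workflow described in this chapter can be performed in one short shell session. The following is a minimal sketch, run as root, assuming the Insights client is installed and the system is registered; the group name and tag values are illustrative only.
# Minimal sketch of the custom and predefined tagging workflow described above.
insights-client --group=web-frontend    # creates /etc/insights-client/tags.yaml and uploads a fresh archive
# Append predefined and custom tags (security and network_performance are the predefined keys in Table 7.1).
cat >> /etc/insights-client/tags.yaml <<'EOF'
security: strict
network_performance: latency
location: Brisbane/Australia
EOF
insights-client                         # upload again so the new tags appear in Inventory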
[ "insights-client --group=<name-you-choose>", "vim /etc/insights-client/tags.yaml", "tags --- group: _group-name-value_ location: _location-name-value_ description: - RHEL8 - SAP key 4: value", "insights-client", "vi /etc/insights-client/tags.yaml", "cat /etc/insights-client/tags.yaml group: redhat location: Brisbane/Australia description: - RHEL8 - SAP security: strict network_performance: latency", "insights-client" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights_with_fedramp/system_filtering_and_groups
Chapter 9. Other notable changes
Chapter 9. Other notable changes 9.1. Javascript engine available by default on the classpath In previous versions, when Keycloak was used on Java 17 with Javascript providers (Script authenticator, Javascript authorization policy or Script protocol mappers for OIDC and SAML clients), it was necessary to copy the javascript engine to the distribution. This is no longer needed as Nashorn javascript engine is available in Red Hat build of Keycloak server by default. When you deploy script providers, it is recommended to not copy Nashorn's script engine and its dependencies into the Red Hat build of Keycloak distribution. 9.2. Renamed Keycloak Admin client artifacts After the upgrade to Jakarta EE, artifacts for Keycloak Admin clients were renamed to more descriptive names with consideration for long-term maintainability. However, two separate Keycloak Admin clients still exist: one with Jakarta EE and the other with Java EE support. The org.keycloak:keycloak-admin-client-jakarta artifact is no longer released. The default one for the Keycloak Admin client with Jakarta EE support is org.keycloak:keycloak-admin-client (since version 26.0.0). The new artifact with Java EE support is org.keycloak:keycloak-admin-client-jee . 9.2.1. Jakarta EE support Before migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jakarta</artifactId> <version>18.0.0.redhat-00001</version> </dependency> After migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>22.0.0.redhat-00001</version> </dependency> 9.2.2. Java EE support Before migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>18.0.0.redhat-00001</version> </dependency> After migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jee</artifactId> <version>22.0.0.redhat-00001</version> </dependency> 9.3. Never expires option removed from client advanced settings combos The option Never expires is now removed from all the combos of the Advanced Settings client tab. This option was misleading because the different lifespans or idle timeouts were never infinite, but limited by the general user session or realm values. Therefore, this option is removed in favor of the other two remaining options: Inherits from the realm settings (the client uses general realm timeouts) and Expires in (the value is overridden for the client). Internally the Never expires was represented by -1 . Now that value is shown with a warning in the Admin Console and cannot be set directly by the administrator. 9.4. New email rules and limits validation Red Hat build of Keycloak has new rules on email creation to allow ASCII characters during email creation. Also, a new limit of 64 characters now exists on the local email part (before the @). So, a new parameter --spi-user-profile-declarative-user-profile-max-email-local-part-length is added to set the maximum email local part length, taking backwards compatibility into consideration. The default value is 64. kc.sh start --spi-user-profile-declarative-user-profile-max-email-local-part-length=100
[ "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jakarta</artifactId> <version>18.0.0.redhat-00001</version> </dependency>", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>22.0.0.redhat-00001</version> </dependency>", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>18.0.0.redhat-00001</version> </dependency>", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jee</artifactId> <version>22.0.0.redhat-00001</version> </dependency>", "kc.sh start --spi-user-profile-declarative-user-profile-max-email-local-part-length=100" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/migration_guide/other-changes
13.7. Testing Early InfiniBand RDMA operation
13.7. Testing Early InfiniBand RDMA operation Note This section applies only to InfiniBand devices. Since iWARP and RoCE/IBoE devices are IP based devices, users should proceed to the section on testing RDMA operations once IPoIB has been configured and the devices have IP addresses. Once the rdma service is enabled, and the opensm service (if needed) is enabled, and the proper user-space library for the specific hardware has been installed, user space rdma operation should be possible. Simple test programs from the libibverbs-utils package are helpful in determining that RDMA operations are working properly. The ibv_devices program will show which devices are present in the system and the ibv_devinfo command will give detailed information about each device. For example: The ibv_devinfo and ibstat commands output slightly different information (such as port MTU exists in ibv_devinfo but not in ibstat output, and the Port GUID exists in ibstat output but not in ibv_devinfo output), and a few things are named differently (for example, the Base local identifier ( LID ) in ibstat output is the same as the port_lid output of ibv_devinfo ) Simple ping programs, such as ibping from the infiniband-diags package, can be used to test RDMA connectivity. The ibping program uses a client-server model. You must first start an ibping server on one machine, then run ibping as a client on another machine and tell it to connect to the ibping server. Since we are wanting to test the base RDMA capability, we need to use an RDMA specific address resolution method instead of IP addresses for specifying the server. On the server machine, the user can use the ibv_devinfo and ibstat commands to print out the port_lid (or Base lid) and the Port GUID of the port they want to test (assuming port 1 of the above interface, the port_lid / Base LID is 2 and Port GUID is 0xf4521403007bcba1 )). Then start ibping with the necessary options to bind specifically to the card and port to be tested, and also specifying ibping should run in server mode. You can see the available options to ibping by passing -? or --help , but in this instance we will need either the -S or --Server option and for binding to the specific card and port we will need either -C or --Ca and -P or --Port . Note: port in this instance does not denote a network port number, but denotes the physical port number on the card when using a multi-port card. To test connectivity to the RDMA fabric using, for example, the second port of a multi-port card, requires telling ibping to bind to port 2 on the card. When using a single port card, or testing the first port on a card, this option is not needed. For example: Then change to the client machine and run ibping . Make note of either the port GUID of the port the server ibping program is bound to, or the local identifier ( LID ) of the port the server ibping program is bound to. Also, take note which card and port in the client machine is physically connected to the same network as the card and port that was bound to on the server. For example, if the second port of the first card on the server was bound to, and that port is connected to a secondary RDMA fabric, then on the client specify whichever card and port are necessary to also be connected to that secondary fabric. Once these things are known, run the ibping program as a client and connect to the server using either the port LID or GUID that was collected on the server as the address to connect to. 
For example: or This outcome verifies that end to end RDMA communications are working for user space applications. The following error may be encountered: This error indicates that the necessary user-space library is not installed. The administrator will need to install one of the user-space libraries (as appropriate for their hardware) listed in section Section 13.4, "InfiniBand and RDMA related software packages" . On rare occasions, this can happen if a user installs the wrong arch type for the driver or for libibverbs . For example, if libibverbs is of arch x86_64 , and libmlx4 is installed but is of type i686 , then this error can result. Note Many sample applications prefer to use host names or addresses instead of LIDs to open communication between the server and client. For those applications, it is necessary to set up IPoIB before attempting to test end-to-end RDMA communications. The ibping application is unusual in that it will accept simple LIDs as a form of addressing, and this allows it to be a simple test that eliminates possible problems with IPoIB addressing from the test scenario and therefore gives us a more isolated view of whether or not simple RDMA communications are working.
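To summarize the procedure above, the following is a minimal sketch of the two-machine test; the card names, port number, LID, and GUID are the example values from the output shown in this section and must be replaced with values from your own ibstat or ibv_devinfo output.
# On the server machine: note the Base lid and Port GUID of the port under test, then start the ibping server.
ibstat mlx4_1 1 | grep -E 'Base lid|Port GUID'
ibping -S -C mlx4_1 -P 1
# On the client machine: flood-ping the server port by LID (-L) or, equivalently, by GUID (-G).
ibping -c 10000 -f -C mlx4_0 -P 1 -L 2
# ibping -c 10000 -f -C mlx4_0 -P 1 -G 0xf4521403007bcba1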
[ "~]USD ibv_devices device node GUID ------ ---------------- mlx4_0 0002c903003178f0 mlx4_1 f4521403007bcba0 ~]USD ibv_devinfo -d mlx4_1 hca_id: mlx4_1 transport: InfiniBand (0) fw_ver: 2.30.8000 node_guid: f452:1403:007b:cba0 sys_image_guid: f452:1403:007b:cba3 vendor_id: 0x02c9 vendor_part_id: 4099 hw_ver: 0x0 board_id: MT_1090120019 phys_port_cnt: 2 port: 1 state: PORT_ACTIVE (4) max_mtu: 4096 (5) active_mtu: 2048 (4) sm_lid: 2 port_lid: 2 port_lmc: 0x01 link_layer: InfiniBand port: 2 state: PORT_ACTIVE (4) max_mtu: 4096 (5) active_mtu: 4096 (5) sm_lid: 0 port_lid: 0 port_lmc: 0x00 link_layer: Ethernet ~]USD ibstat mlx4_1 CA 'mlx4_1' CA type: MT4099 Number of ports: 2 Firmware version: 2.30.8000 Hardware version: 0 Node GUID: 0xf4521403007bcba0 System image GUID: 0xf4521403007bcba3 Port 1: State: Active Physical state: LinkUp Rate: 56 Base lid: 2 LMC: 1 SM lid: 2 Capability mask: 0x0251486a Port GUID: 0xf4521403007bcba1 Link layer: InfiniBand Port 2: State: Active Physical state: LinkUp Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x04010000 Port GUID: 0xf65214fffe7bcba2 Link layer: Ethernet", "~]USD ibping -S -C mlx4_1 -P 1", "~]USD ibping -c 10000 -f -C mlx4_0 -P 1 -L 2 --- rdma-host.example.com.(none) (Lid 2) ibping statistics --- 10000 packets transmitted, 10000 received, 0% packet loss, time 816 ms rtt min/avg/max = 0.032/0.081/0.446 ms", "~]USD ibping -c 10000 -f -C mlx4_0 -P 1 -G 0xf4521403007bcba1 --- rdma-host.example.com.(none) (Lid 2) ibping statistics --- 10000 packets transmitted, 10000 received, 0% packet loss, time 769 ms rtt min/avg/max = 0.027/0.076/0.278 ms", "~]USD ibv_devinfo libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0 No IB devices found" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-testing_early_infiniband_rdma_operation
2.2.3.3. Edit the /var/yp/securenets File
2.2.3.3. Edit the /var/yp/securenets File If the /var/yp/securenets file is blank or does not exist (as is the case after a default installation), NIS listens to all networks. One of the first things to do is to put netmask/network pairs in the file so that ypserv only responds to requests from the appropriate network. Below is a sample entry from a /var/yp/securenets file: Warning Never start a NIS server for the first time without creating the /var/yp/securenets file. This technique does not provide protection from an IP spoofing attack, but it does at least place limits on what networks the NIS server services.
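The following is a minimal sketch of creating the file before starting ypserv for the first time; the netmask/network pairs are examples and must be adjusted to match your environment.
# Minimal sketch: restrict ypserv to the loopback address and one local subnet.
cat > /var/yp/securenets <<'EOF'
# always allow the local host
255.255.255.255 127.0.0.1
# allow only the 192.168.0.0/24 network
255.255.255.0   192.168.0.0
EOF
service ypserv restart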
[ "255.255.255.0 192.168.0.0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_nis-edit_the_varypsecurenets_file
18.12.2. Filtering Chains
18.12.2. Filtering Chains Filtering rules are organized in filter chains. These chains can be thought of as having a tree structure with packet filtering rules as entries in individual chains (branches). Packets start their filter evaluation in the root chain and can then continue their evaluation in other chains, return from those chains back into the root chain or be dropped or accepted by a filtering rule in one of the traversed chains. Libvirt's network filtering system automatically creates individual root chains for every virtual machine's network interface on which the user chooses to activate traffic filtering. The user may write filtering rules that are either directly instantiated in the root chain or may create protocol-specific filtering chains for efficient evaluation of protocol-specific rules. The following chains exist: root mac stp (spanning tree protocol) vlan arp and rarp ipv4 ipv6 Multiple chains evaluating the mac, stp, vlan, arp, rarp, ipv4, or ipv6 protocol can be created using the protocol name only as a prefix in the chain's name. Example 18.3. ARP traffic filtering This example allows chains with names arp-xyz or arp-test to be specified and have their ARP protocol packets evaluated in those chains. The following filter XML shows an example of filtering ARP traffic in the arp chain. The consequence of putting ARP-specific rules in the arp chain, rather than, for example, in the root chain, is that packets of protocols other than ARP do not need to be evaluated by ARP protocol-specific rules. This improves the efficiency of the traffic filtering. However, one must then pay attention to only putting filtering rules for the given protocol into the chain since other rules will not be evaluated. For example, an IPv4 rule will not be evaluated in the ARP chain since IPv4 protocol packets will not traverse the ARP chain.
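If you want to try a protocol-specific chain such as the one in this example, the filter can be registered and attached with virsh. The following is a minimal sketch; it assumes the filter XML shown in this example is saved to a local file, and the guest name is illustrative.
# Minimal sketch: define the ARP-chain filter and reference it from a guest interface.
virsh nwfilter-define no-arp-spoofing.xml    # file containing the <filter> XML from this example
virsh nwfilter-list                          # confirm the filter is registered
# Then add the following element inside the guest's <interface> definition:
#   <filterref filter='no-arp-spoofing'/>
virsh edit guest1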
[ "<filter name='no-arp-spoofing' chain='arp' priority='-500'> <uuid>f88f1932-debf-4aa1-9fbe-f10d3aa4bc95</uuid> <rule action='drop' direction='out' priority='300'> <mac match='no' srcmacaddr='USDMAC'/> </rule> <rule action='drop' direction='out' priority='350'> <arp match='no' arpsrcmacaddr='USDMAC'/> </rule> <rule action='drop' direction='out' priority='400'> <arp match='no' arpsrcipaddr='USDIP'/> </rule> <rule action='drop' direction='in' priority='450'> <arp opcode='Reply'/> <arp match='no' arpdstmacaddr='USDMAC'/> </rule> <rule action='drop' direction='in' priority='500'> <arp match='no' arpdstipaddr='USDIP'/> </rule> <rule action='accept' direction='inout' priority='600'> <arp opcode='Request'/> </rule> <rule action='accept' direction='inout' priority='650'> <arp opcode='Reply'/> </rule> <rule action='drop' direction='inout' priority='1000'/> </filter>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-filt-chain
Chapter 11. Disabling Windows container workloads
Chapter 11. Disabling Windows container workloads You can disable the capability to run Windows container workloads by uninstalling the Windows Machine Config Operator (WMCO) and deleting the namespace that was added by default when you installed the WMCO. 11.1. Uninstalling the Windows Machine Config Operator You can uninstall the Windows Machine Config Operator (WMCO) from your cluster. Prerequisites Delete the Windows Machine objects hosting your Windows workloads. Procedure From the Operators OperatorHub page, use the Filter by keyword box to search for Red Hat Windows Machine Config Operator . Click the Red Hat Windows Machine Config Operator tile. The Operator tile indicates it is installed. In the Windows Machine Config Operator descriptor page, click Uninstall . 11.2. Deleting the Windows Machine Config Operator namespace You can delete the namespace that was generated for the Windows Machine Config Operator (WMCO) by default. Prerequisites The WMCO is removed from your cluster. Procedure Remove all Windows workloads that were created in the openshift-windows-machine-config-operator namespace: USD oc delete --all pods --namespace=openshift-windows-machine-config-operator Verify that all pods in the openshift-windows-machine-config-operator namespace are deleted or are reporting a terminating state: USD oc get pods --namespace openshift-windows-machine-config-operator Delete the openshift-windows-machine-config-operator namespace: USD oc delete namespace openshift-windows-machine-config-operator Additional resources Deleting Operators from a cluster Removing Windows nodes
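Optionally, before deleting the namespace you can confirm from the command line that the Operator's Subscription and ClusterServiceVersion have been removed. The following is a minimal sketch; object names in the output may differ in your cluster.
# Optional verification sketch: confirm that no WMCO Subscription or CSV remains.
oc get subscription -n openshift-windows-machine-config-operator
oc get csv -n openshift-windows-machine-config-operator
oc get namespace openshift-windows-machine-config-operator   # reports NotFound after the namespace is deleted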
[ "oc delete --all pods --namespace=openshift-windows-machine-config-operator", "oc get pods --namespace openshift-windows-machine-config-operator", "oc delete namespace openshift-windows-machine-config-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/windows_container_support_for_openshift/disabling-windows-container-workloads
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/installing_red_hat_3scale_api_management/proc-providing-feedback-on-redhat-documentation
Deploying installer-provisioned clusters on bare metal
Deploying installer-provisioned clusters on bare metal OpenShift Container Platform 4.12 Deploying installer-provisioned OpenShift Container Platform clusters on bare metal Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/deploying_installer-provisioned_clusters_on_bare_metal/index
Chapter 3. Understanding Windows container workloads
Chapter 3. Understanding Windows container workloads Red Hat OpenShift support for Windows Containers provides built-in support for running Microsoft Windows Server containers on OpenShift Container Platform. For those that administer heterogeneous environments with a mix of Linux and Windows workloads, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). Note Multi-tenancy for clusters that have Windows nodes is not supported. Hostile multi-tenant usage introduces security concerns in all Kubernetes environments. Additional security features like pod security policies , or more fine-grained role-based access control (RBAC) for nodes, make exploits more difficult. However, if you choose to run hostile multi-tenant workloads, a hypervisor is the only security option you should use. The security domain for Kubernetes encompasses the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters. Windows Server Containers provide resource isolation using a shared kernel but are not intended to be used in hostile multitenancy scenarios. Scenarios that involve hostile multitenancy should use Hyper-V Isolated Containers to strongly isolate tenants. 3.1. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. Important Because Microsoft has stopped publishing Windows Server 2019 images with Docker , Red Hat no longer supports Windows Azure for WMCO releases earlier than version 6.0.0. For WMCO 5.y.z and earlier, Windows Server 2019 images must have Docker pre-installed. WMCO 6.0.0 and later uses containerd as the runtime. You can upgrade to OpenShift Container Platform 4.11, which uses WMCO 6.0.0. 3.1.1. WMCO 5.1.x supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 5.1.1 and 5.1.0, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2019 (version 1809) Microsoft Azure Windows Server 2019 (version 1809) VMware vSphere Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2022 (OS Build 20348.681 or later). Bare metal or provider agnostic Windows Server 2022 Long-Term Servicing Channel (LTSC). OS Build 20348.681 or later. Windows Server 2019 (version 1809) 3.1.2. WMCO 5.0.0 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 5.0.0, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only the appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2019 (version 1809) VMware vSphere Windows Server 2022 Long-Term Servicing Channel (LTSC). OS Build 20348.681 or later. 
Bare metal or provider agnostic Windows Server 2019 (version 1809) 3.1.3. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster. Be aware that OpenShift SDN networking is the default network for OpenShift Container Platform clusters. However, OpenShift SDN is not supported by WMCO. Table 3.1. Platform networking support Platform Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port bare metal Hybrid networking with OVN-Kubernetes Table 3.2. WMCO 5.1.0 Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2019 (version 1809) Custom VXLAN port Windows Server 2022 Long-Term Servicing Channel (LTSC). OS Build 20348.681 or later Table 3.3. WMCO 5.0.0 Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2019 (version 1809) Custom VXLAN port Windows Server 2022 Long-Term Servicing Channel (LTSC). OS Build 20348.681 or later Additional resources See Configuring hybrid networking with OVN-Kubernetes 3.2. Windows workload management To run Windows workloads in your cluster, you must first install the Windows Machine Config Operator (WMCO). The WMCO is a Linux-based Operator that runs on Linux-based control plane and compute nodes. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Figure 3.1. WMCO design Before deploying Windows workloads, you must create a Windows compute node and have it join the cluster. The Windows node hosts the Windows workloads in a cluster, and can run alongside other Linux-based compute nodes. You can create a Windows compute node by creating a Windows machine set to host Windows Server compute machines. You must apply a Windows-specific label to the machine set that specifies a Windows OS image that has the Docker-formatted container runtime add-on enabled. Important Currently, the Docker-formatted container runtime is used in Windows nodes. Kubernetes is deprecating Docker as a container runtime; you can reference the Kubernetes documentation for more information in Docker deprecation . Containerd will be the new supported container runtime for Windows nodes in a future release of Kubernetes. The WMCO watches for machines with the Windows label. After a Windows machine set is detected and its respective machines are provisioned, the WMCO configures the underlying Windows virtual machine (VM) so that it can join the cluster as a compute node. Figure 3.2. Mixed Windows and Linux workloads The WMCO expects a predetermined secret in its namespace containing a private key that is used to interact with the Windows instance. WMCO checks for this secret during boot up time and creates a user data secret which you must reference in the Windows MachineSet object that you created. Then the WMCO populates the user data secret with a public key that corresponds to the private key. With this data in place, the cluster can connect to the Windows VM using an SSH connection. 
After the cluster establishes a connection with the Windows VM, you can manage the Windows node using similar practices as you would a Linux-based node. Note The OpenShift Container Platform web console provides most of the same monitoring capabilities for Windows nodes that are available for Linux nodes. However, the ability to monitor workload graphs for pods running on Windows nodes is not available at this time. Scheduling Windows workloads to a Windows node can be done with typical pod scheduling practices like taints, tolerations, and node selectors; alternatively, you can differentiate your Windows workloads from Linux workloads and other Windows-versioned workloads by using a RuntimeClass object. 3.3. Windows node services The following Windows-specific services are installed on each Windows node: Service Description kubelet Registers the Windows node and manages its status. Container Network Interface (CNI) plugins Exposes networking for Windows nodes. Windows Machine Config Bootstrapper (WMCB) Configures the kubelet and CNI plugins. Windows Exporter Exports Prometheus metrics from Windows nodes Kubernetes Cloud Controller Manager (CCM) Interacts with the underlying Azure cloud platform. hybrid-overlay Creates the OpenShift Container Platform Host Network Service (HNS) . kube-proxy Maintains network rules on nodes allowing outside communication. 3.4. Known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat cost management Red Hat OpenShift Local Windows nodes do not support pulling container images from private registries. You can use images from public registries or pre-pull the images. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Windows nodes are not supported in clusters that use a cluster-wide proxy. This is because the WMCO is not able to route traffic through the proxy connection for the workloads. Windows nodes are not supported in clusters that are in a disconnected environment. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers supports only in-tree storage drivers for all cloud providers. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Pod termination grace periods require the containerd container runtime to be installed on the Windows node. Kubernetes has identified several API compatibility issues .
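To illustrate the RuntimeClass approach mentioned above, the following is a minimal sketch of a RuntimeClass that steers pods to Windows Server 2019 nodes; the class name, taint key, and build label value are illustrative assumptions based on the upstream Kubernetes pattern, not a prescribed configuration.
# Minimal sketch: a RuntimeClass targeting Windows Server 2019 nodes. Names and the taint key are examples.
cat <<'EOF' | oc apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: windows2019
handler: 'docker'
scheduling:
  nodeSelector:
    kubernetes.io/os: 'windows'
    node.kubernetes.io/windows-build: '10.0.17763'
  tolerations:
  - effect: NoSchedule
    key: os
    operator: Equal
    value: "windows"
EOF
# Pods that set runtimeClassName: windows2019 in their spec are then scheduled onto matching Windows nodes.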
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/windows_container_support_for_openshift/understanding-windows-container-workloads
5.5. The Multipath Daemon
5.5. The Multipath Daemon If you find you have trouble implementing a multipath configuration, you should ensure that the multipath daemon is running, as described in Chapter 3, Setting Up DM Multipath . The multipathd daemon must be running in order to use multipathed devices.
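The following is a minimal sketch of checking and, if necessary, starting the daemon on a Red Hat Enterprise Linux 7 system.
# Minimal sketch: verify that multipathd is running and start it if necessary.
systemctl status multipathd
mpathconf --enable --with_multipathd y   # creates /etc/multipath.conf and starts multipathd if it is not running
systemctl enable --now multipathd        # ensure the daemon also starts at boot
multipath -ll                            # list the current multipath devices as a sanity check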
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/multipath_daemon
Appendix A. Additional procedures
Appendix A. Additional procedures A.1. Creating bootable media The P2V Client can be booted from PXE boot, a bootable USB device, or optical media. Scripts for preparing boot options are included with the rhel-6.x-p2v.iso ISO in the LiveOS directory. A.1.1. Create a P2V client boot CD The exact series of steps that produces a CD from an image file varies greatly from computer to computer, depending on the operating system and disc burning software installed. This procedure describes burning an ISO image to disk using Brasero which is included in Red Hat Enterprise Linux 6. Make sure that your disc burning software is capable of burning discs from image files. Although this is true of most disc burning software, exceptions exist. Insert a blank, writable CD into your computer's CD or DVD burner. Open the Applications menu, choose the Sound and Video sub-menu, and click Brasero Disk Burner . Click the Burn Image button. Click the Click here to select a disc image button. Browse to the rhel-6.x-p2v.iso and select it for burning. Click Burn . Your BIOS may need to be changed to allow booting from your DVD/CD-ROM drive. A.1.2. Create a bootable P2V USB media As root, mount the rhel-6.x-p2v.iso : Attach your USB device to the computer. For the livecd-iso-to-disk script to function, the USB filesystem must be formatted vfat, ext[234] or btrfs. From a terminal as root run the livecd-iso-to-disk script: When the script finishes successfully, eject the USB device. A.1.3. Create a PXE boot image As root, mount the rhel-6.x-p2v.iso From a terminal as root run the livecd-iso-to-pxeboot script: When the command successfully completes, there is a tftpboot directory in the directory from which the command was run. Rename the newly created tftpboot directory to a more descriptive name: Copy the p2vboot/ sub-directory to the /tftpboot directory: Set up your DHCP, TFTP and PXE server to serve /tftpboot/p2vboot/pxeboot.0 . Note The initrd image contains the whole CD ISO. You will notice when pxebooting that initrd can take a long time to download. This is normal behavior.
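How you serve /tftpboot/p2vboot/pxeboot.0 depends on your DHCP and TFTP infrastructure. The following is a minimal sketch using dnsmasq as a combined DHCP and TFTP server; the interface name, address range, and paths are illustrative and must be adjusted for your network.
# Minimal sketch: serve the generated PXE boot image with dnsmasq.
cat > /etc/dnsmasq.d/p2vboot.conf <<'EOF'
interface=eth0
dhcp-range=192.168.122.100,192.168.122.200,12h
enable-tftp
tftp-root=/tftpboot
dhcp-boot=p2vboot/pxeboot.0
EOF
service dnsmasq restart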
[ "mkdir /mnt/p2vmount", "mount -o loop rhel-6.x-p2v.iso /mnt/p2vmount", "bash /mnt/p2vmount/LiveOS/livecd-iso-to-disk /PATH/TO/rhel-6.x-p2v.iso /dev/YOURUSBDEVICE", "mkdir /mnt/p2vmount", "mount -o loop rhel-6.x-p2v.iso /mnt/p2vmount", "bash /mnt/p2vboot/LiveOS/livecd-iso-to-pxeboot /PATH/TO/rhel-6.x-p2v.iso", "mv tftpboot/ p2vboot/", "cp -R p2vboot/ /tftpboot/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/Appendix_Additional_Procedures
Chapter 23. Deploying Insights in Red Hat Virtualization Manager
Chapter 23. Deploying Insights in Red Hat Virtualization Manager To deploy Red Hat Insights on an existing Red Hat Enterprise Linux (RHEL) system with Red Hat Virtualization Manager installed, complete these tasks: Register the system to the Red Hat Insights application. Enable data collection from the Red Hat Virtualization environment. Register the system to Red Hat Insights Register the system to communicate with the Red Hat Insights service and to view results displayed in the Red Hat Insights console. Enable data collection from the Red Hat Virtualization environment Modify the /etc/ovirt-engine/rhv-log-collector-analyzer/rhv-log-collector-analyzer.conf file to include the following line: View your Insights results in the Insights Console System and infrastructure results can be viewed in the Insights console . The Overview tab provides a dashboard view of current risks to your infrastructure. From this starting point, you can investigate how a specific rule is affecting your system, or take a system-based approach to view all the rule matches that pose a risk to the system. Select Rule hits by severity to view rules by the Total Risk they pose to your infrastructure ( Critical , Important , Moderate , or Low ). Or Select Rule hits by category to see the type of risk they pose to your infrastructure ( Availability , Stability , Performance , or Security ). Search for a specific rule by name, or scroll through the list of rules to see high-level information about risk, systems exposed, and availability of Ansible Playbook to automate remediation. Click on a rule to see a description of the rule, learn more from relevant knowledge base articles, and view a list of systems that are affected. Click on a system to see specific information about detected issues and steps to resolve the issue.
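Both configuration steps can be performed from a shell on the Manager machine. The following is a minimal sketch; it assumes the insights-client package and the rhv-log-collector-analyzer configuration file are already present.
# Minimal sketch: register with Insights and enable RHV data collection.
insights-client --register
CONF=/etc/ovirt-engine/rhv-log-collector-analyzer/rhv-log-collector-analyzer.conf
grep -q '^upload-json=True' "$CONF" || echo 'upload-json=True' >> "$CONF"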
[ "insights-client --register", "upload-json=True" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-deploying_insights_rhvm
4.7. Security Enhanced Communication Tools
4.7. Security Enhanced Communication Tools As the size and popularity of the Internet has grown, so has the threat of communication interception. Over the years, tools have been developed to encrypt communications as they are transferred over the network. Red Hat Enterprise Linux ships with two basic tools that use high-level, public-key-cryptography-based encryption algorithms to protect information as it travels over the network. OpenSSH - A free implementation of the SSH protocol for encrypting network communication. Gnu Privacy Guard (GPG) - A free implementation of the PGP (Pretty Good Privacy) encryption application for encrypting data. OpenSSH is a safer way to access a remote machine and replaces older, unencrypted services like telnet and rsh . OpenSSH includes a network service called sshd and three command line client applications: ssh - A secure remote console access client. scp - A secure remote copy command. sftp - A secure pseudo-ftp client that allows interactive file transfer sessions. It is highly recommended that any remote communication with Linux systems occur using the SSH protocol. For more information about OpenSSH, refer to the chapter titled OpenSSH in the System Administrators Guide . For more information about the SSH Protocol, refer to the chapter titled SSH Protocol in the Reference Guide . Important Although the sshd service is inherently secure, the service must be kept up-to-date to prevent security threats. Refer to Chapter 3, Security Updates for more information about this issue. GPG is one way to ensure private email communication. It can be used both to email sensitive data over public networks and to protect sensitive data on hard drives.
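For readers new to these tools, a few typical invocations follow; host names, user names, and file names are examples only.
ssh admin@server.example.com                              # open a secure remote console
scp report.txt admin@server.example.com:/tmp/             # copy a file securely to the remote host
sftp admin@server.example.com                             # start an interactive secure file transfer session
gpg --gen-key                                             # create a new GPG key pair
gpg --encrypt --recipient admin@example.com report.txt    # encrypt a file for a specific recipient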
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-wstation-sec-tools
Updating OpenShift Data Foundation
Updating OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.18 Instructions for cluster and storage administrators regarding upgrading Red Hat Storage Documentation Team Abstract This document explains how to update versions of Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.16 and 4.17, or between z-stream updates like 4.16.0 and 4.16.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and it is aligned with OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager . For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> -> OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> -> OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> -> OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused. Important When you update OpenShift Data Foundation in external mode, make sure that the Red Hat Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Hat Ceph Storage version in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Provide the required OpenShift Data Foundation version in the checker to see the supported Red Hat Ceph version corresponding to the version in use. 
You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.14 to 4.15 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure to update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. In OpenShift Data Foundation clusters with disaster recovery (DR) enabled, during upgrade to version 4.18, bluestore-rdr OSDs are migrated to bluestore OSDs. bluestore backed OSDs now provide the same improved performance of bluestore-rdr based OSDs, which is important when the cluster is required to be used for Regional Disaster Recovery. During upgrade you can view the status of the OSD migration. In the OpenShift Web Console, navigate to Storage -> Data Foundation -> Storage System . In the Activity card of the Block and File tab you can view ongoing activities. Migrating cluster OSDs shows the status of the migration from bluestore-rdr to bluestore . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . Chapter 2. OpenShift Data Foundation upgrade channels and releases In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. 
By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. As OpenShift Data Foundation gets deployed as an operator in OpenShift Container Platform, it follows the same strategy to control the pace of upgrades by shipping the fixes in multiple channels. Upgrade channels are tied to a minor version of OpenShift Data Foundation. For example, OpenShift Data Foundation 4.18 upgrade channels recommend upgrades within 4.18. Upgrades to future releases are not recommended. This strategy ensures that administrators can explicitly decide to upgrade to the next minor version of OpenShift Data Foundation. Upgrade channels control only release selection and do not impact the version of the cluster that you install; the odf-operator decides the version of OpenShift Data Foundation to be installed. By default, it always installs the latest OpenShift Data Foundation release maintaining the compatibility with OpenShift Container Platform. So, on OpenShift Container Platform 4.18, OpenShift Data Foundation 4.18 will be the latest version which can be installed. OpenShift Data Foundation upgrades are tied to the OpenShift Container Platform upgrade to ensure that compatibility and interoperability are maintained with the OpenShift Container Platform. For OpenShift Data Foundation 4.18, OpenShift Container Platform 4.18 and 4.19 (when generally available) are supported. OpenShift Container Platform 4.19 is supported to maintain forward compatibility of OpenShift Data Foundation with OpenShift Container Platform. Keep the OpenShift Data Foundation version the same as OpenShift Container Platform in order to get the benefit of all the features and enhancements in that release. Important Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.16 to 4.17 and then to 4.18. You cannot update from OpenShift Container Platform 4.16 to 4.18 directly. For more information, see Preparing to perform an EUS-to-EUS update of the Updating clusters guide in OpenShift Container Platform documentation. OpenShift Data Foundation 4.18 offers the following upgrade channels: stable-4.18 stable-4.17 stable-4.18 channel Once a new version is Generally Available, the stable channel corresponding to the minor version gets updated with the new image which can be used to upgrade. You can use the stable-4.18 channel to upgrade from OpenShift Data Foundation 4.17 and for upgrades within 4.18. stable-4.17 channel You can use the stable-4.17 channel to upgrade from OpenShift Data Foundation 4.16 and for upgrades within 4.17. Chapter 3. Updating Red Hat OpenShift Data Foundation 4.17 to 4.18 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes.
As there is no dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. For more information about RHCS releases, see the knowledgebase solution . Important Upgrading to 4.18 directly from any version older than 4.17 is not supported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.18.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Optional: To reduce the upgrade time for large clusters that are using CSI plugins, make sure to tune the following parameters in the rook-ceph-operator-config configmap to a higher count or percentage. CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE Note By default, the rook-ceph-operator-config configmap is empty and you need to add the data key. This affects the CephFS and CephRBD daemonsets and allows the pods to restart simultaneously or be unavailable, which reduces the upgrade time. For an optimal value, you can set the parameter values to 20%. However, if the value is too high, disruption for new volumes might be observed during the upgrade. Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS) Add another entry in the trust policy for the noobaa-core account as follows: Log in to the AWS web console where the AWS role resides using http://console.aws.amazon.com/ . Enter the IAM management tool and click Roles . Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication using the following command in OpenShift CLI: Search for the role name that you obtained from the previous step in the tool and click on the role name. Under the role summary, click Trust relationships . In the Trusted entities tab, click Edit trust policy on the right. Under the "Action": "sts:AssumeRoleWithWebIdentity" field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint . Add another entry for the core pod's new service account name, system:serviceaccount:openshift-storage:noobaa-core . Click Update policy at the bottom right of the page. The update might take about 5 minutes to get in place. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.18 update channel and Save it.
If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators -> Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and there are five or more failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency are healthy. If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . Chapter 4. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y This chapter helps you to upgrade between z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by the RHCS upgrade, or vice-versa. See the knowledgebase solution for more information about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic .
If the update strategy is set to Manual , then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency are healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval , click the requires approval link. On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency are healthy. If verification steps fail, contact Red Hat Support . Chapter 5. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators -> Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
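The channel selection in Chapter 3, the install plan approval in Chapters 3 and 4, and the approval strategy in this chapter can also be driven from the command line through the operator's Subscription and InstallPlan resources. The sketch below assumes the subscription is named odf-operator; list the subscriptions in the openshift-storage namespace first and substitute the name reported there.

# List the subscriptions in the openshift-storage namespace and note the name
oc get subscriptions -n openshift-storage

# Switch the subscription to the stable-4.18 update channel (subscription name assumed)
oc patch subscription odf-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"stable-4.18"}}'

# Change the update approval strategy to Manual (use "Automatic" to switch back)
oc patch subscription odf-operator -n openshift-storage --type merge --patch '{"spec":{"installPlanApproval":"Manual"}}'

# With Manual approval, find the pending install plan and approve it
oc get installplans -n openshift-storage
oc patch installplan <install-plan-name> -n openshift-storage --type merge --patch '{"spec":{"approved":true}}'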
Chapter 6. Updating the OpenShift Data Foundation external secret Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation. Note Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.17.x to 4.17.y. Prerequisites Update the OpenShift Container Platform cluster to the latest stable release of 4.17.z, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. Red Hat Ceph Storage must have a Ceph dashboard installed and configured. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using one of the following methods, either CSV or ConfigMap. Important Downloading the ceph-external-cluster-details-exporter.py python script using CSV will no longer be supported from version OpenShift Data Foundation 4.19 and onward. Using the ConfigMap will be the only supported method. CSV ConfigMap Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. The updated permissions for the user are set as: Run the previously downloaded python script using one of the following options based on the method you used during deployment, either a configuration file or command-line flags. Configuration file Create a config.ini file that includes all of the parameters used during initial deployment. Run the following command to get the configmap output which contains those parameters: Add the parameters from the output to the config.ini file. You can add additional parameters to the config.ini file to those used during deployment. See Table 6.1, "Mandatory and optional parameters used during upgrade" for descriptions of the parameters. Example config.ini file: Run the python script: Replace <config-file> with the path to the config.ini file. Command-line flags Run the previously downloaded python script and pass the parameters for your deployment. Make sure to use all the flags that you used in the original deployment including any optional argument that you have used. You can also add additional flags to those used during deployment. See Table 6.1, "Mandatory and optional parameters used during upgrade" for descriptions of the parameters. Table 6.1. Mandatory and optional parameters used during upgrade Parameter Description rbd-data-pool-name (Mandatory) Used for providing block storage in OpenShift Data Foundation. rgw-endpoint (Optional) Provide this parameter if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> . 
monitoring-endpoint (Optional) Accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. monitoring-endpoint-port (Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. run-as-user (Mandatory) The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. rgw-pool-prefix (Optional) The prefix of the RGW pools. If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. rgw-skip-tls (Optional) This parameter ignores the TLS certification validation when a self-signed certificate is provided (NOT RECOMMENDED). ceph-conf (Optional) The name of the Ceph configuration file. cluster-name (Optional) The Ceph cluster name. output (Optional) The file where the output is required to be stored. cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. rbd-metadata-ec-pool-name (Optional) The name of the erasure coded RBD metadata pool. dry-run (Optional) This parameter prints the commands that would be executed without running them. Save the JSON output generated after running the script in the previous step. Example output: Upload the generated JSON file. Log in to the OpenShift Web Console. Click Workloads -> Secrets . Set project to openshift-storage . Click rook-ceph-external-cluster-details . Click Actions (...) -> Edit Secret . Click Browse and upload the JSON file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. On the Overview -> Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. If verification steps fail, contact Red Hat Support .
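If you prefer to update the secret from the command line instead of uploading the JSON file through the web console, a minimal sketch is shown below. The output file name output.json and the data key external_cluster_details are assumptions; inspect the existing rook-ceph-external-cluster-details secret first and use the key name it actually contains.

# Inspect the existing secret and confirm its data key (assumed here to be external_cluster_details)
oc get secret rook-ceph-external-cluster-details -n openshift-storage -o jsonpath='{.data}'

# Replace that key with the JSON generated by the exporter script
oc set data secret/rook-ceph-external-cluster-details -n openshift-storage --from-file=external_cluster_details=output.json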
[ "oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1 value: arn:aws:iam::123456789101:role/your-role-name-here", "oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py", "oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py", "python3 ceph-external-cluster-details-exporter.py --upgrade", "client.csi-cephfs-node key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ== caps: [mds] allow rw caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs = client.csi-cephfs-provisioner key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ== caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs metadata=* client.csi-rbd-node key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA== caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd client.csi-rbd-provisioner key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ== caps: [mgr] allow rw caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd", "oc get configmap -namespace openshift-storage external-cluster-user-command --output jsonpath='{.data.args}'", "[Configurations] format = bash cephfs-filesystem-name = <filesystem-name> rbd-data-pool-name = <pool_name>", "python3 ceph-external-cluster-details-exporter.py --config-file <config-file>", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name _<rbd block pool name>_ --monitoring-endpoint _<ceph mgr prometheus exporter endpoint>_ --monitoring-endpoint-port _<ceph mgr prometheus exporter port>_ --rgw-endpoint _<rgw endpoint>_ --run-as-user _<ocs_client_name>_ [optional arguments]", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxxx\", 
\"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/updating_openshift_data_foundation/index
32.2. Configuring the kdump Service
32.2. Configuring the kdump Service There are three common means of configuring the kdump service: at the first boot, using the Kernel Dump Configuration graphical utility, and doing so manually on the command line. Important A limitation in the current implementation of the Intel IOMMU driver can occasionally prevent the kdump service from capturing the core dump image. To use kdump on Intel architectures reliably, it is advised to disable IOMMU support. Warning It is known that the kdump service does not work reliably on certain combinations of HP Smart Array devices and system boards from the same vendor. Consequent to this, users are strongly advised to test the configuration before using it in a production environment, and if necessary, configure kdump to store the kernel crash dump to a remote machine over a network. For more information on how to test the kdump configuration, see Section 32.2.4, "Testing the Configuration" . 32.2.1. Configuring kdump at First Boot When the system boots for the first time, the firstboot application is launched to guide the user through the initial configuration of the freshly installed system. To configure kdump , navigate to the Kdump section and follow the instructions below. Select the Enable kdump? check box to allow the kdump daemon to start at boot time. This will enable the service for runlevels 2 , 3 , 4 , and 5 , and start it for the current session. Similarly, unselecting the check box will disable it for all runlevels and stop the service immediately. Click the up and down arrow buttons next to the Kdump Memory field to increase or decrease the value to configure the amount of memory that is reserved for the kdump kernel. Notice that the Usable System Memory field changes accordingly, showing you the remaining memory that will be available to the system. Important This section is available only if the system has enough memory. To learn about minimum memory requirements of the Red Hat Enterprise Linux 6 system, read the Required minimums section of the Red Hat Enterprise Linux Technology Capabilities and Limits comparison chart. When the kdump crash recovery is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory). Up to a maximum of 896 MB of memory can be reserved if required. This is recommended especially in large environments, for example in systems with a large number of Logical Unit Numbers (LUNs).
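The memory reservation chosen in firstboot ultimately corresponds to a crashkernel= kernel parameter, and the service itself is controlled with the usual init tools. The following is a rough command-line sketch of the equivalent steps on a Red Hat Enterprise Linux 6 system; the 128M value matches the default described above and should be sized up on machines with more physical memory, and a reboot is required before the new reservation takes effect.

# Reserve 128 MB for the crash kernel on every installed kernel entry
grubby --update-kernel=ALL --args="crashkernel=128M"

# Enable kdump for runlevels 2 through 5 and start it for the current session
chkconfig kdump on
service kdump start

# Confirm that the service is operational
service kdump status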
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-kdump-configuration
23.2. Operating System Booting
23.2. Operating System Booting There are a number of different ways to boot virtual machines, including BIOS boot loader, host physical machine boot loader, direct kernel boot, and container boot. 23.2.1. BIOS Boot Loader Booting the BIOS is available for hypervisors supporting full virtualization. In this case, the BIOS has a boot order priority (floppy, hard disk, CD-ROM, network) determining where to locate the boot image. The <os> section of the domain XML contains the following information: ... <os> <type>hvm</type> <boot dev='fd'/> <boot dev='hd'/> <boot dev='cdrom'/> <boot dev='network'/> <bootmenu enable='yes'/> <smbios mode='sysinfo'/> <bios useserial='yes' rebootTimeout='0'/> </os> ... Figure 23.2. BIOS boot loader domain XML Important Instead of using the <boot dev/> configuration for determining boot device order, Red Hat recommends using the <boot order/> configuration. For an example, see Specifying boot order . The components of this section of the domain XML are as follows: Table 23.2. BIOS boot loader elements Element Description <type> Specifies the type of operating system to be booted on the guest virtual machine. hvm indicates that the operating system is designed to run on bare metal and requires full virtualization. linux refers to an operating system that supports the KVM hypervisor guest ABI. There are also two optional attributes: arch specifies the CPU architecture to virtualize, and machine refers to the machine type. For more information, see the libvirt upstream documentation . <boot> Specifies the boot device to consider with one of the following values: fd , hd , cdrom or network . The boot element can be repeated multiple times to set up a priority list of boot devices to try in turn. Multiple devices of the same type are sorted according to their targets while preserving the order of buses. After defining the domain, its XML configuration returned by libvirt lists devices in the sorted order. Once sorted, the first device is marked as bootable. For more information, see the libvirt upstream documentation . <bootmenu> Determines whether or not to enable an interactive boot menu prompt on guest virtual machine start up. The enable attribute can be either yes or no . If not specified, the hypervisor default is used. <smbios> Determines how SMBIOS information is made visible in the guest virtual machine. The mode attribute must be specified, as either emulate (allows the hypervisor to generate all values), host (copies all of Block 0 and Block 1, except for the UUID, from the host physical machine's SMBIOS values; the virConnectGetSysinfo call can be used to see what values are copied), or sysinfo (uses the values in the sysinfo element). If not specified, the hypervisor's default setting is used. <bios> This element has the useserial attribute with possible values yes or no . The attribute enables or disables the Serial Graphics Adapter, which enables users to see BIOS messages on a serial port. Therefore, a serial port needs to be defined. The rebootTimeout attribute controls whether and after how long the guest virtual machine should start booting again in case the boot fails (according to the BIOS). The value is set in milliseconds with a maximum of 65535 ; setting -1 disables the reboot. 23.2.2.
Direct Kernel Boot When installing a new guest virtual machine operating system, it is often useful to boot directly from a kernel and initrd stored in the host physical machine operating system, allowing command-line arguments to be passed directly to the installer. This capability is usually available for both fully virtualized and paravirtualized guest virtual machines. ... <os> <type>hvm</type> <kernel>/root/f8-i386-vmlinuz</kernel> <initrd>/root/f8-i386-initrd</initrd> <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline> <dtb>/root/ppc.dtb</dtb> </os> ... Figure 23.3. Direct kernel boot The components of this section of the domain XML are as follows: Table 23.3. Direct kernel boot elements Element Description <type> Same as described in the BIOS boot section. <kernel> Specifies the fully-qualified path to the kernel image in the host physical machine operating system. <initrd> Specifies the fully-qualified path to the (optional) ramdisk image in the host physical machine operating system. <cmdline> Specifies arguments to be passed to the kernel (or installer) at boot time. This is often used to specify an alternate primary console (such as a serial port), or the installation media source or kickstart file. 23.2.3. Container Boot When booting a domain using container-based virtualization, instead of a kernel or boot image, a path to the init binary is required, using the init element. By default, this will be launched with no arguments. To specify the initial argv , use the initarg element, repeated as many times as required. The cmdline element provides an equivalent to /proc/cmdline but will not affect <initarg> . ... <os> <type arch='x86_64'>exe</type> <init>/bin/systemd</init> <initarg>--unit</initarg> <initarg>emergency.service</initarg> </os> ... Figure 23.4. Container boot
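The <os> settings shown in the figures above live in the domain XML, which is normally inspected and edited with virsh. The commands below are a generic sketch; the domain name guest1 is a placeholder and not taken from this guide.

# Dump the current configuration and review the <os> section
virsh dumpxml guest1 | less

# Open the persistent configuration in an editor to adjust <boot>, <kernel>, <initrd>, or <cmdline>
virsh edit guest1

# Restart the guest so that the new boot configuration takes effect
virsh shutdown guest1
virsh start guest1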
[ "<os> <type>hvm</type> <boot dev='fd'/> <boot dev='hd'/> <boot dev='cdrom'/> <boot dev='network'/> <bootmenu enable='yes'/> <smbios mode='sysinfo'/> <bios useserial='yes' rebootTimeout='0'/> </os>", "<os> <type>hvm</type> <kernel>/root/f8-i386-vmlinuz</kernel> <initrd>/root/f8-i386-initrd</initrd> <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline> <dtb>/root/ppc.dtb</dtb> </os>", "<os> <type arch='x86_64'>exe</type> <init>/bin/systemd</init> <initarg>--unit</initarg> <initarg>emergency.service</initarg> </os>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Operating_system_booting
B.39.2. RHSA-2010:0925 - Important: krb5 security and bug fix update
B.39.2. RHSA-2010:0925 - Important: krb5 security and bug fix update Updated krb5 packages that fix multiple security issues and one bug are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Kerberos is a network authentication system which allows clients and servers to authenticate to each other using symmetric encryption and a trusted third party, the Key Distribution Center (KDC). CVE-2010-1323 , CVE-2010-1324 , CVE-2010-4020 Multiple checksum validation flaws were discovered in the MIT Kerberos implementation. A remote attacker could use these flaws to tamper with certain Kerberos protocol packets and, possibly, bypass authentication or authorization mechanisms and escalate their privileges. Red Hat would like to thank the MIT Kerberos Team for reporting these issues. Bug Fix BZ# 644825 When attempting to perform PKINIT pre-authentication, if the client had more than one possible candidate certificate the client could fail to select the certificate and key to use. This usually occurred if certificate selection was configured to use the value of the keyUsage extension, or if any of the candidate certificates did not contain a subjectAltName extension. Consequently, the client attempted to perform pre-authentication using a different (usually password-based) mechanism. All krb5 users should upgrade to these updated packages, which contain backported patches to correct these issues. After installing the updated packages, the krb5kdc daemon will be restarted automatically.
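The advisory itself does not include commands, but on an affected system the update is applied with yum in the usual way. The following is a generic sketch, not part of the advisory; the exact package set depends on what is installed, and the krb5kdc service check only applies to KDC hosts.

# Check which krb5 packages are currently installed
rpm -qa 'krb5*'

# Apply the updated packages from the enabled repositories
yum update 'krb5*'

# On KDC hosts, the krb5kdc daemon is restarted automatically; verify that it is running
service krb5kdc status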
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2010-0925
Chapter 1. Prerequisites checklist for deploying ROSA using STS
Chapter 1. Prerequisites checklist for deploying ROSA using STS This is a high level checklist of prerequisites needed to create a Red Hat OpenShift Service on AWS (ROSA) (classic architecture) cluster with STS . The machine that you run the installation process from must have access to the following: Amazon Web Services API and authentication service endpoints Red Hat OpenShift API and authentication service endpoints ( api.openshift.com and sso.redhat.com ) Internet connectivity to obtain installation artifacts Important Starting with version 1.2.7 of the ROSA CLI, all OIDC provider endpoint URLs on new clusters use Amazon CloudFront and the oidc.op1.openshiftapps.com domain. This change improves access speed, reduces latency, and improves resiliency for new clusters created with the ROSA CLI 1.2.7 or later. There are no supported migration paths for existing OIDC provider configurations. 1.1. Accounts and permissions Ensure that you have the following accounts, credentials, and permissions. 1.1.1. AWS account Create an AWS account if you do not already have one. Gather the credentials required to log in to your AWS account. Ensure that your AWS account has sufficient permissions to use the ROSA CLI: Least privilege permissions for common ROSA CLI commands Enable ROSA for your AWS account on the AWS console . If your account is the management account for your organization (used for AWS billing purposes), you must have aws-marketplace:Subscribe permissions available on your account. See Service control policy (SCP) prerequisites for more information, or see the AWS documentation for troubleshooting: AWS Organizations service control policy denies required AWS Marketplace permissions . 1.1.2. Red Hat account Create a Red Hat account for the Red Hat Hybrid Cloud Console if you do not already have one. Gather the credentials required to log in to your Red Hat account. 1.2. CLI requirements You need to download and install several CLI (command line interface) tools to be able to deploy a cluster. 1.2.1. AWS CLI ( aws ) Install the AWS Command Line Interface . Log in to your AWS account using the AWS CLI: Sign in through the AWS CLI Verify your account identity: USD aws sts get-caller-identity Check whether the service role for ELB (Elastic Load Balancing) exists: USD aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" If the role does not exist, create it by running the following command: USD aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com" 1.2.2. ROSA CLI ( rosa ) Install the ROSA CLI from the web console . See Installing the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa for detailed instructions. Log in to your Red Hat account by running rosa login and following the instructions in the command output: USD rosa login To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here: Alternatively, you can copy the full USD rosa login --token=abc... command and paste that in the terminal: USD rosa login --token=<abc..> Confirm you are logged in using the correct account and credentials: USD rosa whoami 1.2.3. OpenShift CLI ( oc ) The OpenShift CLI ( oc ) is not required to deploy a Red Hat OpenShift Service on AWS cluster, but is a useful tool for interacting with your cluster after it is deployed. 
Download and install `oc` from the OpenShift Cluster Manager Command-line interface (CLI) tools page, or follow the instructions in Getting started with the OpenShift CLI . Verify that the OpenShift CLI has been installed correctly by running the following command: USD rosa verify openshift-client 1.3. AWS infrastructure prerequisites Optionally, ensure that your AWS account has sufficient quota available to deploy a cluster. USD rosa verify quota This command only checks the total quota allocated to your account; it does not reflect the amount of quota already consumed from that quota. Running this command is optional because your quota is verified during cluster deployment. However, Red Hat recommends running this command to confirm your quota ahead of time so that deployment is not interrupted by issues with quota availability. For more information about resources provisioned during ROSA cluster deployment, see Provisioned AWS Infrastructure . For more information about the required AWS service quotas, see Required AWS service quotas . 1.4. Service Control Policy (SCP) prerequisites ROSA clusters are hosted in an AWS account within an AWS organizational unit. A service control policy (SCP) is created and applied to the AWS organizational unit that manages what services the AWS sub-accounts are permitted to access. Ensure that your organization's SCPs are not more restrictive than the roles and policies required by the cluster. For more information, see the Minimum set of effective permissions for SCPs . When you create a ROSA cluster, an associated AWS OpenID Connect (OIDC) identity provider is created. 1.5. Networking prerequisites Prerequisites needed from a networking standpoint. 1.5.1. Minimum bandwidth During cluster deployment, Red Hat OpenShift Service on AWS requires a minimum bandwidth of 120 Mbps between cluster resources and public internet resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy), the cluster installation process times out and deployment fails. After deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades. 1.5.2. Firewall Configure your firewall to allow access to the domains and ports listed in AWS firewall prerequisites . 1.6. VPC requirements for PrivateLink clusters If you choose to deploy a PrivateLink cluster, then be sure to deploy the cluster in the pre-existing BYO VPC: Create a public and private subnet for each AZ that your cluster uses. Alternatively, implement a transit gateway for internet and egress with appropriate routes. The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address range for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify. Set both enableDnsHostnames and enableDnsSupport to true . That way, the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster internal DNS records. Verify route tables by running: USD aws ec2 describe-route-tables --filters "Name=vpc-id,Values=<vpc-id>" Ensure that the cluster can egress either through a NAT gateway in a public subnet or through a transit gateway. Ensure that any user-defined routing (UDR) you want to use is set up. You can also configure a cluster-wide proxy during or after installation. See Configuring a cluster-wide proxy for more details. Note You can install a non-PrivateLink ROSA cluster in a pre-existing BYO VPC. 1.6.1.
Additional custom security groups During cluster creation, you can add additional custom security groups to a cluster that has an existing non-managed VPC. To do so, complete these prerequisites before you create the cluster: Create the custom security groups in AWS before you create the cluster. Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC. You may need to request additional AWS quota for Security groups per network interface . For more details see the detailed requirements for Security groups . 1.6.2. Custom DNS and domains You can configure a custom domain name server and custom domain name for your cluster. To do so, complete the following prerequisites before you create the cluster: By default, ROSA clusters require you to set the domain name servers option to AmazonProvidedDNS to ensure successful cluster creation and operation. To use a custom DNS server and domain name for your cluster, the ROSA installer must be able to use VPC DNS with default DHCP options so that it can resolve internal IPs and services. This means that you must create a custom DHCP option set to forward DNS lookups to your DNS server, and associate this option set with your VPC before you create the cluster. For more information, see Deploying ROSA with a custom DNS resolver . Confirm that your VPC is using VPC Resolver by running the following command: USD aws ec2 describe-dhcp-options
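As noted above, using a custom DNS server requires a custom DHCP option set that forwards DNS lookups to your server and is associated with the VPC before cluster creation. A possible AWS CLI sketch of that preparation is shown below; the DNS server address, domain name, and resource IDs are placeholders for your environment, not values from this checklist.

# Create a DHCP option set that forwards DNS lookups to your DNS server
aws ec2 create-dhcp-options --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10" "Key=domain-name,Values=example.com"

# Associate the option set returned above with the VPC used for the cluster
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Confirm the association
aws ec2 describe-dhcp-options --dhcp-options-ids dopt-0123456789abcdef0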
[ "aws sts get-caller-identity", "aws iam get-role --role-name \"AWSServiceRoleForElasticLoadBalancing\"", "aws iam create-service-linked-role --aws-service-name \"elasticloadbalancing.amazonaws.com\"", "rosa login To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here:", "rosa login --token=<abc..>", "rosa whoami", "rosa verify openshift-client", "rosa verify quota", "---- USD aws ec2 describe-route-tables --filters \"Name=vpc-id,Values=<vpc-id>\" ----", "aws ec2 describe-dhcp-options" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/prepare_your_environment/prerequisites-checklist-for-deploying-rosa-using-sts
9.3. libvirt NUMA Tuning
9.3. libvirt NUMA Tuning Generally, best performance on NUMA systems is achieved by limiting guest size to the amount of resources on a single NUMA node. Avoid unnecessarily splitting resources across NUMA nodes. Use the numastat tool to view per-NUMA-node memory statistics for processes and the operating system. In the following example, the numastat tool shows four virtual machines with suboptimal memory alignment across NUMA nodes: Run numad to align the guests' CPUs and memory resources automatically. Then run numastat -c qemu-kvm again to view the results of running numad . The following output shows that resources have been aligned: Note Running numastat with -c provides compact output; adding the -m option adds system-wide memory information on a per-node basis to the output. See the numastat man page for more information. 9.3.1. Monitoring Memory per host NUMA Node You can use the nodestats.py script to report the total memory and free memory for each NUMA node on a host. This script also reports how much memory is strictly bound to certain host nodes for each running domain. For example: This example shows four host NUMA nodes, each containing approximately 4GB of RAM in total ( MemTotal ). Nearly all memory is consumed on each node ( MemFree ). There are four domains (virtual machines) running: domain 'rhel7-0' has 1.5GB memory which is not pinned onto any specific host NUMA node. Domain 'rhel7-2' however, has 4GB memory and 4 NUMA nodes which are pinned 1:1 to host nodes. To print host NUMA node statistics, create a nodestats.py script for your environment. An example script can be found in the libvirt-python package files at /usr/share/doc/libvirt-python- version /examples/nodestats.py . The specific path to the script can be displayed by using the rpm -ql libvirt-python command. 9.3.2. NUMA vCPU Pinning vCPU pinning provides similar advantages to task pinning on bare metal systems. Since vCPUs run as user-space tasks on the host operating system, pinning increases cache efficiency. One example of this is an environment where all vCPU threads are running on the same physical socket, therefore sharing an L3 cache domain. Note In Red Hat Enterprise Linux versions 7.0 to 7.2, it is only possible to pin active vCPUs. However, with Red Hat Enterprise Linux 7.3, pinning inactive vCPUs is available as well. Combining vCPU pinning with numatune can avoid NUMA misses. The performance impacts of NUMA misses are significant, generally starting at a 10% performance hit or higher. vCPU pinning and numatune should be configured together. If the virtual machine is performing storage or network I/O tasks, it can be beneficial to pin all vCPUs and memory to the same physical socket that is physically connected to the I/O adapter. Note The lstopo tool can be used to visualize NUMA topology. It can also help verify that vCPUs are binding to cores on the same physical socket. See the following Knowledgebase article for more information on lstopo : https://access.redhat.com/site/solutions/62879 . Important Pinning causes increased complexity where there are many more vCPUs than physical cores. The following example XML configuration has a domain process pinned to physical CPUs 0-7. The vCPU thread is pinned to its own cpuset.
For example, vCPU0 is pinned to physical CPU 0, vCPU1 is pinned to physical CPU 1, and so on: <vcpu cpuset='0-7'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='5'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> </cputune> There is a direct relationship between the vcpu and vcpupin tags. If a vcpupin option is not specified, the value will be automatically determined and inherited from the parent vcpu tag option. The following configuration shows <vcpupin> for vcpu 5 missing. Hence, vCPU5 would be pinned to physical CPUs 0-7, as specified in the parent tag <vcpu> : <vcpu cpuset='0-7'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> </cputune> Important <vcpupin> , <numatune> , and <emulatorpin> should be configured together to achieve optimal, deterministic performance. For more information on the <numatune> tag, see Section 9.3.3, "Domain Processes" . For more information on the <emulatorpin> tag, see Section 9.3.6, "Using emulatorpin" . 9.3.3. Domain Processes As provided in Red Hat Enterprise Linux, libvirt uses libnuma to set memory binding policies for domain processes. The nodeset for these policies can be configured either as static (specified in the domain XML) or auto (configured by querying numad). See the following XML configuration for examples on how to configure these inside the <numatune> tag: <numatune> <memory mode='strict' placement=' auto '/> </numatune> <numatune> <memory mode='strict' nodeset=' 0,2-3 '/> </numatune> libvirt uses sched_setaffinity(2) to set CPU binding policies for domain processes. The cpuset option can either be static (specified in the domain XML) or auto (configured by querying numad). See the following XML configuration for examples on how to configure these inside the <vcpu> tag: <vcpu placement=' auto '>8</vcpu> <vcpu placement=' static ' cpuset='0-10,^5'>8</vcpu> There are implicit inheritance rules between the placement mode you use for <vcpu> and <numatune> : The placement mode for <numatune> defaults to the same placement mode of <vcpu> , or to static if a <nodeset> is specified. Similarly, the placement mode for <vcpu> defaults to the same placement mode of <numatune> , or to static if <cpuset> is specified. This means that CPU tuning and memory tuning for domain processes can be specified and defined separately, but they can also be configured to be dependent on the other's placement mode. It is also possible to configure your system with numad to boot a selected number of vCPUs without pinning all vCPUs at startup. For example, to enable only 8 vCPUs at boot on a system with 32 vCPUs, configure the XML similar to the following: <vcpu placement=' auto ' current='8'>32</vcpu> Note See the following URLs for more information on vcpu and numatune: http://libvirt.org/formatdomain.html#elementsCPUAllocation and http://libvirt.org/formatdomain.html#elementsNUMATuning 9.3.4. Domain vCPU Threads In addition to tuning domain processes, libvirt also permits the setting of the pinning policy for each vcpu thread in the XML configuration. 
Set the pinning policy for each vcpu thread inside the <cputune> tags: <cputune> <vcpupin vcpu="0" cpuset="1-4,^2"/> <vcpupin vcpu="1" cpuset="0,1"/> <vcpupin vcpu="2" cpuset="2,3"/> <vcpupin vcpu="3" cpuset="0,4"/> </cputune> In this tag, libvirt uses either cgroup or sched_setaffinity(2) to pin the vcpu thread to the specified cpuset. Note For more details on <cputune> , see the following URL: http://libvirt.org/formatdomain.html#elementsCPUTuning In addition, if you need to set up a virtual machine with more vCPUs than a single NUMA node can provide, configure the host so that the guest detects a NUMA topology on the host. This allows for 1:1 mapping of CPUs, memory, and NUMA nodes. For example, this can be applied with a guest with 4 vCPUs and 6 GB memory, and a host with the following NUMA settings: In this scenario, use the following Domain XML setting: <cputune> <vcpupin vcpu="0" cpuset="1"/> <vcpupin vcpu="1" cpuset="5"/> <vcpupin vcpu="2" cpuset="2"/> <vcpupin vcpu="3" cpuset="6"/> </cputune> <numatune> <memory mode="strict" nodeset="1-2"/> </numatune> <cpu> <numa> <cell id="0" cpus="0-1" memory="3" unit="GiB"/> <cell id="1" cpus="2-3" memory="3" unit="GiB"/> </numa> </cpu> 9.3.5. Using Cache Allocation Technology to Improve Performance You can make use of Cache Allocation Technology (CAT) provided by the kernel on specific CPU models. This enables allocation of part of the host CPU's cache for vCPU threads, which improves real-time performance. See the following XML configuration for an example of how to configure vCPU cache allocation inside the cachetune tag: <domain> <cputune> <cachetune vcpus='0-1'> <cache id='0' level='3' type='code' size='3' unit='MiB'/> <cache id='0' level='3' type='data' size='3' unit='MiB'/> </cachetune> </cputune> </domain> The XML file above configures the thread for vCPUs 0 and 1 to have 3 MiB from the first L3 cache (level='3' id='0') allocated, once for the L3CODE and once for L3DATA. Note A single virtual machine can have multiple <cachetune> elements. For more information see cachetune in the upstream libvirt documentation . 9.3.6. Using emulatorpin Another way of tuning the domain process pinning policy is to use the <emulatorpin> tag inside of <cputune> . The <emulatorpin> tag specifies which host physical CPUs the emulator (a subset of a domain, not including vCPUs) will be pinned to. The <emulatorpin> tag provides a method of setting a precise affinity to emulator thread processes. As a result, vhost threads run on the same subset of physical CPUs and memory, and therefore benefit from cache locality. For example: <cputune> <emulatorpin cpuset="1-3"/> </cputune> Note In Red Hat Enterprise Linux 7, automatic NUMA balancing is enabled by default. Automatic NUMA balancing reduces the need for manually tuning <emulatorpin> , since the vhost-net emulator thread follows the vCPU tasks more reliably. For more information about automatic NUMA balancing, see Section 9.2, "Automatic NUMA Balancing" . 9.3.7. Tuning vCPU Pinning with virsh Important These are example commands only. You will need to substitute values according to your environment. The following example virsh command will pin the vcpu thread with an ID of 1 in the rhel7 guest to physical CPU 2: You can also obtain the current vcpu pinning configuration with the virsh command. For example: 9.3.8. Tuning Domain Process CPU Pinning with virsh Important These are example commands only. You will need to substitute values according to your environment.
The emulatorpin option applies CPU affinity settings to threads that are associated with each domain process. For complete pinning, you must use both virsh vcpupin (as shown previously) and virsh emulatorpin for each guest. For example: 9.3.9. Tuning Domain Process Memory Policy with virsh Domain process memory can be dynamically tuned. See the following example command: More examples of these commands can be found in the virsh man page. 9.3.10. Guest NUMA Topology Guest NUMA topology can be specified using the <numa> tag inside the <cpu> tag in the guest virtual machine's XML. See the following example, and replace values accordingly: <cpu> ... <numa> <cell cpus='0-3' memory='512000'/> <cell cpus='4-7' memory='512000'/> </numa> ... </cpu> Each <cell> element specifies a NUMA cell or a NUMA node. cpus specifies the CPU or range of CPUs that are part of the node, and memory specifies the node memory in kibibytes (blocks of 1024 bytes). Each cell or node is assigned a cellid or nodeid in increasing order starting from 0. Important When modifying the NUMA topology of a guest virtual machine with a configured topology of CPU sockets, cores, and threads, make sure that cores and threads belonging to a single socket are assigned to the same NUMA node. If threads or cores from the same socket are assigned to different NUMA nodes, the guest may fail to boot. Warning Using guest NUMA topology simultaneously with huge pages is not supported on Red Hat Enterprise Linux 7 and is only available in layered products such as Red Hat Virtualization or Red Hat OpenStack Platform . 9.3.11. NUMA Node Locality for PCI Devices When starting a new virtual machine, it is important to know both the host NUMA topology and the PCI device affiliation to NUMA nodes, so that when PCI passthrough is requested, the guest is pinned onto the correct NUMA nodes for optimal memory performance. For example, if a guest is pinned to NUMA nodes 0-1, but one of its PCI devices is affiliated with node 2, data transfer between nodes will take some time. In Red Hat Enterprise Linux 7.1 and above, libvirt reports the NUMA node locality for PCI devices in the guest XML, enabling management applications to make better performance decisions. This information is visible in the sysfs files in /sys/devices/pci*/*/numa_node . 
One way to verify these settings is to use the lstopo tool to report sysfs data: # lstopo-no-graphics Machine (126GB) NUMANode L#0 (P#0 63GB) Socket L#0 + L3 L#0 (20MB) L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0) L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#2) L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#4) L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#6) L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#8) L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#10) L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#12) L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#14) HostBridge L#0 PCIBridge PCI 8086:1521 Net L#0 "em1" PCI 8086:1521 Net L#1 "em2" PCI 8086:1521 Net L#2 "em3" PCI 8086:1521 Net L#3 "em4" PCIBridge PCI 1000:005b Block L#4 "sda" Block L#5 "sdb" Block L#6 "sdc" Block L#7 "sdd" PCIBridge PCI 8086:154d Net L#8 "p3p1" PCI 8086:154d Net L#9 "p3p2" PCIBridge PCIBridge PCIBridge PCIBridge PCI 102b:0534 GPU L#10 "card0" GPU L#11 "controlD64" PCI 8086:1d02 NUMANode L#1 (P#1 63GB) Socket L#1 + L3 L#1 (20MB) L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#1) L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#3) L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#5) L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#7) L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#9) L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#11) L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#13) L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15) HostBridge L#8 PCIBridge PCI 1924:0903 Net L#12 "p1p1" PCI 1924:0903 Net L#13 "p1p2" PCIBridge PCI 15b3:1003 Net L#14 "ib0" Net L#15 "ib1" OpenFabrics L#16 "mlx4_0" This output shows: NICs em* and disks sd* are connected to NUMA node 0 and cores 0,2,4,6,8,10,12,14. NICs p1* and ib* are connected to NUMA node 1 and cores 1,3,5,7,9,11,13,15.
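In addition to lstopo, the per-device NUMA affiliation described in Section 9.3.11 can be read directly from the sysfs files mentioned above. A small shell sketch:

# Print the NUMA node reported for every PCI device on the host
# (a value of -1 means the device is not tied to a specific node)
for dev in /sys/bus/pci/devices/*; do
    printf '%s: NUMA node %s\n' "$(basename "$dev")" "$(cat "$dev"/numa_node)"
done

# Check a single device, for example the network interface em1 from the output above
cat /sys/class/net/em1/device/numa_node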
[ "numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51722 (qemu-kvm) 68 16 357 6936 2 3 147 598 8128 51747 (qemu-kvm) 245 11 5 18 5172 2532 1 92 8076 53736 (qemu-kvm) 62 432 1661 506 4851 136 22 445 8116 53773 (qemu-kvm) 1393 3 1 2 12 0 0 6702 8114 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 1769 463 2024 7462 10037 2672 169 7837 32434", "numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51747 (qemu-kvm) 0 0 7 0 8072 0 1 0 8080 53736 (qemu-kvm) 0 0 7 0 0 0 8113 0 8120 53773 (qemu-kvm) 0 0 7 0 0 0 1 8110 8118 59065 (qemu-kvm) 0 0 8050 0 0 0 0 0 8051 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 0 0 8072 0 8072 0 8114 8110 32368", "/usr/share/doc/libvirt-python-2.0.0/examples/nodestats.py NUMA stats NUMA nodes: 0 1 2 3 MemTotal: 3950 3967 3937 3943 MemFree: 66 56 42 41 Domain 'rhel7-0': Overall memory: 1536 MiB Domain 'rhel7-1': Overall memory: 2048 MiB Domain 'rhel6': Overall memory: 1024 MiB nodes 0-1 Node 0: 1024 MiB nodes 0-1 Domain 'rhel7-2': Overall memory: 4096 MiB nodes 0-3 Node 0: 1024 MiB nodes 0 Node 1: 1024 MiB nodes 1 Node 2: 1024 MiB nodes 2 Node 3: 1024 MiB nodes 3", "<vcpu cpuset='0-7'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='5'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> </cputune>", "<vcpu cpuset='0-7'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> </cputune>", "<numatune> <memory mode='strict' placement=' auto '/> </numatune>", "<numatune> <memory mode='strict' nodeset=' 0,2-3 '/> </numatune>", "<vcpu placement=' auto '>8</vcpu>", "<vcpu placement=' static ' cpuset='0-10,^5'>8</vcpu>", "<vcpu placement=' auto ' current='8'>32</vcpu>", "<cputune> <vcpupin vcpu=\"0\" cpuset=\"1-4,^2\"/> <vcpupin vcpu=\"1\" cpuset=\"0,1\"/> <vcpupin vcpu=\"2\" cpuset=\"2,3\"/> <vcpupin vcpu=\"3\" cpuset=\"0,4\"/> </cputune>", "4 available nodes (0-3) Node 0: CPUs 0 4, size 4000 MiB Node 1: CPUs 1 5, size 3999 MiB Node 2: CPUs 2 6, size 4001 MiB Node 3: CPUs 0 4, size 4005 MiB", "<cputune> <vcpupin vcpu=\"0\" cpuset=\"1\"/> <vcpupin vcpu=\"1\" cpuset=\"5\"/> <vcpupin vcpu=\"2\" cpuset=\"2\"/> <vcpupin vcpu=\"3\" cpuset=\"6\"/> </cputune> <numatune> <memory mode=\"strict\" nodeset=\"1-2\"/> </numatune> <cpu> <numa> <cell id=\"0\" cpus=\"0-1\" memory=\"3\" unit=\"GiB\"/> <cell id=\"1\" cpus=\"2-3\" memory=\"3\" unit=\"GiB\"/> </numa> </cpu>", "<domain> <cputune> <cachetune vcpus='0-1'> <cache id='0' level='3' type='code' size='3' unit='MiB'/> <cache id='0' level='3' type='data' size='3' unit='MiB'/> </cachetune> </cputune> </domain>", "<cputune> <emulatorpin cpuset=\"1-3\"/> </cputune>", "% virsh vcpupin rhel7 1 2", "% virsh vcpupin rhel7", "% virsh emulatorpin rhel7 3-4", "% virsh numatune rhel7 --nodeset 0-10", "<cpu> <numa> <cell cpus='0-3' memory='512000'/> <cell cpus='4-7' memory='512000'/> </numa> </cpu>", "lstopo-no-graphics Machine (126GB) 
NUMANode L#0 (P#0 63GB) Socket L#0 + L3 L#0 (20MB) L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0) L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#2) L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#4) L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#6) L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#8) L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#10) L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#12) L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#14) HostBridge L#0 PCIBridge PCI 8086:1521 Net L#0 \"em1\" PCI 8086:1521 Net L#1 \"em2\" PCI 8086:1521 Net L#2 \"em3\" PCI 8086:1521 Net L#3 \"em4\" PCIBridge PCI 1000:005b Block L#4 \"sda\" Block L#5 \"sdb\" Block L#6 \"sdc\" Block L#7 \"sdd\" PCIBridge PCI 8086:154d Net L#8 \"p3p1\" PCI 8086:154d Net L#9 \"p3p2\" PCIBridge PCIBridge PCIBridge PCIBridge PCI 102b:0534 GPU L#10 \"card0\" GPU L#11 \"controlD64\" PCI 8086:1d02 NUMANode L#1 (P#1 63GB) Socket L#1 + L3 L#1 (20MB) L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#1) L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#3) L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#5) L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#7) L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#9) L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#11) L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#13) L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15) HostBridge L#8 PCIBridge PCI 1924:0903 Net L#12 \"p1p1\" PCI 1924:0903 Net L#13 \"p1p2\" PCIBridge PCI 15b3:1003 Net L#14 \"ib0\" Net L#15 \"ib1\" OpenFabrics L#16 \"mlx4_0\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-numa-numa_and_libvirt
Chapter 4. Portable build changes
Chapter 4. Portable build changes 4.1. Portable Linux builds of OpenJDK The portable Linux builds of OpenJDK now support FIPS mode, which is also available on the RHEL OpenJDK builds. If your system is running in FIPS mode, you must install NSS for the portable Linux builds. 4.2. Portable Windows builds of OpenJDK The portable Windows builds of OpenJDK also support FIPS mode. You do not need to install NSS for the portable Windows builds if your system is running in FIPS mode.
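As a sketch, before running a portable Linux build on a host in FIPS mode, you can confirm the mode and install NSS (dnf shown; use yum on older releases):
# cat /proc/sys/crypto/fips_enabled
1
# dnf install -y nss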
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.13/portable-build-changes
Preface
Preface As a cluster administrator, you can configure either automatic or manual upgrade of the OpenShift AI Operator.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/upgrading_openshift_ai_self-managed_in_a_disconnected_environment/pr01
4.8. Virtualization
4.8. Virtualization Performance monitoring in KVM guests, BZ# 645365 KVM can now virtualize a performance monitoring unit (vPMU) to allow virtual machines to use performance monitoring. Note that the -cpu flag must be set when using this feature. With this feature, Red Hat virtualization customers running Red Hat Enterprise Linux 6 guests can use the CPU's PMU counter while using the performance tool for profiling. The virtual performance monitoring unit feature allows virtual machine users to identify sources of performance problems in their guests, thereby improving the ability to profile a KVM guest from the host. This feature is a Technology Preview in Red Hat Enterprise Linux 6.4. Package: kernel-2.6.32-358 Dynamic virtual CPU allocation KVM now supports dynamic virtual CPU allocation, also called vCPU hot plug, to dynamically manage capacity and react to unexpected load increases on their platforms during off-peak hours. The virtual CPU hot-plugging feature gives system administrators the ability to dynamically adjust CPU resources in a guest. Because a guest no longer has to be taken offline to adjust the CPU resources, the availability of the guest is increased. This feature is a Technology Preview in Red Hat Enterprise Linux 6.4. Currently, only the vCPU hot-add functionality works. The vCPU hot-unplug feature is not yet implemented. Package: qemu-kvm-0.12.1.2-2.355 System monitoring via SNMP, BZ# 642556 This feature provides KVM support for stable technology that is already used in data centers with bare-metal systems. SNMP is the standard for monitoring and is extremely well understood as well as computationally efficient. System monitoring via SNMP in Red Hat Enterprise Linux 6 allows KVM hosts to send SNMP traps on events so that hypervisor events can be communicated to the user via the standard SNMP protocol. This feature is provided through the addition of a new package: libvirt-snmp . This feature is a Technology Preview. Package: libvirt-snmp-0.0.2-3 Wire speed requirement in KVM network drivers Virtualization and cloud products that run networking workloads need to run at wire speed. Up until Red Hat Enterprise Linux 6.1, the only way to reach wire speed on a 10 Gb Ethernet NIC with lower CPU utilization was to use PCI device assignment (passthrough), which limits other features like memory overcommit and guest migration. The macvtap / vhost zero-copy capabilities allow the user to use those features when high performance is required. This feature improves performance for any Red Hat Enterprise Linux 6.x guest in the VEPA use case. This feature is introduced as a Technology Preview. Package: qemu-kvm-0.12.1.2-2.355
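A sketch of the vCPU hot-add preview, assuming a running guest named guest1 that was defined with a maximum vCPU count above its current count; for the vPMU feature, the guest would also need to be started with a CPU model that exposes the host PMU, for example -cpu host on the qemu-kvm command line:
# virsh vcpucount guest1
# virsh setvcpus guest1 4 --live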
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/virtualization_tp
2.3. Authentication Sessions
2.3. Authentication Sessions The API also provides authentication session support. An API user sends an initial request with authentication details, then sends all subsequent requests using a session cookie to authenticate. The following procedure demonstrates how to use an authenticated session. Procedure 2.3. Requesting an authenticated session Send a request with the Authorization and Prefer: persistent-auth headers. This returns a response with the following header: Note the JSESSIONID= value. In this example the value is JSESSIONID=5dQja5ubr4yvI2MM2z+LZxrK . Send all subsequent requests with the Prefer: persistent-auth and cookie headers with the JSESSIONID= value. The Authorization header is no longer needed when using an authenticated session. When the session is no longer required, perform a request to the server without the Prefer: persistent-auth header.
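A minimal sketch of the same flow using curl; the host, user name, and password are placeholders, and the cookie value is taken from the example above:
$ curl -k -I -u 'admin@internal:password' -H 'Prefer: persistent-auth' 'https://rhevm.example.com/ovirt-engine/api'
$ curl -k -I -H 'Prefer: persistent-auth' -H 'Cookie: JSESSIONID=5dQja5ubr4yvI2MM2z+LZxrK' 'https://rhevm.example.com/ovirt-engine/api'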
[ "HEAD [base] HTTP/1.1 Host: [host] Authorization: Basic cmhldm1hZG1pbkBibGFjay5xdW1yYW5ldC5jb206MTIzNDU2 Prefer: persistent-auth HTTP/1.1 200 OK", "Set-Cookie: JSESSIONID=5dQja5ubr4yvI2MM2z+LZxrK; Path=/ovirt-engine/api; Secure", "HEAD [base] HTTP/1.1 Host: [host] Prefer: persistent-auth cookie: JSESSIONID=5dQja5ubr4yvI2MM2z+LZxrK HTTP/1.1 200 OK", "HEAD [base] HTTP/1.1 Host: [host] Authorization: Basic cmhldm1hZG1pbkBibGFjay5xdW1yYW5ldC5jb206MTIzNDU2 HTTP/1.1 200 OK" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/authentication_sessions
E.2.2. GRUB and the Boot Process on UEFI-based x86 Systems
E.2.2. GRUB and the Boot Process on UEFI-based x86 Systems This section describes the specific role GRUB plays when booting a UEFI-based x86 system. For a look at the overall boot process, refer to Section F.2, "A Detailed Look at the Boot Process" . GRUB loads itself into memory in the following stages: The UEFI-based platform reads the partition table on the system storage and mounts the EFI System Partition (ESP), a VFAT partition labeled with a particular globally unique identifier (GUID). The ESP contains EFI applications such as bootloaders and utility software, stored in directories specific to software vendors. Viewed from within the Red Hat Enterprise Linux 6.9 file system, the ESP is /boot/efi/ , and EFI software provided by Red Hat is stored in /boot/efi/EFI/redhat/ . The /boot/efi/EFI/redhat/ directory contains grub.efi , a version of GRUB compiled for the EFI firmware architecture as an EFI application. In the simplest case, the EFI boot manager selects grub.efi as the default bootloader and reads it into memory. If the ESP contains other EFI applications, the EFI boot manager might prompt you to select an application to run, rather than load grub.efi automatically. GRUB determines which operating system or kernel to start, loads it into memory, and transfers control of the machine to that operating system. Because each vendor maintains its own directory of applications in the ESP, chain loading is not normally necessary on UEFI-based systems. The EFI boot manager can load any of the operating system bootloaders that are present in the ESP.
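As a sketch, on a running UEFI system you can inspect the vendor directories in the ESP and list the firmware boot entries that the EFI boot manager chooses from (efibootmgr is provided by the efibootmgr package):
# ls /boot/efi/EFI/redhat/
# efibootmgr -v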
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-grub-whatis-booting-uefi
19.3.2. Saving encryption keys
19.3.2. Saving encryption keys After completing the required preparation (see Section 19.3.1, "Preparation for saving encryption keys" ) it is now possible to save the encryption keys using the following procedure. Note For all examples in this file, /path/to/volume is a LUKS device, not the plaintext device contained within; blkid -s type /path/to/volume should report type ="crypto_LUKS" . Procedure 19.4. Saving encryption keys Run: Save the generated escrow-packet file in the prepared storage, associating it with the system and the volume. These steps can be performed manually, or scripted as part of system installation.
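A sketch combining the check from the note with the save step; the paths and certificate are the placeholders used throughout this procedure:
# blkid -s TYPE /path/to/volume
/path/to/volume: TYPE="crypto_LUKS"
# volume_key --save /path/to/volume -c /path/to/cert escrow-packet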
[ "volume_key --save /path/to/volume -c /path/to/cert escrow-packet" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/volume_key-organization-saving
function::is_return
function::is_return Name function::is_return - Whether the current probe context is a return probe Synopsis Arguments None Description Returns 1 if the current probe context is a return probe, returns 0 otherwise.
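A sketch showing the typical use: one handler attached to both the entry and the return variant of a probe point, branching on is_return(). The probe point is chosen only for illustration:
# stap -e 'probe kernel.function("vfs_read"), kernel.function("vfs_read").return {
    if (is_return())
      printf("returning from vfs_read\n")
    else
      printf("entering vfs_read\n")
    exit()
  }'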
[ "is_return:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-is-return
Chapter 7. Dynamic plugins
Chapter 7. Dynamic plugins 7.1. Overview of dynamic plugins 7.1.1. About dynamic plugins A dynamic plugin allows you to add custom pages and other extensions to your interface at runtime. The ConsolePlugin custom resource registers plugins with the console, and a cluster administrator enables plugins in the console-operator configuration. 7.1.2. Key features A dynamic plugin allows you to make the following customizations to the OpenShift Container Platform experience: Add custom pages. Add perspectives beyond administrator and developer. Add navigation items. Add tabs and actions to resource pages. 7.1.3. General guidelines When creating your plugin, follow these general guidelines: Node.js and yarn are required to build and run your plugin. Prefix your CSS class names with your plugin name to avoid collisions. For example, my-plugin__heading and my-plugin__icon . Maintain a consistent look, feel, and behavior with other console pages. Follow react-i18next localization guidelines when creating your plugin. You can use the useTranslation hook like the one in the following example: const Header: React.FC = () => { const { t } = useTranslation('plugin__console-demo-plugin'); return <h1>{t('Hello, World!')}</h1>; }; Avoid selectors that could affect markup outside of your plugin's components, such as element selectors. These are not APIs and are subject to change. Using them might break your plugin. PatternFly guidelines When creating your plugin, follow these guidelines for using PatternFly: Use PatternFly components and PatternFly CSS variables. Core PatternFly components are available through the SDK. Using PatternFly components and variables helps your plugin look consistent in future console versions. Make your plugin accessible by following PatternFly's accessibility fundamentals . Avoid using other CSS libraries such as Bootstrap or Tailwind. They can conflict with PatternFly and will not match the console look and feel. 7.2. Getting started with dynamic plugins To get started using the dynamic plugin, you must set up your environment to write a new OpenShift Container Platform dynamic plugin. For an example of how to write a new plugin, see Adding a tab to the pods page . 7.2.1. Dynamic plugin development You can run the plugin using a local development environment. The OpenShift Container Platform web console runs in a container connected to the cluster you have logged into. Prerequisites You must have an OpenShift cluster running. You must have the OpenShift CLI ( oc ) installed. You must have yarn installed. You must have Docker v3.2.0 or newer or Podman installed and running. Procedure In your terminal, run the following command to install the dependencies for your plugin using yarn. USD yarn install After installing, run the following command to start yarn. USD yarn run start In another terminal window, log in to the OpenShift Container Platform through the CLI. USD oc login Run the OpenShift Container Platform web console in a container connected to the cluster you have logged into by running the following command: USD yarn run start-console Verification Visit localhost:9000 to view the running plugin. Inspect the value of window.SERVER_FLAGS.consolePlugins to see the list of plugins which load at runtime. 7.3. Deploy your plugin on a cluster You can deploy the plugin to an OpenShift Container Platform cluster.
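Before building and pushing an image, it can help to see which plugins the cluster already knows about. A minimal sketch, assuming you are logged in with cluster-admin privileges; the second command reads the list of enabled plugins from the console operator configuration:
$ oc get consoleplugins
$ oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'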
7.3.1. Build an image with Docker To deploy your plugin on a cluster, you need to build an image and push it to an image registry. Procedure Build the image with the following command: USD docker build -t quay.io/my-repository/my-plugin:latest . Optional: If you want to test your image, run the following command: USD docker run -it --rm -d -p 9001:80 quay.io/my-repository/my-plugin:latest Push the image by running the following command: USD docker push quay.io/my-repository/my-plugin:latest 7.3.2. Deploy your plugin on a cluster After pushing an image with your changes to a registry, you can deploy the plugin to a cluster. Procedure To deploy your plugin to a cluster, install a Helm chart with the name of the plugin as the Helm release name into a new namespace or an existing namespace as specified by the -n command-line option. Provide the location of the image within the plugin.image parameter by using the following command: USD helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location Where: -n <my-plugin-namespace> Specifies an existing namespace to deploy your plugin into. --create-namespace Optional: If deploying to a new namespace, use this parameter. --set plugin.image=my-plugin-image-location Specifies the location of the image within the plugin.image parameter. Optional: You can specify any additional parameters by using the set of supported parameters in the charts/openshift-console-plugin/values.yaml file. plugin: name: "" description: "" image: "" imagePullPolicy: IfNotPresent replicas: 2 port: 9443 securityContext: enabled: true podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi basePath: / certificateSecretName: "" serviceAccount: create: true annotations: {} name: "" patcherServiceAccount: create: true annotations: {} name: "" jobs: patchConsoles: enabled: true image: "registry.redhat.io/openshift4/ose-tools-rhel8@sha256:e44074f21e0cca6464e50cb6ff934747e0bd11162ea01d522433a1a1ae116103" podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi Verification View the list of enabled plugins by navigating from Administration Cluster Settings Configuration Console operator.openshift.io Console plugins or by visiting the Overview page. Note It can take a few minutes for the new plugin configuration to appear. If you do not see your plugin, you might need to refresh your browser if the plugin was recently enabled. If you receive any errors at runtime, check the JS console in browser developer tools to look for any errors in your plugin code. 7.3.3. Disabling your plugin in the browser Console users can use the disable-plugins query parameter to disable specific or all dynamic plugins that would normally get loaded at run-time. Procedure To disable a specific plugin(s), remove the plugin you want to disable from the comma-separated list of plugin names. To disable all plugins, leave an empty string in the disable-plugins query parameter. Note Cluster administrators can disable plugins in the Cluster Settings page of the web console. 7.3.4. Additional resources Understanding Helm 7.4.
Dynamic plugin example Before working through the example, verify that the plugin is working by following the steps in Dynamic plugin development 7.4.1. Adding a tab to the pods page There are different customizations you can make to the OpenShift Container Platform web console. The following procedure adds a tab to the Pod details page as an example extension to your plugin. Note The OpenShift Container Platform web console runs in a container connected to the cluster you have logged into. See "Dynamic plugin development" for information to test the plugin before creating your own. Procedure Visit the console-plugin-template repository containing a template for creating plugins in a new tab. Important Custom plugin code is not supported by Red Hat. Only Cooperative community support is available for your plugin. Create a GitHub repository for the template by clicking Use this template Create new repository . Rename the new repository with the name of your plugin. Clone the new repository to your local machine so you can edit the code. Edit the package.json file, adding your plugin's metadata to the consolePlugin declaration. For example: "consolePlugin": { "name": "my-plugin", 1 "version": "0.0.1", 2 "displayName": "My Plugin", 3 "description": "Enjoy this shiny, new console plugin!", 4 "exposedModules": { "ExamplePage": "./components/ExamplePage" }, "dependencies": { "@console/pluginAPI": "/*" } } 1 Update the name of your plugin. 2 Update the version. 3 Update the display name for your plugin. 4 Update the description with a synopsis about your plugin. Add the following to the console-extensions.json file: { "type": "console.tab/horizontalNav", "properties": { "page": { "name": "Example Tab", "href": "example" }, "model": { "group": "core", "version": "v1", "kind": "Pod" }, "component": { "USDcodeRef": "ExampleTab" } } } Edit the package.json file to include the following changes: "exposedModules": { "ExamplePage": "./components/ExamplePage", "ExampleTab": "./components/ExampleTab" } Write a message to display on a new custom tab on the Pods page by creating a new file src/components/ExampleTab.tsx and adding the following script: import * as React from 'react'; export default function ExampleTab() { return ( <p>This is a custom tab added to a resource using a dynamic plugin.</p> ); } Install a Helm chart with the name of the plugin as the Helm release name into a new namespace or an existing namespace as specified by the -n command-line option to deploy your plugin on a cluster. Provide the location of the image within the plugin.image parameter by using the following command: USD helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location Note For more information on deploying your plugin on a cluster, see "Deploy your plugin on a cluster". Verification Visit a Pod page to view the added tab. 7.5. Dynamic plugin reference You can add extensions that allow you to customize your plugin. Those extensions are then loaded to the console at run-time. 7.5.1. Dynamic plugin extension types console.action/filter ActionFilter can be used to filter an action. Name Value Type Optional Description contextId string no The context ID helps to narrow the scope of contributed actions to a particular area of the application. Examples include topology and helm . filter CodeRef<(scope: any, action: Action) ⇒ boolean> no A function that will filter actions based on some conditions. 
scope : The scope in which actions should be provided for. A hook might be required if you want to remove the ModifyCount action from a deployment with a horizontal pod autoscaler (HPA). console.action/group ActionGroup contributes an action group that can also be a submenu. Name Value Type Optional Description id string no ID used to identify the action section. label string yes The label to display in the UI. Required for submenus. submenu boolean yes Whether this group should be displayed as submenu. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. The insertBefore value takes precedence. console.action/provider ActionProvider contributes a hook that returns list of actions for specific context. Name Value Type Optional Description contextId string no The context ID helps to narrow the scope of contributed actions to a particular area of the application. Examples include topology and helm . provider CodeRef<ExtensionHook<Action[], any>> no A React hook that returns actions for the given scope. If contextId = resource , then the scope will always be a Kubernetes resource object. console.action/resource-provider ResourceActionProvider contributes a hook that returns list of actions for specific resource model. Name Value Type Optional Description model ExtensionK8sKindVersionModel no The model for which this provider provides actions for. provider CodeRef<ExtensionHook<Action[], any>> no A react hook which returns actions for the given resource model console.alert-action This extension can be used to trigger a specific action when a specific Prometheus alert is observed by the Console based on its rule.name value. Name Value Type Optional Description alert string no Alert name as defined by alert.rule.name property text string no action CodeRef<(alert: any) ⇒ void> no Function to perform side effect console.catalog/item-filter This extension can be used for plugins to contribute a handler that can filter specific catalog items. For example, the plugin can contribute a filter that filters helm charts from specific provider. Name Value Type Optional Description catalogId string | string[] no The unique identifier for the catalog this provider contributes to. type string no Type ID for the catalog item type. filter CodeRef<(item: CatalogItem) ⇒ boolean> no Filters items of a specific type. Value is a function that takes CatalogItem[] and returns a subset based on the filter criteria. console.catalog/item-metadata This extension can be used to contribute a provider that adds extra metadata to specific catalog items. Name Value Type Optional Description catalogId string | string[] no The unique identifier for the catalog this provider contributes to. type string no Type ID for the catalog item type. provider CodeRef<ExtensionHook<CatalogItemMetadataProviderFunction, CatalogExtensionHookOptions>> no A hook which returns a function that will be used to provide metadata to catalog items of a specific type. console.catalog/item-provider This extension allows plugins to contribute a provider for a catalog item type. For example, a Helm Plugin can add a provider that fetches all the Helm Charts. This extension can also be used by other plugins to add more items to a specific catalog item type. 
Name Value Type Optional Description catalogId string | string[] no The unique identifier for the catalog this provider contributes to. type string no Type ID for the catalog item type. title string no Title for the catalog item provider provider CodeRef<ExtensionHook<CatalogItem<any>[], CatalogExtensionHookOptions>> no Fetch items and normalize it for the catalog. Value is a react effect hook. priority number yes Priority for this provider. Defaults to 0 . Higher priority providers may override catalog items provided by other providers. console.catalog/item-type This extension allows plugins to contribute a new type of catalog item. For example, a Helm plugin can define a new catalog item type as HelmCharts that it wants to contribute to the Developer Catalog. Name Value Type Optional Description type string no Type for the catalog item. title string no Title for the catalog item. catalogDescription string | CodeRef<React.ReactNode> yes Description for the type specific catalog. typeDescription string yes Description for the catalog item type. filters CatalogItemAttribute[] yes Custom filters specific to the catalog item. groupings CatalogItemAttribute[] yes Custom groupings specific to the catalog item. console.catalog/item-type-metadata This extension allows plugins to contribute extra metadata like custom filters or groupings for any catalog item type. For example, a plugin can attach a custom filter for HelmCharts that can filter based on chart provider. Name Value Type Optional Description type string no Type for the catalog item. filters CatalogItemAttribute[] yes Custom filters specific to the catalog item. groupings CatalogItemAttribute[] yes Custom groupings specific to the catalog item. console.cluster-overview/inventory-item Adds a new inventory item into cluster overview page. Name Value Type Optional Description component CodeRef<React.ComponentType<{}>> no The component to be rendered. console.cluster-overview/multiline-utilization-item Adds a new cluster overview multi-line utilization item. Name Value Type Optional Description title string no The title of the utilization item. getUtilizationQueries CodeRef<GetMultilineQueries> no Prometheus utilization query. humanize CodeRef<Humanize> no Convert Prometheus data to human-readable form. TopConsumerPopovers CodeRef<React.ComponentType<TopConsumerPopoverProps>[]> yes Shows Top consumer popover instead of plain value. console.cluster-overview/utilization-item Adds a new cluster overview utilization item. Name Value Type Optional Description title string no The title of the utilization item. getUtilizationQuery CodeRef<GetQuery> no Prometheus utilization query. humanize CodeRef<Humanize> no Convert Prometheus data to human-readable form. getTotalQuery CodeRef<GetQuery> yes Prometheus total query. getRequestQuery CodeRef<GetQuery> yes Prometheus request query. getLimitQuery CodeRef<GetQuery> yes Prometheus limit query. TopConsumerPopover CodeRef<React.ComponentType<TopConsumerPopoverProps>> yes Shows Top consumer popover instead of plain value. console.context-provider Adds a new React context provider to the web console application root. Name Value Type Optional Description provider CodeRef<Provider<T>> no Context Provider component. useValueHook CodeRef<() ⇒ T> no Hook for the Context value. console.dashboards/card Adds a new dashboard card. Name Value Type Optional Description tab string no The ID of the dashboard tab to which the card will be added. 
position 'LEFT' | 'RIGHT' | 'MAIN' no The grid position of the card on the dashboard. component CodeRef<React.ComponentType<{}>> no Dashboard card component. span OverviewCardSpan yes Card's vertical span in the column. Ignored for small screens; defaults to 12 . console.dashboards/custom/overview/detail/item Adds an item to the Details card of Overview Dashboard. Name Value Type Optional Description title string no Details card title component CodeRef<React.ComponentType<{}>> no The value, rendered by the OverviewDetailItem component valueClassName string yes Value for a className isLoading CodeRef<() ⇒ boolean> yes Function returning the loading state of the component error CodeRef<() ⇒ string> yes Function returning errors to be displayed by the component console.dashboards/overview/activity/resource Adds an activity to the Activity Card of Overview Dashboard where the triggering of activity is based on watching a Kubernetes resource. Name Value Type Optional Description k8sResource CodeRef<FirehoseResource & { isList: true; }> no The utilization item to be replaced. component CodeRef<React.ComponentType<K8sActivityProps<T>>> no The action component. isActivity CodeRef<(resource: T) ⇒ boolean> yes Function which determines if the given resource represents the action. If not defined, every resource represents activity. getTimestamp CodeRef<(resource: T) ⇒ Date> yes Time stamp for the given action, which will be used for ordering. console.dashboards/overview/health/operator Adds a health subsystem to the status card of the Overview dashboard, where the source of status is a Kubernetes REST API. Name Value Type Optional Description title string no Title of Operators section in the pop-up menu. resources CodeRef<FirehoseResource[]> no Kubernetes resources which will be fetched and passed to healthHandler . getOperatorsWithStatuses CodeRef<GetOperatorsWithStatuses<T>> yes Resolves status for the Operators. operatorRowLoader CodeRef<React.ComponentType<OperatorRowProps<T>>> yes Loader for pop-up row component. viewAllLink string yes Links to all resources page. If not provided, then a list page of the first resource from resources prop is used. console.dashboards/overview/health/prometheus Adds a health subsystem to the status card of Overview dashboard where the source of status is Prometheus. Name Value Type Optional Description title string no The display name of the subsystem. queries string[] no The Prometheus queries. healthHandler CodeRef<PrometheusHealthHandler> no Resolve the subsystem's health. additionalResource CodeRef<FirehoseResource> yes Additional resource which will be fetched and passed to healthHandler . popupComponent CodeRef<React.ComponentType<PrometheusHealthPopupProps>> yes Loader for pop-up menu content. If defined, a health item is represented as a link, which opens a pop-up menu with the given content. popupTitle string yes The title of the popover. disallowedControlPlaneTopology string[] yes Control plane topology for which the subsystem should be hidden. console.dashboards/overview/health/resource Adds a health subsystem to the status card of Overview dashboard where the source of status is a Kubernetes Resource. Name Value Type Optional Description title string no The display name of the subsystem. resources CodeRef<WatchK8sResources<T>> no Kubernetes resources that will be fetched and passed to healthHandler . healthHandler CodeRef<ResourceHealthHandler<T>> no Resolve the subsystem's health. 
popupComponent CodeRef<WatchK8sResults<T>> yes Loader for pop-up menu content. If defined, a health item is represented as a link, which opens a pop-up menu with the given content. popupTitle string yes The title of the popover. console.dashboards/overview/health/url Adds a health subsystem to the status card of Overview dashboard where the source of status is a Kubernetes REST API. Name Value Type Optional Description title string no The display name of the subsystem. url string no The URL to fetch data from. It will be prefixed with base Kubernetes URL. healthHandler CodeRef<URLHealthHandler<T, K8sResourceCommon | K8sResourceCommon[]>> no Resolve the subsystem's health. additionalResource CodeRef<FirehoseResource> yes Additional resource which will be fetched and passed to healthHandler . popupComponent CodeRef<React.ComponentType<{ healthResult?: T; healthResultError?: any; k8sResult?: FirehoseResult<R>; }>> yes Loader for popup content. If defined, a health item will be represented as a link which opens popup with given content. popupTitle string yes The title of the popover. console.dashboards/overview/inventory/item Adds a resource tile to the overview inventory card. Name Value Type Optional Description model CodeRef<T> no The model for resource which will be fetched. Used to get the model's label or abbr . mapper CodeRef<StatusGroupMapper<T, R>> yes Function which maps various statuses to groups. additionalResources CodeRef<WatchK8sResources<R>> yes Additional resources which will be fetched and passed to the mapper function. console.dashboards/overview/inventory/item/group Adds an inventory status group. Name Value Type Optional Description id string no The ID of the status group. icon CodeRef<React.ReactElement<any, string | React.JSXElementConstructor<any>>> no React component representing the status group icon. console.dashboards/overview/inventory/item/replacement Replaces an overview inventory card. Name Value Type Optional Description model CodeRef<T> no The model for resource which will be fetched. Used to get the model's label or abbr . mapper CodeRef<StatusGroupMapper<T, R>> yes Function which maps various statuses to groups. additionalResources CodeRef<WatchK8sResources<R>> yes Additional resources which will be fetched and passed to the mapper function. console.dashboards/overview/prometheus/activity/resource Adds an activity to the Activity Card of Prometheus Overview Dashboard where the triggering of activity is based on watching a Kubernetes resource. Name Value Type Optional Description queries string[] no Queries to watch. component CodeRef<React.ComponentType<PrometheusActivityProps>> no The action component. isActivity CodeRef<(results: PrometheusResponse[]) ⇒ boolean> yes Function which determines if the given resource represents the action. If not defined, every resource represents activity. console.dashboards/project/overview/item Adds a resource tile to the project overview inventory card. Name Value Type Optional Description model CodeRef<T> no The model for resource which will be fetched. Used to get the model's label or abbr . mapper CodeRef<StatusGroupMapper<T, R>> yes Function which maps various statuses to groups. additionalResources CodeRef<WatchK8sResources<R>> yes Additional resources which will be fetched and passed to the mapper function. console.dashboards/tab Adds a new dashboard tab, placed after the Overview tab. Name Value Type Optional Description id string no A unique tab identifier, used as tab link href and when adding cards to this tab. 
navSection 'home' | 'storage' no Navigation section to which the tab belongs to. title string no The title of the tab. console.file-upload This extension can be used to provide a handler for the file drop action on specific file extensions. Name Value Type Optional Description fileExtensions string[] no Supported file extensions. handler CodeRef<FileUploadHandler> no Function which handles the file drop action. console.flag Gives full control over the web console feature flags. Name Value Type Optional Description handler CodeRef<FeatureFlagHandler> no Used to set or unset arbitrary feature flags. console.flag/hookProvider Gives full control over the web console feature flags with hook handlers. Name Value Type Optional Description handler CodeRef<FeatureFlagHandler> no Used to set or unset arbitrary feature flags. console.flag/model Adds a new web console feature flag driven by the presence of a CustomResourceDefinition (CRD) object on the cluster. Name Value Type Optional Description flag string no The name of the flag to set after the CRD is detected. model ExtensionK8sModel no The model which refers to a CRD. console.global-config This extension identifies a resource used to manage the configuration of the cluster. A link to the resource will be added to the Administration Cluster Settings Configuration page. Name Value Type Optional Description id string no Unique identifier for the cluster config resource instance. name string no The name of the cluster config resource instance. model ExtensionK8sModel no The model which refers to a cluster config resource. namespace string no The namespace of the cluster config resource instance. console.model-metadata Customize the display of models by overriding values retrieved and generated through API discovery. Name Value Type Optional Description model ExtensionK8sGroupModel no The model to customize. May specify only a group, or optional version and kind. badge ModelBadge yes Whether to consider this model reference as Technology Preview or Developer Preview. color string yes The color to associate to this model. label string yes Override the label. Requires kind be provided. labelPlural string yes Override the plural label. Requires kind be provided. abbr string yes Customize the abbreviation. Defaults to all uppercase characters in kind , up to 4 characters long. Requires that kind is provided. console.navigation/href This extension can be used to contribute a navigation item that points to a specific link in the UI. Name Value Type Optional Description id string no A unique identifier for this item. name string no The name of this item. href string no The link href value. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. section string yes Navigation section to which this item belongs to. If not specified, render this item as a top level link. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. startsWith string[] yes Mark this item as active when the URL starts with one of these paths. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. namespaced boolean yes If true , adds /ns/active-namespace to the end. 
prefixNamespaced boolean yes If true , adds /k8s/ns/active-namespace to the beginning. console.navigation/resource-cluster This extension can be used to contribute a navigation item that points to a cluster resource details page. The K8s model of that resource can be used to define the navigation item. Name Value Type Optional Description id string no A unique identifier for this item. model ExtensionK8sModel no The model for which this navigation item links to. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. section string yes Navigation section to which this item belongs to. If not specified, render this item as a top-level link. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. startsWith string[] yes Mark this item as active when the URL starts with one of these paths. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. name string yes Overrides the default name. If not supplied the name of the link will equal the plural value of the model. console.navigation/resource-ns This extension can be used to contribute a navigation item that points to a namespaced resource details page. The K8s model of that resource can be used to define the navigation item. Name Value Type Optional Description id string no A unique identifier for this item. model ExtensionK8sModel no The model for which this navigation item links to. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. section string yes Navigation section to which this item belongs to. If not specified, render this item as a top-level link. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. startsWith string[] yes Mark this item as active when the URL starts with one of these paths. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. name string yes Overrides the default name. If not supplied the name of the link will equal the plural value of the model. console.navigation/section This extension can be used to define a new section of navigation items in the navigation tab. Name Value Type Optional Description id string no A unique identifier for this item. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. name string yes Name of this section. If not supplied, only a separator will be shown above the section. console.navigation/separator This extension can be used to add a separator between navigation items in the navigation. 
Name Value Type Optional Description id string no A unique identifier for this item. perspective string yes The perspective ID to which this item belongs to. If not specified, contributes to the default perspective. section string yes Navigation section to which this item belongs to. If not specified, render this item as a top level link. dataAttributes { [key: string]: string; } yes Adds data attributes to the DOM. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. insertBefore takes precedence. console.page/resource/details Name Value Type Optional Description model ExtensionK8sGroupKindModel no The model for which this resource page links to. component CodeRef<React.ComponentType<{ match: match<{}>; namespace: string; model: ExtensionK8sModel; }>> no The component to be rendered when the route matches. console.page/resource/list Adds new resource list page to Console router. Name Value Type Optional Description model ExtensionK8sGroupKindModel no The model for which this resource page links to. component CodeRef<React.ComponentType<{ match: match<{}>; namespace: string; model: ExtensionK8sModel; }>> no The component to be rendered when the route matches. console.page/route Adds a new page to the web console router. See React Router . Name Value Type Optional Description component CodeRef<React.ComponentType<RouteComponentProps<{}, StaticContext, any>>> no The component to be rendered when the route matches. path string | string[] no Valid URL path or array of paths that path-to-regexp@^1.7.0 understands. perspective string yes The perspective to which this page belongs to. If not specified, contributes to all perspectives. exact boolean yes When true, will only match if the path matches the location.pathname exactly. console.page/route/standalone Adds a new standalone page, rendered outside the common page layout, to the web console router. See React Router . Name Value Type Optional Description component CodeRef<React.ComponentType<RouteComponentProps<{}, StaticContext, any>>> no The component to be rendered when the route matches. path string | string[] no Valid URL path or array of paths that path-to-regexp@^1.7.0 understands. exact boolean yes When true, will only match if the path matches the location.pathname exactly. console.perspective This extension contributes a new perspective to the console, which enables customization of the navigation menu. Name Value Type Optional Description id string no The perspective identifier. name string no The perspective display name. icon CodeRef<LazyComponent> no The perspective display icon. landingPageURL CodeRef<(flags: { [key: string]: boolean; }, isFirstVisit: boolean) ⇒ string> no The function to get perspective landing page URL. importRedirectURL CodeRef<(namespace: string) ⇒ string> no The function to get redirect URL for import flow. default boolean yes Whether the perspective is the default. There can only be one default. defaultPins ExtensionK8sModel[] yes Default pinned resources on the nav usePerspectiveDetection CodeRef<() ⇒ [boolean, boolean]> yes The hook to detect default perspective console.project-overview/inventory-item Adds a new inventory item into the Project Overview page. Name Value Type Optional Description component CodeRef<React.ComponentType<{ projectName: string; }>> no The component to be rendered. 
console.project-overview/utilization-item Adds a new project overview utilization item. Name Value Type Optional Description title string no The title of the utilization item. getUtilizationQuery CodeRef<GetProjectQuery> no Prometheus utilization query. humanize CodeRef<Humanize> no Convert Prometheus data to human-readable form. getTotalQuery CodeRef<GetProjectQuery> yes Prometheus total query. getRequestQuery CodeRef<GetProjectQuery> yes Prometheus request query. getLimitQuery CodeRef<GetProjectQuery> yes Prometheus limit query. TopConsumerPopover CodeRef<React.ComponentType<TopConsumerPopoverProps>> yes Shows the top consumer popover instead of plain value. console.pvc/alert This extension can be used to contribute custom alerts on the PVC details page. Name Value Type Optional Description alert CodeRef<React.ComponentType<{ pvc: K8sResourceCommon; }>> no The alert component. console.pvc/create-prop This extension can be used to specify additional properties that will be used when creating PVC resources on the PVC list page. Name Value Type Optional Description label string no Label for the create prop action. path string no Path for the create prop action. console.pvc/delete This extension allows hooking into deleting PVC resources. It can provide an alert with additional information and custom PVC delete logic. Name Value Type Optional Description predicate CodeRef<(pvc: K8sResourceCommon) ⇒ boolean> no Predicate that tells whether to use the extension or not. onPVCKill CodeRef<(pvc: K8sResourceCommon) ⇒ Promise<void>> no Method for the PVC delete operation. alert CodeRef<React.ComponentType<{ pvc: K8sResourceCommon; }>> no Alert component to show additional information. console.pvc/status Name Value Type Optional Description priority number no Priority for the status component. A larger value means higher priority. status CodeRef<React.ComponentType<{ pvc: K8sResourceCommon; }>> no The status component. predicate CodeRef<(pvc: K8sResourceCommon) ⇒ boolean> no Predicate that tells whether to render the status component or not. console.redux-reducer Adds new reducer to Console Redux store which operates on plugins.<scope> substate. Name Value Type Optional Description scope string no The key to represent the reducer-managed substate within the Redux state object. reducer CodeRef<Reducer<any, AnyAction>> no The reducer function, operating on the reducer-managed substate. console.resource/create This extension allows plugins to provide a custom component (i.e., wizard or form) for specific resources, which will be rendered, when users try to create a new resource instance. Name Value Type Optional Description model ExtensionK8sModel no The model for which this create resource page will be rendered component CodeRef<React.ComponentType<CreateResourceComponentProps>> no The component to be rendered when the model matches console.storage-class/provisioner Adds a new storage class provisioner as an option during storage class creation. Name Value Type Optional Description CSI ProvisionerDetails yes Container Storage Interface provisioner type OTHERS ProvisionerDetails yes Other provisioner type console.storage-provider This extension can be used to contribute a new storage provider to select, when attaching storage and a provider specific component. Name Value Type Optional Description name string no Displayed name of the provider. Component CodeRef<React.ComponentType<Partial<RouteComponentProps<{}, StaticContext, any>>>> no Provider specific component to render. 
console.tab Adds a tab to a horizontal nav matching the contextId . Name Value Type Optional Description contextId string no Context ID assigned to the horizontal nav in which the tab will be injected. Possible values: dev-console-observe name string no The display label of the tab href string no The href appended to the existing URL component CodeRef<React.ComponentType<PageComponentProps<K8sResourceCommon>>> no Tab content component. console.tab/horizontalNav This extension can be used to add a tab on the resource details page. Name Value Type Optional Description model ExtensionK8sKindVersionModel no The model for which this provider show tab. page { name: string; href: string; } no The page to be show in horizontal tab. It takes tab name as name and href of the tab component CodeRef<React.ComponentType<PageComponentProps<K8sResourceCommon>>> no The component to be rendered when the route matches. console.telemetry/listener This component can be used to register a listener function receiving telemetry events. These events include user identification, page navigation, and other application specific events. The listener may use this data for reporting and analytics purposes. Name Value Type Optional Description listener CodeRef<TelemetryEventListener> no Listen for telemetry events console.topology/adapter/build BuildAdapter contributes an adapter to adapt element to data that can be used by the Build component. Name Value Type Optional Description adapt CodeRef<(element: GraphElement) ⇒ AdapterDataType<BuildConfigData> | undefined> no Adapter to adapt element to data that can be used by Build component. console.topology/adapter/network NetworkAdapater contributes an adapter to adapt element to data that can be used by the Networking component. Name Value Type Optional Description adapt CodeRef<(element: GraphElement) ⇒ NetworkAdapterType | undefined> no Adapter to adapt element to data that can be used by Networking component. console.topology/adapter/pod PodAdapter contributes an adapter to adapt element to data that can be used by the Pod component. Name Value Type Optional Description adapt CodeRef<(element: GraphElement) ⇒ AdapterDataType<PodsAdapterDataType> | undefined> no Adapter to adapt element to data that can be used by Pod component. console.topology/component/factory Getter for a ViewComponentFactory . Name Value Type Optional Description getFactory CodeRef<ViewComponentFactory> no Getter for a ViewComponentFactory . console.topology/create/connector Getter for the create connector function. Name Value Type Optional Description getCreateConnector CodeRef<CreateConnectionGetter> no Getter for the create connector function. console.topology/data/factory Topology Data Model Factory Extension Name Value Type Optional Description id string no Unique ID for the factory. priority number no Priority for the factory resources WatchK8sResourcesGeneric yes Resources to be fetched from useK8sWatchResources hook. workloadKeys string[] yes Keys in resources containing workloads. getDataModel CodeRef<TopologyDataModelGetter> yes Getter for the data model factory. isResourceDepicted CodeRef<TopologyDataModelDepicted> yes Getter for function to determine if a resource is depicted by this model factory. getDataModelReconciler CodeRef<TopologyDataModelReconciler> yes Getter for function to reconcile data model after all extensions' models have loaded. 
console.topology/decorator/provider Topology Decorator Provider Extension Name Value Type Optional Description id string no ID for topology decorator specific to the extension priority number no Priority for topology decorator specific to the extension quadrant TopologyQuadrant no Quadrant for topology decorator specific to the extension decorator CodeRef<TopologyDecoratorGetter> no Decorator specific to the extension console.topology/details/resource-alert DetailsResourceAlert contributes an alert for specific topology context or graph element. Name Value Type Optional Description id string no The ID of this alert. Used to save state if the alert should not be shown after dismissed. contentProvider CodeRef<(element: GraphElement) ⇒ DetailsResourceAlertContent | null> no Hook to return the contents of the alert. console.topology/details/resource-link DetailsResourceLink contributes a link for specific topology context or graph element. Name Value Type Optional Description link CodeRef<(element: GraphElement) ⇒ React.Component | undefined> no Return the resource link if provided, otherwise undefined. Use the ResourceIcon and ResourceLink properties for styles. priority number yes A higher priority factory will get the first chance to create the link. console.topology/details/tab DetailsTab contributes a tab for the topology details panel. Name Value Type Optional Description id string no A unique identifier for this details tab. label string no The tab label to display in the UI. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. The insertBefore value takes precedence. console.topology/details/tab-section DetailsTabSection contributes a section for a specific tab in the topology details panel. Name Value Type Optional Description id string no A unique identifier for this details tab section. tab string no The parent tab ID that this section should contribute to. provider CodeRef<DetailsTabSectionExtensionHook> no A hook that returns a component, or if null or undefined, renders in the topology sidebar. SDK component: <Section title=\{}>... padded area section CodeRef<(element: GraphElement, renderNull?: () ⇒ null) ⇒ React.Component | undefined> no Deprecated: Fallback if no provider is defined. renderNull is a no-op already. insertBefore string | string[] yes Insert this item before the item referenced here. For arrays, the first one found in order is used. insertAfter string | string[] yes Insert this item after the item referenced here. For arrays, the first one found in order is used. The insertBefore value takes precedence. 
console.topology/display/filters Topology Display Filters Extension Name Value Type Optional Description getTopologyFilters CodeRef<() ⇒ TopologyDisplayOption[]> no Getter for topology filters specific to the extension applyDisplayOptions CodeRef<TopologyApplyDisplayOptions> no Function to apply filters to the model console.topology/relationship/provider Topology relationship provider connector extension Name Value Type Optional Description provides CodeRef<RelationshipProviderProvides> no Use to determine if a connection can be created between the source and target node tooltip string no Tooltip to show when connector operation is hovering over the drop target, for example, "Create a Visual Connector" create CodeRef<RelationshipProviderCreate> no Callback to execute when connector is drop over target node to create a connection priority number no Priority for relationship, higher will be preferred in case of multiple console.user-preference/group This extension can be used to add a group on the console user-preferences page. It will appear as a vertical tab option on the console user-preferences page. Name Value Type Optional Description id string no ID used to identify the user preference group. label string no The label of the user preference group insertBefore string yes ID of user preference group before which this group should be placed insertAfter string yes ID of user preference group after which this group should be placed console.user-preference/item This extension can be used to add an item to the user preferences group on the console user preferences page. Name Value Type Optional Description id string no ID used to identify the user preference item and referenced in insertAfter and insertBefore to define the item order label string no The label of the user preference description string no The description of the user preference field UserPreferenceField no The input field options used to render the values to set the user preference groupId string yes IDs used to identify the user preference groups the item would belong to insertBefore string yes ID of user preference item before which this item should be placed insertAfter string yes ID of user preference item after which this item should be placed console.yaml-template YAML templates for editing resources via the yaml editor. Name Value Type Optional Description model ExtensionK8sModel no Model associated with the template. template CodeRef<string> no The YAML template. name string no The name of the template. Use the name default to mark this as the default template. dev-console.add/action This extension allows plugins to contribute an add action item to the add page of developer perspective. For example, a Serverless plugin can add a new action item for adding serverless functions to the add page of developer console. Name Value Type Optional Description id string no ID used to identify the action. label string no The label of the action. description string no The description of the action. href string no The href to navigate to. groupId string yes IDs used to identify the action groups the action would belong to. icon CodeRef<React.ReactNode> yes The perspective display icon. accessReview AccessReviewResourceAttributes[] yes Optional access review to control the visibility or enablement of the action. dev-console.add/action-group This extension allows plugins to contibute a group in the add page of developer console. 
Groups can be referenced by actions, which will be grouped together in the add action page based on their extension definition. For example, a Serverless plugin can contribute a Serverless group and together with multiple add actions. Name Value Type Optional Description id string no ID used to identify the action group name string no The title of the action group insertBefore string yes ID of action group before which this group should be placed insertAfter string yes ID of action group after which this group should be placed dev-console.import/environment This extension can be used to specify extra build environment variable fields under the builder image selector in the developer console git import form. When set, the fields will override environment variables of the same name in the build section. Name Value Type Optional Description imageStreamName string no Name of the image stream to provide custom environment variables for imageStreamTags string[] no List of supported image stream tags environments ImageEnvironment[] no List of environment variables console.dashboards/overview/detail/item Deprecated. use CustomOverviewDetailItem type instead Name Value Type Optional Description component CodeRef<React.ComponentType<{}>> no The value, based on the DetailItem component console.page/resource/tab Deprecated. Use console.tab/horizontalNav instead. Adds a new resource tab page to Console router. Name Value Type Optional Description model ExtensionK8sGroupKindModel no The model for which this resource page links to. component CodeRef<React.ComponentType<RouteComponentProps<{}, StaticContext, any>>> no The component to be rendered when the route matches. name string no The name of the tab. href string yes The optional href for the tab link. If not provided, the first path is used. exact boolean yes When true, will only match if the path matches the location.pathname exactly. 7.5.2. OpenShift Container Platform console API useActivePerspective Hook that provides the currently active perspective and a callback for setting the active perspective. It returns a tuple containing the current active perspective and setter callback. Example const Component: React.FC = (props) => { const [activePerspective, setActivePerspective] = useActivePerspective(); return <select value={activePerspective} onChange={(e) => setActivePerspective(e.target.value)} > { // ...perspective options } </select> } GreenCheckCircleIcon Component for displaying a green check mark circle icon. Example <GreenCheckCircleIcon title="Healthy" /> Parameter Name Description className (optional) additional class name for the component title (optional) icon title size (optional) icon size: ( sm , md , lg , xl ) RedExclamationCircleIcon Component for displaying a red exclamation mark circle icon. Example <RedExclamationCircleIcon title="Failed" /> Parameter Name Description className (optional) additional class name for the component title (optional) icon title size (optional) icon size: ( sm , md , lg , xl ) YellowExclamationTriangleIcon Component for displaying a yellow triangle exclamation icon. Example <YellowExclamationTriangleIcon title="Warning" /> Parameter Name Description className (optional) additional class name for the component title (optional) icon title size (optional) icon size: ( sm , md , lg , xl ) BlueInfoCircleIcon Component for displaying a blue info circle icon. 
Example <BlueInfoCircleIcon title="Info" /> Parameter Name Description className (optional) additional class name for the component title (optional) icon title size (optional) icon size: ('sm', 'md', 'lg', 'xl') ErrorStatus Component for displaying an error status popover. Example <ErrorStatus title={errorMsg} /> Parameter Name Description title (optional) status text iconOnly (optional) if true, only displays icon noTooltip (optional) if true, tooltip won't be displayed className (optional) additional class name for the component popoverTitle (optional) title for popover InfoStatus Component for displaying an information status popover. Example <InfoStatus title={infoMsg} /> Parameter Name Description title (optional) status text iconOnly (optional) if true, only displays icon noTooltip (optional) if true, tooltip won't be displayed className (optional) additional class name for the component popoverTitle (optional) title for popover ProgressStatus Component for displaying a progressing status popover. Example <ProgressStatus title={progressMsg} /> Parameter Name Description title (optional) status text iconOnly (optional) if true, only displays icon noTooltip (optional) if true, tooltip won't be displayed className (optional) additional class name for the component popoverTitle (optional) title for popover SuccessStatus Component for displaying a success status popover. Example <SuccessStatus title={successMsg} /> Parameter Name Description title (optional) status text iconOnly (optional) if true, only displays icon noTooltip (optional) if true, tooltip won't be displayed className (optional) additional class name for the component popoverTitle (optional) title for popover checkAccess Provides information about user access to a given resource. It returns an object with resource access information. Parameter Name Description resourceAttributes resource attributes for access review impersonate impersonation details useAccessReview Hook that provides information about user access to a given resource. It returns an array with isAllowed and loading values. Parameter Name Description resourceAttributes resource attributes for access review impersonate impersonation details useResolvedExtensions React hook for consuming Console extensions with resolved CodeRef properties. This hook accepts the same argument(s) as useExtensions hook and returns an adapted list of extension instances, resolving all code references within each extension's properties. Initially, the hook returns an empty array. After the resolution is complete, the React component is re-rendered with the hook returning an adapted list of extensions. When the list of matching extensions changes, the resolution is restarted. The hook will continue to return the result until the resolution completes. The hook's result elements are guaranteed to be referentially stable across re-renders. It returns a tuple containing a list of adapted extension instances with resolved code references, a boolean flag indicating whether the resolution is complete, and a list of errors detected during the resolution. Example const [navItemExtensions, navItemsResolved] = useResolvedExtensions<NavItem>(isNavItem); // process adapted extensions and render your component Parameter Name Description typeGuards A list of callbacks that each accept a dynamic plugin extension as an argument and return a boolean flag indicating whether or not the extension meets desired type constraints HorizontalNav A component that creates a Navigation bar for a page. 
Routing is handled as part of the component. console.tab/horizontalNav can be used to add additional content to any horizontal navigation. Example const HomePage: React.FC = (props) => { const page = { href: '/home', name: 'Home', component: () => <>Home</> } return <HorizontalNav match={props.match} pages={[page]} /> } Parameter Name Description resource The resource associated with this Navigation, an object of K8sResourceCommon type pages An array of page objects match match object provided by React Router VirtualizedTable A component for making virtualized tables. Example const MachineList: React.FC<MachineListProps> = (props) => { return ( <VirtualizedTable<MachineKind> {...props} aria-label='Machines' columns={getMachineColumns} Row={getMachineTableRow} /> ); } Parameter Name Description data data for table loaded flag indicating data is loaded loadError error object if issue loading data columns column setup Row row setup unfilteredData original data without filter NoDataEmptyMsg (optional) no data empty message component EmptyMsg (optional) empty message component scrollNode (optional) function to handle scroll label (optional) label for table ariaLabel (optional) aria label gridBreakPoint sizing of how to break up grid for responsiveness onSelect (optional) function for handling select of table rowData (optional) data specific to row TableData Component for displaying table data within a table row. Example const PodRow: React.FC<RowProps<K8sResourceCommon>> = ({ obj, activeColumnIDs }) => { return ( <> <TableData id={columns[0].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind="Pod" name={obj.metadata.name} namespace={obj.metadata.namespace} /> </TableData> <TableData id={columns[1].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind="Namespace" name={obj.metadata.namespace} /> </TableData> </> ); }; Parameter Name Description id unique ID for table activeColumnIDs active columns className (optional) option class name for styling useActiveColumns A hook that provides a list of user-selected active TableColumns. Example // See implementation for more details on TableColumn type const [activeColumns, userSettingsLoaded] = useActiveColumns({ columns, showNamespaceOverride: false, columnManagementID, }); return userSettingsAreLoaded ? <VirtualizedTable columns={activeColumns} {...otherProps} /> : null Parameter Name Description options Which are passed as a key-value map \{TableColumn[]} options.columns An array of all available TableColumns {boolean} [options.showNamespaceOverride] (optional) If true, a namespace column will be included, regardless of column management selections {string} [options.columnManagementID] (optional) A unique ID used to persist and retrieve column management selections to and from user settings. Usually a group/version/kind (GVK) string for a resource. A tuple containing the current user selected active columns (a subset of options.columns), and a boolean flag indicating whether user settings have been loaded. ListPageHeader Component for generating a page header. Example const exampleList: React.FC = () => { return ( <> <ListPageHeader title="Example List Page"/> </> ); }; Parameter Name Description title heading title helpText (optional) help section as react node badge (optional) badge icon as react node ListPageCreate Component for adding a create button for a specific resource kind that automatically generates a link to the create YAML for this resource. 
Example const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title="Example Pod List Page"/> <ListPageCreate groupVersionKind="Pod">Create Pod</ListPageCreate> </ListPageHeader> </> ); }; Parameter Name Description groupVersionKind the resource group/version/kind to represent ListPageCreateLink Component for creating a stylized link. Example const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title="Example Pod List Page"/> <ListPageCreateLink to={'/link/to/my/page'}>Create Item</ListPageCreateLink> </ListPageHeader> </> ); }; Parameter Name Description to string location where link should direct createAccessReview (optional) object with namespace and kind used to determine access children (optional) children for the component ListPageCreateButton Component for creating button. Example const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title="Example Pod List Page"/> <ListPageCreateButton createAccessReview={access}>Create Pod</ListPageCreateButton> </ListPageHeader> </> ); }; Parameter Name Description createAccessReview (optional) object with namespace and kind used to determine access pfButtonProps (optional) Patternfly Button props ListPageCreateDropdown Component for creating a dropdown wrapped with permissions check. Example const exampleList: React.FC<MyProps> = () => { const items = { SAVE: 'Save', DELETE: 'Delete', } return ( <> <ListPageHeader title="Example Pod List Page"/> <ListPageCreateDropdown createAccessReview={access} items={items}>Actions</ListPageCreateDropdown> </ListPageHeader> </> ); }; Parameter Name Description items key:ReactNode pairs of items to display in dropdown component onClick callback function for click on dropdown items createAccessReview (optional) object with namespace and kind used to determine access children (optional) children for the dropdown toggle ListPageFilter Component that generates filter for list page. Example // See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> ) Parameter Name Description data An array of data points loaded indicates that data has loaded onFilterChange callback function for when filter is updated rowFilters (optional) An array of RowFilter elements that define the available filter options nameFilterPlaceholder (optional) placeholder for name filter labelFilterPlaceholder (optional) placeholder for label filter hideLabelFilter (optional) only shows the name filter instead of both name and label filter hideNameLabelFilter (optional) hides both name and label filter columnLayout (optional) column layout object hideColumnManagement (optional) flag to hide the column management useListPageFilter A hook that manages filter state for the ListPageFilter component. It returns a tuple containing the data filtered by all static filters, the data filtered by all static and row filters, and a callback that updates rowFilters. 
Example // See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> ) Parameter Name Description data An array of data points rowFilters (optional) An array of RowFilter elements that define the available filter options staticFilters (optional) An array of FilterValue elements that are statically applied to the data ResourceLink Component that creates a link to a specific resource type with an icon badge. Example <ResourceLink kind="Pod" name="testPod" title={metadata.uid} /> Parameter Name Description kind (optional) the kind of resource i.e. Pod, Deployment, Namespace groupVersionKind (optional) object with group, version, and kind className (optional) class style for component displayName (optional) display name for component, overwrites the resource name if set inline (optional) flag to create icon badge and name inline with children linkTo (optional) flag to create a Link object - defaults to true name (optional) name of resource namesapce (optional) specific namespace for the kind resource to link to hideIcon (optional) flag to hide the icon badge title (optional) title for the link object (not displayed) dataTest (optional) identifier for testing onClick (optional) callback function for when component is clicked truncate (optional) flag to truncate the link if too long ResourceIcon Component that creates an icon badge for a specific resource type. Example <ResourceIcon kind="Pod"/> Parameter Name Description kind (optional) the kind of resource i.e. Pod, Deployment, Namespace groupVersionKind (optional) object with group, version, and kind className (optional) class style for component useK8sModel Hook that retrieves the k8s model for provided K8sGroupVersionKind from redux. It returns an array with the first item as k8s model and second item as inFlight status. Example const Component: React.FC = () => { const [model, inFlight] = useK8sModel({ group: 'app'; version: 'v1'; kind: 'Deployment' }); return ... } Parameter Name Description groupVersionKind group, version, kind of k8s resource K8sGroupVersionKind is preferred alternatively can pass reference for group, version, kind which is deprecated, i.e, group/version/kind (GVK) K8sResourceKindReference. useK8sModels Hook that retrieves all current k8s models from redux. It returns an array with the first item as the list of k8s model and second item as inFlight status. Example const Component: React.FC = () => { const [models, inFlight] = UseK8sModels(); return ... } useK8sWatchResource Hook that retrieves the k8s resource along with status for loaded and error. It returns an array with first item as resource(s), second item as loaded status and third item as error state if any. Example const Component: React.FC = () => { const watchRes = { ... } const [data, loaded, error] = useK8sWatchResource(watchRes) return ... } Parameter Name Description initResource options needed to watch for resource. useK8sWatchResources Hook that retrieves the k8s resources along with their respective status for loaded and error. 
It returns a map where keys are as provided in initResources and each value has three properties: data, loaded and error. Example const Component: React.FC = () => { const watchResources = { 'deployment': {...}, 'pod': {...} ... } const {deployment, pod} = useK8sWatchResources(watchResources) return ... } Parameter Name Description initResources Resources must be watched as key-value pairs, where each key is unique to a resource and each value is the options needed to watch for the respective resource. consoleFetch A custom wrapper around fetch that adds console-specific headers and allows for retries and timeouts. It also validates the response status code and throws an appropriate error or logs out the user if required. It returns a promise that resolves to the response. Parameter Name Description url The URL to fetch options The options to pass to fetch timeout The timeout in milliseconds consoleFetchJSON A custom wrapper around fetch that adds console-specific headers and allows for retries and timeouts. It also validates the response status code and throws an appropriate error or logs out the user if required. It returns the response as a JSON object. Uses consoleFetch internally. It returns a promise that resolves to the response as a JSON object. Parameter Name Description url The URL to fetch method The HTTP method to use. Defaults to GET options The options to pass to fetch timeout The timeout in milliseconds cluster The name of the cluster to make the request to. Defaults to the active cluster the user has selected consoleFetchText A custom wrapper around fetch that adds console-specific headers and allows for retries and timeouts. It also validates the response status code and throws an appropriate error or logs out the user if required. It returns the response as text. Uses consoleFetch internally. It returns a promise that resolves to the response as text. Parameter Name Description url The URL to fetch options The options to pass to fetch timeout The timeout in milliseconds cluster The name of the cluster to make the request to. Defaults to the active cluster the user has selected getConsoleRequestHeaders A function that creates impersonation and multicluster-related headers for API requests using current redux state. It returns an object containing the appropriate impersonation and cluster request headers, based on redux state. Parameter Name Description targetCluster Override the current active cluster with the provided targetCluster k8sGetResource It fetches a resource from the cluster, based on the provided options. If the name is provided, it returns one resource, otherwise it returns all the resources matching the model. It returns a promise that resolves to the response as a JSON object with a single resource if the name is provided, else it returns all the resources matching the model. In case of failure, the promise gets rejected with an HTTP error response. Parameter Name Description options Which are passed as key-value pairs in the map options.model k8s model options.name The name of the resource, if not provided then it will look for all the resources matching the model. options.ns The namespace to look into, should not be specified for cluster-scoped resources. options.path Appends as subpath if provided options.queryParams The query parameters to be included in the URL. options.requestInit The fetch init object to use. This can have request headers, method, redirect, etc. See Interface RequestInit for more. 
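As a brief illustration of the options just described, the following sketch fetches a single resource by name with k8sGetResource. The fetchDeployment name is hypothetical, and the model is assumed to come from useK8sModel or a similar source.

```tsx
import { k8sGetResource, K8sModel } from '@openshift-console/dynamic-plugin-sdk';

// Minimal sketch: fetch one namespaced resource by name. On failure the
// promise is rejected with the HTTP error response, as described above.
const fetchDeployment = async (model: K8sModel, name: string, ns: string) => {
  try {
    return await k8sGetResource({ model, name, ns });
  } catch (e) {
    console.error('Failed to fetch resource', e);
    return undefined;
  }
};
```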
k8sCreateResource It creates a resource in the cluster, based on the provided options. It returns a promise that resolves to the response of the resource created. In case of failure, the promise gets rejected with an HTTP error response. Parameter Name Description options Which are passed as key-value pairs in the map options.model k8s model options.data Payload for the resource to be created options.path Appends as subpath if provided options.queryParams The query parameters to be included in the URL. k8sUpdateResource It updates the entire resource in the cluster, based on the provided options. When a client needs to replace an existing resource entirely, they can use k8sUpdate. Alternatively, k8sPatch can be used to perform a partial update. It returns a promise that resolves to the response of the resource updated. In case of failure, the promise gets rejected with an HTTP error response. Parameter Name Description options Which are passed as key-value pair in the map options.model k8s model options.data Payload for the k8s resource to be updated options.ns Namespace to look into, it should not be specified for cluster-scoped resources. options.name Resource name to be updated. options.path Appends as subpath if provided options.queryParams The query parameters to be included in the URL. k8sPatchResource It patches any resource in the cluster, based on provided options. When a client needs to perform a partial update, they can use k8sPatch. Alternatively, k8sUpdate can be used to replace an existing resource entirely. See Data Tracker for more. It returns a promise that resolves to the response of the resource patched. In case of failure, the promise gets rejected with an HTTP error response. Parameter Name Description options Which are passed as key-value pairs in the map. options.model k8s model options.resource The resource to be patched. options.data Only the data to be patched on the existing resource with the operation, path, and value. options.path Appends as subpath if provided. options.queryParams The query parameters to be included in the URL. k8sDeleteResource It deletes resources from the cluster, based on the provided model and resource. Garbage collection works based on the Foreground or Background policy, which can be configured with the propagationPolicy property in the provided model or passed in json. It returns a promise that resolves to the response of kind Status. In case of failure, the promise gets rejected with an HTTP error response. Example kind: 'DeleteOptions', apiVersion: 'v1', propagationPolicy Parameter Name Description options Which are passed as key-value pair in the map. options.model k8s model options.resource The resource to be deleted. options.path Appends as subpath if provided options.queryParams The query parameters to be included in the URL. options.requestInit The fetch init object to use. This can have request headers, method, redirect, etc. See Interface RequestInit for more. options.json Can control garbage collection of resources explicitly if provided, else it will default to the model's "propagationPolicy". k8sListResource Lists the resources as an array in the cluster, based on provided options. It returns a promise that resolves to the response. Parameter Name Description options Which are passed as key-value pairs in the map options.model k8s model options.queryParams The query parameters to be included in the URL; label selectors can also be passed with the key "labelSelector". options.requestInit The fetch init object to use. This can have request headers, method, redirect, etc. See Interface RequestInit for more. k8sListResourceItems Same interface as k8sListResource but returns the sub items. 
It returns the apiVersion for the model, i.e., group/version . getAPIVersionForModel Provides apiVersion for a k8s model. Parameter Name Description model k8s model getGroupVersionKindForResource Provides a group, version, and kind for a resource. It returns the group, version, kind for the provided resource. If the resource does not have an API group, group "core" will be returned. If the resource has an invalid apiVersion, then it will throw an Error. Parameter Name Description resource k8s resource getGroupVersionKindForModel Provides a group, version, and kind for a k8s model. This returns the group, version, kind for the provided model. If the model does not have an apiGroup, group "core" will be returned. Parameter Name Description model k8s model StatusPopupSection Component that shows the status in a popup window. Helpful component for building console.dashboards/overview/health/resource extensions. Example <StatusPopupSection firstColumn={ <> <span>{title}</span> <span className="text-secondary"> My Example Item </span> </> } secondColumn='Status' > Parameter Name Description firstColumn values for first column of popup secondColumn (optional) values for second column of popup children (optional) children for the popup StatusPopupItem Status element used in status popup; used in StatusPopupSection . Example <StatusPopupSection firstColumn='Example' secondColumn='Status' > <StatusPopupItem icon={healthStateMapping[MCGMetrics.state]?.icon}> Complete </StatusPopupItem> <StatusPopupItem icon={healthStateMapping[RGWMetrics.state]?.icon}> Pending </StatusPopupItem> </StatusPopupSection> Parameter Name Description value (optional) text value to display icon (optional) icon to display children child elements Overview Creates a wrapper component for a dashboard. Example <Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview> Parameter Name Description className (optional) style class for div children (optional) elements of the dashboard OverviewGrid Creates a grid of card elements for a dashboard; used within Overview . Example <Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview> Parameter Name Description mainCards cards for grid leftCards (optional) cards for left side of grid rightCards (optional) cards for right side of grid InventoryItem Creates an inventory card item. Example return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> ) Parameter Name Description children elements to render inside the item InventoryItemTitle Creates a title for an inventory card item; used within InventoryItem . Example return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> ) Parameter Name Description children elements to render inside the title InventoryItemBody Creates the body of an inventory card; used within InventoryCard and can be used with InventoryTitle . 
Example return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> ) Parameter Name Description children elements to render inside the Inventory Card or title error elements of the div InventoryItemStatus Creates a count and icon for an inventory card with optional link address; used within InventoryItemBody Example return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> ) Parameter Name Description count count for display icon icon for display linkTo (optional) link address InventoryItemLoading Creates a skeleton container for when an inventory card is loading; used with InventoryItem and related components Example if (loadError) { title = <Link to={workerNodesLink}>{t('Worker Nodes')}</Link>; } else if (!loaded) { title = <><InventoryItemLoading /><Link to={workerNodesLink}>{t('Worker Nodes')}</Link></>; } return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> </InventoryItem> ) useFlag Hook that returns the given feature flag from FLAGS redux state. It returns the boolean value of the requested feature flag or undefined. Parameter Name Description flag The feature flag to return CodeEditor A basic lazy loaded Code editor with hover help and completion. Example <React.Suspense fallback={<LoadingBox />}> <CodeEditor value={code} language="yaml" /> </React.Suspense> Parameter Name Description value String representing the yaml code to render. language String representing the language of the editor. options Monaco editor options. For more details, please, visit Interface IStandAloneEditorConstructionOptions . minHeight Minimum editor height in valid CSS height values. showShortcuts Boolean to show shortcuts on top of the editor. toolbarLinks Array of ReactNode rendered on the toolbar links section on top of the editor. onChange Callback for on code change event. onSave Callback called when the command CTRL / CMD + S is triggered. ref React reference to { editor?: IStandaloneCodeEditor } . Using the editor property, you are able to access to all methods to control the editor. For more information, visit Interface IStandaloneCodeEditor . ResourceYAMLEditor A lazy loaded YAML editor for Kubernetes resources with hover help and completion. The component use the YAMLEditor and add on top of it more functionality likeresource update handling, alerts, save, cancel and reload buttons, accessibility and more. Unless onSave callback is provided, the resource update is automatically handled.It should be wrapped in a React.Suspense component. Example <React.Suspense fallback={<LoadingBox />}> <ResourceYAMLEditor initialResource={resource} header="Create resource" onSave={(content) => updateResource(content)} /> </React.Suspense> Parameter Name Description initialResource YAML/Object representing a resource to be shown by the editor. This prop is used only during the initial render header Add a header on top of the YAML editor onSave Callback for the Save button. Passing it will override the default update performed on the resource by the editor ResourceEventStream A component to show events related to a particular resource. 
Example const [resource, loaded, loadError] = useK8sWatchResource(clusterResource); return <ResourceEventStream resource={resource} /> Parameter Name Description resource An object whose related events should be shown. usePrometheusPoll Sets up a poll to Prometheus for a single query. It returns a tuple containing the query response, a boolean flag indicating whether the response has completed, and any errors encountered during the request or post-processing of the request. Parameter Name Description {PrometheusEndpoint} props.endpoint one of the PrometheusEndpoint (label, query, range, rules, targets) {string} [props.query] (optional) Prometheus query string. If empty or undefined, polling is not started. {number} [props.delay] (optional) polling delay interval (ms) {number} [props.endTime] (optional) for QUERY_RANGE endpoint, end of the query range {number} [props.samples] (optional) for QUERY_RANGE endpoint {number} [options.timespan] (optional) for QUERY_RANGE endpoint {string} [options.namespace] (optional) a search param to append {string} [options.timeout] (optional) a search param to append Timestamp A component to render a timestamp. The timestamps are synchronized between individual instances of the Timestamp component. The provided timestamp is formatted according to user locale. Parameter Name Description timestamp the timestamp to render. Format is expected to be ISO 8601 (used by Kubernetes), epoch timestamp, or an instance of a Date. simple render simple version of the component omitting icon and tooltip. omitSuffix formats the date omitting the suffix. className additional class name for the component. useModal A hook to launch modals. Example const AppPage: React.FC = () => { const [launchModal] = useModal(); const onClick = () => launchModal(ModalComponent); return ( <Button onClick={onClick}>Launch a Modal</Button> ) } ActionServiceProvider Component that allows a plugin to receive contributions from other plugins for the console.action/provider extension type. Example const context: ActionContext = { 'a-context-id': { dataFromDynamicPlugin } }; ... <ActionServiceProvider context={context}> {({ actions, options, loaded }) => loaded && ( <ActionMenu actions={actions} options={options} variant={ActionMenuVariant.DROPDOWN} /> ) } </ActionServiceProvider> Parameter Name Description context Object with contextId and optional plugin data NamespaceBar A component that renders a horizontal toolbar with a namespace dropdown menu in the leftmost position. Additional components can be passed in as children and will be rendered to the right of the namespace dropdown. This component is designed to be used at the top of the page. It should be used on pages where the user needs to be able to change the active namespace, such as on pages with k8s resources. Example const logNamespaceChange = (namespace) => console.log(`New namespace: ${namespace}`); ... <NamespaceBar onNamespaceChange={logNamespaceChange}> <NamespaceBarApplicationSelector /> </NamespaceBar> <Page> ... Parameter Name Description onNamespaceChange (optional) A function that is executed when a namespace option is selected. It accepts the new namespace in the form of a string as its only argument. The active namespace is updated automatically when an option is selected, but additional logic can be applied via this function. When the namespace is changed, the namespace parameter in the URL will be changed from the previous namespace to the newly selected namespace. 
isDisabled (optional) A boolean flag that disables the namespace dropdown if set to true. This option only applies to the namespace dropdown and has no effect on child components. children (optional) Additional elements to be rendered inside the toolbar to the right of the namespace dropdown. ErrorBoundaryFallbackPage Creates full page ErrorBoundaryFallbackPage component to display the "Oh no! Something went wrong." message along with the stack trace and other helpful debugging information. This is to be used inconjunction with an component. Example //in ErrorBoundary component return ( if (this.state.hasError) { return <ErrorBoundaryFallbackPage errorMessage={errorString} componentStack={componentStackString} stack={stackTraceString} title={errorString}/>; } return this.props.children; ) Parameter Name Description errorMessage text description of the error message componentStack component trace of the exception stack stack trace of the exception title title to render as the header of the error boundary page QueryBrowser A component that renders a graph of the results from a Prometheus PromQL query along with controls for interacting with the graph. Example <QueryBrowser defaultTimespan={15 * 60 * 1000} namespace={namespace} pollInterval={30 * 1000} queries={[ 'process_resident_memory_bytes{job="console"}', 'sum(irate(container_network_receive_bytes_total[6h:5m])) by (pod)', ]} /> Parameter Name Description customDataSource (optional) Base URL of an API endpoint that handles PromQL queries. If provided, this is used instead of the default API for fetching data. defaultSamples (optional) The default number of data samples plotted for each data series. If there are many data series, QueryBrowser might automatically pick a lower number of data samples than specified here. defaultTimespan (optional) The default timespan for the graph in milliseconds - defaults to 1,800,000 (30 minutes). disabledSeries (optional) Disable (don't display) data series with these exact label / value pairs. disableZoom (optional) Flag to disable the graph zoom controls. filterLabels (optional) Optionally filter the returned data series to only those that match these label / value pairs. fixedEndTime (optional) Set the end time for the displayed time range rather than showing data up to the current time. formatSeriesTitle (optional) Function that returns a string to use as the title for a single data series. GraphLink (optional) Component for rendering a link to another page (for example getting more information about this query). hideControls (optional) Flag to hide the graph controls for changing the graph timespan, and so on. isStack (optional) Flag to display a stacked graph instead of a line graph. If showStackedControl is set, it will still be possible for the user to switch to a line graph. namespace (optional) If provided, data is only returned for this namespace (only series that have this namespace label). onZoom (optional) Callback called when the graph is zoomed. pollInterval (optional) If set, determines how often the graph is updated to show the latest data (in milliseconds). queries Array of PromQL queries to run and display the results in the graph. showLegend (optional) Flag to enable displaying a legend below the graph. showStackedControl Flag to enable displaying a graph control for switching between stacked graph mode and line graph mode. timespan (optional) The timespan that should be covered by the graph in milliseconds. units (optional) Units to display on the Y-axis and in the tooltip. 
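Where QueryBrowser renders query results as a graph, the usePrometheusPoll hook described earlier in this section returns the raw query response for custom rendering. The following minimal sketch reuses one of the example queries above; the component name and rendering are illustrative, and PrometheusEndpoint.QUERY is assumed to map to the query endpoint listed in the parameter table.

```tsx
import * as React from 'react';
import {
  PrometheusEndpoint,
  usePrometheusPoll,
} from '@openshift-console/dynamic-plugin-sdk';

// Sketch: poll a single PromQL query and render the raw result. The tuple
// shape ([response, loaded, error]) follows the hook description above.
const ConsoleMemory: React.FC = () => {
  const [response, loaded, error] = usePrometheusPoll({
    endpoint: PrometheusEndpoint.QUERY,
    query: 'process_resident_memory_bytes{job="console"}',
  });
  if (error) {
    return <p>Query failed</p>;
  }
  if (!loaded) {
    return <p>Loading...</p>;
  }
  return <pre>{JSON.stringify(response?.data?.result, null, 2)}</pre>;
};
```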
useAnnotationsModal A hook that provides a callback to launch a modal for editing Kubernetes resource annotations. Example const PodAnnotationsButton = ({ pod }) => { const { t } = useTranslation(); const launchAnnotationsModal = useAnnotationsModal<PodKind>(pod); return <button onClick={launchAnnotationsModal}>{t('Edit Pod Annotations')}</button> } Parameter Name Description resource The resource to edit annotations for, an object of K8sResourceCommon type. Returns A function which will launch a modal for editing a resource's annotations. useDeleteModal A hook that provides a callback to launch a modal for deleting a resource. Example const DeletePodButton = ({ pod }) => { const { t } = useTranslation(); const launchDeleteModal = useDeleteModal<PodKind>(pod); return <button onClick={launchDeleteModal}>{t('Delete Pod')}</button> } Parameter Name Description resource The resource to delete. redirectTo (optional) A location to redirect to after deleting the resource. message (optional) A message to display in the modal. btnText (optional) The text to display on the delete button. deleteAllResources (optional) A function to delete all resources of the same kind. Returns A function which will launch a modal for deleting a resource. useLabelsModal A hook that provides a callback to launch a modal for editing Kubernetes resource labels. Example const PodLabelsButton = ({ pod }) => { const { t } = useTranslation(); const launchLabelsModal = useLabelsModal<PodKind>(pod); return <button onClick={launchLabelsModal}>{t('Edit Pod Labels')}</button> } Parameter Name Description resource The resource to edit labels for, an object of K8sResourceCommon type. Returns A function which will launch a modal for editing a resource's labels. useActiveNamespace Hook that provides the currently active namespace and a callback for setting the active namespace. Example const Component: React.FC = (props) => { const [activeNamespace, setActiveNamespace] = useActiveNamespace(); return <select value={activeNamespace} onChange={(e) => setActiveNamespace(e.target.value)} > { // ...namespace options } </select> } Returns A tuple containing the current active namespace and setter callback. PerspectiveContext Deprecated: Use the provided usePerspectiveContext instead. Creates the perspective context. Parameter Name Description PerspectiveContextType object with active perspective and setter useAccessReviewAllowed Deprecated: Use useAccessReview from @console/dynamic-plugin-sdk instead. Hook that provides allowed status about user access to a given resource. It returns the isAllowed boolean value. Parameter Name Description resourceAttributes resource attributes for access review impersonate impersonation details useSafetyFirst Deprecated: This hook is not related to console functionality. Hook that ensures a safe asynchronous setting of React state in case a given component could be unmounted. It returns an array with a pair of state value and its set function. Parameter Name Description initialState initial state value YAMLEditor Deprecated: A basic lazy loaded YAML editor with hover help and completion. Example <React.Suspense fallback={<LoadingBox />}> <YAMLEditor value={code} /> </React.Suspense> Parameter Name Description value String representing the yaml code to render. options Monaco editor options. minHeight Minimum editor height in valid CSS height values. showShortcuts Boolean to show shortcuts on top of the editor. toolbarLinks Array of ReactNode rendered on the toolbar links section on top of the editor. 
onChange Callback for the code change event. onSave Callback called when the command CTRL / CMD + S is triggered. ref React reference to { editor?: IStandaloneCodeEditor } . Using the editor property, you can access all methods to control the editor. 7.5.3. Troubleshooting your dynamic plugin Refer to this list of troubleshooting tips if you run into issues loading your plugin. Verify that you have enabled your plugin in the console Operator configuration and that your plugin name is in the output of the following command: $ oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}' Verify the enabled plugins on the status card of the Overview page in the Administrator perspective. You must refresh your browser if the plugin was recently enabled. Verify your plugin service is healthy by: Verifying your plugin pod status is running and your containers are ready. Verifying the service label selector matches the pod and the target port is correct. Curl the plugin-manifest.json from the service in a terminal on the console pod or another pod on the cluster. Verify your ConsolePlugin resource name ( consolePlugin.name ) matches the plugin name used in package.json . Verify your service name, namespace, port, and path are declared correctly in the ConsolePlugin resource. Verify your plugin service uses HTTPS and service serving certificates. Verify any certificates or connection errors in the console pod logs. Verify the feature flag your plugin relies on is not disabled. Verify your plugin does not have any consolePlugin.dependencies in package.json that are not met. This can include console version dependencies or dependencies on other plugins. Filter the JS console in your browser for your plugin's name to see messages that are logged. Verify there are no typos in the nav extension perspective or section IDs. Your plugin might be loaded, but nav items will be missing if the IDs are incorrect. Try navigating to a plugin page directly by editing the URL. Verify there are no network policies that are blocking traffic from the console pod to your plugin service. If necessary, adjust network policies to allow console pods in the openshift-console namespace to make requests to your service. Verify the list of dynamic plugins to be loaded in the Console tab of your browser's developer tools. Evaluate window.SERVER_FLAGS.consolePlugins to see the dynamic plugin on the Console frontend, as shown in the snippet below. Additional resources Understanding service serving certificates
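For the last verification step above, a quick way to inspect which dynamic plugins the frontend attempted to load is to evaluate the flag from the browser developer tools console. This is a plain browser-console snippet, not plugin code; the optional chaining and cast are only there to keep it valid as TypeScript.

```ts
// Run in the browser developer tools console on the OpenShift web console page.
// Your plugin name should appear in this list once it is enabled and served correctly.
console.log((window as any).SERVER_FLAGS?.consolePlugins);
```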
[ "conster Header: React.FC = () => { const { t } = useTranslation('plugin__console-demo-plugin'); return <h1>{t('Hello, World!')}</h1>; };", "yarn install", "yarn run start", "oc login", "yarn run start-console", "docker build -t quay.io/my-repositroy/my-plugin:latest .", "docker run -it --rm -d -p 9001:80 quay.io/my-repository/my-plugin:latest", "docker push quay.io/my-repository/my-plugin:latest", "helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location", "plugin: name: \"\" description: \"\" image: \"\" imagePullPolicy: IfNotPresent replicas: 2 port: 9443 securityContext: enabled: true podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi basePath: / certificateSecretName: \"\" serviceAccount: create: true annotations: {} name: \"\" patcherServiceAccount: create: true annotations: {} name: \"\" jobs: patchConsoles: enabled: true image: \"registry.redhat.io/openshift4/ose-tools-rhel8@sha256:e44074f21e0cca6464e50cb6ff934747e0bd11162ea01d522433a1a1ae116103\" podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi", "\"consolePlugin\": { \"name\": \"my-plugin\", 1 \"version\": \"0.0.1\", 2 \"displayName\": \"My Plugin\", 3 \"description\": \"Enjoy this shiny, new console plugin!\", 4 \"exposedModules\": { \"ExamplePage\": \"./components/ExamplePage\" }, \"dependencies\": { \"@console/pluginAPI\": \"/*\" } }", "{ \"type\": \"console.tab/horizontalNav\", \"properties\": { \"page\": { \"name\": \"Example Tab\", \"href\": \"example\" }, \"model\": { \"group\": \"core\", \"version\": \"v1\", \"kind\": \"Pod\" }, \"component\": { \"USDcodeRef\": \"ExampleTab\" } } }", "\"exposedModules\": { \"ExamplePage\": \"./components/ExamplePage\", \"ExampleTab\": \"./components/ExampleTab\" }", "import * as React from 'react'; export default function ExampleTab() { return ( <p>This is a custom tab added to a resource using a dynamic plugin.</p> ); }", "helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location", "const Component: React.FC = (props) => { const [activePerspective, setActivePerspective] = useActivePerspective(); return <select value={activePerspective} onChange={(e) => setActivePerspective(e.target.value)} > { // ...perspective options } </select> }", "<GreenCheckCircleIcon title=\"Healthy\" />", "<RedExclamationCircleIcon title=\"Failed\" />", "<YellowExclamationTriangleIcon title=\"Warning\" />", "<BlueInfoCircleIcon title=\"Info\" />", "<ErrorStatus title={errorMsg} />", "<InfoStatus title={infoMsg} />", "<ProgressStatus title={progressMsg} />", "<SuccessStatus title={successMsg} />", "const [navItemExtensions, navItemsResolved] = useResolvedExtensions<NavItem>(isNavItem); // process adapted extensions and render your component", "const HomePage: React.FC = (props) => { const page = { href: '/home', name: 'Home', component: () => <>Home</> } return <HorizontalNav match={props.match} pages={[page]} /> }", "const MachineList: React.FC<MachineListProps> = (props) => { return ( <VirtualizedTable<MachineKind> {...props} aria-label='Machines' columns={getMachineColumns} 
Row={getMachineTableRow} /> ); }", "const PodRow: React.FC<RowProps<K8sResourceCommon>> = ({ obj, activeColumnIDs }) => { return ( <> <TableData id={columns[0].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind=\"Pod\" name={obj.metadata.name} namespace={obj.metadata.namespace} /> </TableData> <TableData id={columns[1].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind=\"Namespace\" name={obj.metadata.namespace} /> </TableData> </> ); };", "// See implementation for more details on TableColumn type const [activeColumns, userSettingsLoaded] = useActiveColumns({ columns, showNamespaceOverride: false, columnManagementID, }); return userSettingsAreLoaded ? <VirtualizedTable columns={activeColumns} {...otherProps} /> : null", "const exampleList: React.FC = () => { return ( <> <ListPageHeader title=\"Example List Page\"/> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreate groupVersionKind=\"Pod\">Create Pod</ListPageCreate> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateLink to={'/link/to/my/page'}>Create Item</ListPageCreateLink> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateButton createAccessReview={access}>Create Pod</ListPageCreateButton> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { const items = { SAVE: 'Save', DELETE: 'Delete', } return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateDropdown createAccessReview={access} items={items}>Actions</ListPageCreateDropdown> </ListPageHeader> </> ); };", "// See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> )", "// See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. 
return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> )", "<ResourceLink kind=\"Pod\" name=\"testPod\" title={metadata.uid} />", "<ResourceIcon kind=\"Pod\"/>", "const Component: React.FC = () => { const [model, inFlight] = useK8sModel({ group: 'app'; version: 'v1'; kind: 'Deployment' }); return }", "const Component: React.FC = () => { const [models, inFlight] = UseK8sModels(); return }", "const Component: React.FC = () => { const watchRes = { } const [data, loaded, error] = useK8sWatchResource(watchRes) return }", "const Component: React.FC = () => { const watchResources = { 'deployment': {...}, 'pod': {...} } const {deployment, pod} = useK8sWatchResources(watchResources) return }", "<StatusPopupSection firstColumn={ <> <span>{title}</span> <span className=\"text-secondary\"> My Example Item </span> </> } secondColumn='Status' >", "<StatusPopupSection firstColumn='Example' secondColumn='Status' > <StatusPopupItem icon={healthStateMapping[MCGMetrics.state]?.icon}> Complete </StatusPopupItem> <StatusPopupItem icon={healthStateMapping[RGWMetrics.state]?.icon}> Pending </StatusPopupItem> </StatusPopupSection>", "<Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview>", "<Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview>", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "if (loadError) { title = <Link to={workerNodesLink}>{t('Worker Nodes')}</Link>; } else if (!loaded) { title = <><InventoryItemLoading /><Link to={workerNodesLink}>{t('Worker Nodes')}</Link></>; } return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> </InventoryItem> )", "<React.Suspense fallback={<LoadingBox />}> <CodeEditor value={code} language=\"yaml\" /> </React.Suspense>", "<React.Suspense fallback={<LoadingBox />}> <ResourceYAMLEditor initialResource={resource} header=\"Create resource\" onSave={(content) => updateResource(content)} /> </React.Suspense>", "const [resource, loaded, loadError] = useK8sWatchResource(clusterResource); return <ResourceEventStream resource={resource} />", "const context: AppPage: React.FC = () => {<br/> const [launchModal] = useModal();<br/> const onClick = () => launchModal(ModalComponent);<br/> return (<br/> <Button onClick={onClick}>Launch a Modal</Button><br/> )<br/>}<br/>`", "const context: ActionContext = { 'a-context-id': { dataFromDynamicPlugin } }; <ActionServiceProvider context={context}> {({ actions, options, loaded }) => loaded && ( 
<ActionMenu actions={actions} options={options} variant={ActionMenuVariant.DROPDOWN} /> ) } </ActionServiceProvider>", "const logNamespaceChange = (namespace) => console.log(`New namespace: USD{namespace}`); <NamespaceBar onNamespaceChange={logNamespaceChange}> <NamespaceBarApplicationSelector /> </NamespaceBar> <Page>", "//in ErrorBoundary component return ( if (this.state.hasError) { return <ErrorBoundaryFallbackPage errorMessage={errorString} componentStack={componentStackString} stack={stackTraceString} title={errorString}/>; } return this.props.children; )", "<QueryBrowser defaultTimespan={15 * 60 * 1000} namespace={namespace} pollInterval={30 * 1000} queries={[ 'process_resident_memory_bytes{job=\"console\"}', 'sum(irate(container_network_receive_bytes_total[6h:5m])) by (pod)', ]} />", "const PodAnnotationsButton = ({ pod }) => { const { t } = useTranslation(); const launchAnnotationsModal = useAnnotationsModal<PodKind>(pod); return <button onClick={launchAnnotationsModal}>{t('Edit Pod Annotations')}</button> }", "const DeletePodButton = ({ pod }) => { const { t } = useTranslation(); const launchDeleteModal = useDeleteModal<PodKind>(pod); return <button onClick={launchDeleteModal}>{t('Delete Pod')}</button> }", "const PodLabelsButton = ({ pod }) => { const { t } = useTranslation(); const launchLabelsModal = useLabelsModal<PodKind>(pod); return <button onClick={launchLabelsModal}>{t('Edit Pod Labels')}</button> }", "const Component: React.FC = (props) => { const [activeNamespace, setActiveNamespace] = useActiveNamespace(); return <select value={activeNamespace} onChange={(e) => setActiveNamespace(e.target.value)} > { // ...namespace options } </select> }", "<React.Suspense fallback={<LoadingBox />}> <YAMLEditor value={code} /> </React.Suspense>", "oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/web_console/dynamic-plugins
Chapter 19. Deploying custom code to Data Grid
Chapter 19. Deploying custom code to Data Grid Add custom code, such as scripts and event listeners, to your Data Grid clusters. Before you can deploy custom code to Data Grid clusters, you need to make it available. To do this you can copy artifacts from a persistent volume (PV), download artifacts from an HTTP or FTP server, or use both methods. 19.1. Copying code artifacts to Data Grid clusters Adding your artifacts to a persistent volume (PV) and then copy them to Data Grid pods. This procedure explains how to use a temporary pod that mounts a persistent volume claim (PVC) that: Lets you add code artifacts to the PV (perform a write operation). Allows Data Grid pods to load code artifacts from the PV (perform a read operation). To perform these read and write operations, you need certain PV access modes. However, support for different PVC access modes is platform dependent. It is beyond the scope of this document to provide instructions for creating PVCs with different platforms. For simplicity, the following procedure shows a PVC with the ReadWriteMany access mode. In some cases only the ReadOnlyMany or ReadWriteOnce access modes are available. You can use a combination of those access modes by reclaiming and reusing PVCs with the same spec.volumeName . Note Using ReadWriteOnce access mode results in all Data Grid pods in a cluster being scheduled on the same OpenShift node. Procedure Change to the namespace for your Data Grid cluster. Create a PVC for your custom code artifacts, for example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: datagrid-libs spec: accessModes: - ReadWriteMany resources: requests: storage: 100Mi Apply your PVC. Create a pod that mounts the PVC, for example: apiVersion: v1 kind: Pod metadata: name: datagrid-libs-pod spec: securityContext: fsGroup: 2000 volumes: - name: lib-pv-storage persistentVolumeClaim: claimName: datagrid-libs containers: - name: lib-pv-container image: registry.redhat.io/datagrid/datagrid-8-rhel8:8.4 volumeMounts: - mountPath: /tmp/libs name: lib-pv-storage Add the pod to the Data Grid namespace and wait for it to be ready. Copy your code artifacts to the pod so that they are loaded into the PVC. For example to copy code artifacts from a local libs directory, do the following: Delete the pod. Specify the persistent volume with spec.dependencies.volumeClaimName in your Infinispan CR and then apply the changes. apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 dependencies: volumeClaimName: datagrid-libs service: type: DataGrid Note If you update your custom code on the persistent volume, you must restart the Data Grid cluster so it can load the changes. Additional resources Configuring persistent storage Persistent Volumes Access Modes How to manually reclaim and reuse OpenShift Persistent volumes that are "Released" (Red Hat Knowledgebase) 19.2. Downloading code artifacts Add your artifacts to an HTTP or FTP server so that Data Grid Operator downloads them to the {lib_path} directory on each Data Grid node. When downloading files, Data Grid Operator can automatically detect the file type. Data Grid Operator also extracts archived files, such as zip or tgz , to the filesystem after the download completes. You can also download Maven artifacts using the groupId:artifactId:version format, for example org.postgresql:postgresql:42.3.1 . Note Each time Data Grid Operator creates a Data Grid node it downloads the artifacts to the node. 
Prerequisites Host your code artifacts on an HTTP or FTP server or publish them to a Maven repository. Procedure Add the spec.dependencies.artifacts field to your Infinispan CR. Do one of the following: Specify the location of the file to download via HTTP or FTP as the value of the spec.dependencies.artifacts.url field. Provide the Maven artifact to download with the groupId:artifactId:version format as the value of the spec.dependencies.artifacts.maven field. Optionally specify a checksum to verify the integrity of the download with the spec.dependencies.artifacts.hash field. The hash field requires a value in the format <algorithm>:<checksum> , where <algorithm> is sha1|sha224|sha256|sha384|sha512|md5 . apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 dependencies: artifacts: - url: http://example.com:8080/path hash: sha256:596408848b56b5a23096baa110cd8b633c9a9aef2edd6b38943ade5b4edcd686 service: type: DataGrid Apply the changes.
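For the Maven option described above, a minimal sketch of the same CR applied from the command line follows; it reuses the example coordinates org.postgresql:postgresql:42.3.1 given earlier in this chapter, and in practice you would substitute your own groupId:artifactId:version and, optionally, a hash.

# Apply an Infinispan CR that pulls a Maven artifact instead of a URL
oc apply -f - <<EOF
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  replicas: 2
  dependencies:
    artifacts:
      - maven: org.postgresql:postgresql:42.3.1
  service:
    type: DataGrid
EOF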
[ "project rhdg-namespace", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: datagrid-libs spec: accessModes: - ReadWriteMany resources: requests: storage: 100Mi", "apply -f datagrid-libs.yaml", "apiVersion: v1 kind: Pod metadata: name: datagrid-libs-pod spec: securityContext: fsGroup: 2000 volumes: - name: lib-pv-storage persistentVolumeClaim: claimName: datagrid-libs containers: - name: lib-pv-container image: registry.redhat.io/datagrid/datagrid-8-rhel8:8.4 volumeMounts: - mountPath: /tmp/libs name: lib-pv-storage", "apply -f datagrid-libs-pod.yaml wait --for=condition=ready --timeout=2m pod/datagrid-libs-pod", "cp --no-preserve=true libs datagrid-libs-pod:/tmp/", "delete pod datagrid-libs-pod", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 dependencies: volumeClaimName: datagrid-libs service: type: DataGrid", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 dependencies: artifacts: - url: http://example.com:8080/path hash: sha256:596408848b56b5a23096baa110cd8b633c9a9aef2edd6b38943ade5b4edcd686 service: type: DataGrid" ]
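Before deleting the temporary pod in the copy procedure above, it can be worth confirming that the artifacts actually reached the persistent volume; the following sketch assumes the datagrid-libs-pod name and the /tmp/libs mount path used in the example.

# List the copied artifacts through the temporary pod
oc exec datagrid-libs-pod -- ls -lR /tmp/libs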
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/deploying-code
Chapter 4. Diskless Environments
Chapter 4. Diskless Environments Some networks require multiple systems with the same configuration. They also require that these systems be easy to reboot, upgrade, and manage. One solution is to use a diskless environment in which most of the operating system, which can be read-only, is shared from a central server between the clients. The individual clients have their own directories on the central server for the rest of the operating system, which must be read/write. Each time the client boots, it mounts most of the OS from the NFS server as read-only and another directory as read-write. Each client has its own read-write directory so that one client can not affect the others. The following steps are necessary to configure Red Hat Enterprise Linux to run on a diskless client: Install Red Hat Enterprise Linux on a system so that the files can be copied to the NFS server. (Refer to the Installation Guide for details.) Any software to be used on the clients must be installed on this system and the busybox-anaconda package must be installed. Create a directory on the NFS server to contain the diskless environment such as /diskless/i386/RHEL4-AS/ . For example: This directory is referred to as the diskless directory . Create a subdirectory of this directory named root/ : Copy Red Hat Enterprise Linux from the client system to the server using rsync . For example: The length of this operation depends on the network connection speed as well as the size of the file system on the installed system. Depending on these factors, this operation may take a while. Start the tftp server Configure the DHCP server Finish creating the diskless environment as discussed in Section 4.2, "Finish Configuring the Diskless Environment" . Configure the diskless clients as discussed in Section 4.3, "Adding Hosts" . Configure each diskless client to boot via PXE and boot them. 4.1. Configuring the NFS Server The shared read-only part of the operating system is shared via NFS. Configure NFS to export the root/ and snapshot/ directories by adding them to /etc/exports . For example: Replace * with one of the hostname formats discussed in Section 21.3.2, "Hostname Formats" . Make the hostname declaration as specific as possible, so unwanted systems can not access the NFS mount. If the NFS service is not running, start it: If the NFS service is already running, reload the configuration file:
[ "mkdir -p /diskless/i386/RHEL4-AS/", "mkdir -p /diskless/i386/RHEL4-AS/root/", "rsync -a -e ssh installed-system.example.com:/ /diskless/i386/RHEL4-AS/root/", "/diskless/i386/RHEL4-AS/root/ *(ro,sync,no_root_squash) /diskless/i386/RHEL4-AS/snapshot/ *(rw,sync,no_root_squash)", "service nfs start", "service nfs reload" ]
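After starting or reloading the NFS service as shown above, a quick check that the diskless directories are really being exported (a sketch that assumes the standard showmount utility from the NFS packages is installed) is:

# Print the export list as clients will see it
showmount -e localhost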
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/diskless_environments
F.10. Window Menu
F.10. Window Menu The Window menu shown below contains no Teiid Designer specific actions. See Eclipse Workbench documentation for details. Figure F.12. Window Menu The Preferences... action launches the Preferences dialog, which can be used to set preferences and default values for many features of Teiid Designer. Note These menu items may vary depending on your set of installed Eclipse features and plugins. To customize a perspective to include one or more Teiid Designer views, click the Show View > Other... action and expand the Teiid Designer category to show the available views. Figure F.13. Show View Dialog
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/window_menu
Chapter 92. EJB Component
Chapter 92. EJB Component Available as of Camel version 2.4 The ejb: component binds EJBs to Camel message exchanges. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ejb</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 92.1. URI format ejb:ejbName[?options] Where ejbName can be any string which is used to look up the EJB in the Application Server JNDI Registry 92.2. Options The EJB component supports 4 options, which are listed below. Name Description Default Type context (producer) The Context to use for looking up the EJBs Context properties (producer) Properties for creating javax.naming.Context if a context has not been configured. Properties cache (advanced) If enabled, Camel will cache the result of the first Registry look-up. Cache can be enabled if the bean in the Registry is defined as a singleton scope. Boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The EJB endpoint is configured using URI syntax: with the following path and query parameters: 92.2.1. Path Parameters (1 parameters): Name Description Default Type beanName Required Sets the name of the bean to invoke String 92.2.2. Query Parameters (5 parameters): Name Description Default Type method (producer) Sets the name of the method to invoke on the bean String cache (advanced) If enabled, Camel will cache the result of the first Registry look-up. Cache can be enabled if the bean in the Registry is defined as a singleton scope. Boolean multiParameterArray (advanced) Deprecated How to treat the parameters which are passed from the message body.true means the message body should be an array of parameters.. Deprecation note: This option is used internally by Camel, and is not intended for end users to use. Deprecation note: This option is used internally by Camel, and is not intended for end users to use. false boolean parameters (advanced) Used for configuring additional properties on the bean Map synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 92.3. Bean Binding How bean methods to be invoked are chosen (if they are not specified explicitly through the method parameter) and how parameter values are constructed from the Message are all defined by the Bean Binding mechanism which is used throughout all of the various Bean Integration mechanisms in Camel. 92.4. Examples In the following examples we use the Greater EJB which is defined as follows: GreaterLocal.java public interface GreaterLocal { String hello(String name); String bye(String name); } And the implementation GreaterImpl.java @Stateless public class GreaterImpl implements GreaterLocal { public String hello(String name) { return "Hello " + name; } public String bye(String name) { return "Bye " + name; } } 92.4.1. Using Java DSL In this example we want to invoke the hello method on the EJB. Since this example is based on an unit test using Apache OpenEJB we have to set a JndiContext on the EJB component with the OpenEJB settings. 
@Override protected CamelContext createCamelContext() throws Exception { CamelContext answer = new DefaultCamelContext(); // enlist EJB component using the JndiContext EjbComponent ejb = answer.getComponent("ejb", EjbComponent.class); ejb.setContext(createEjbContext()); return answer; } private static Context createEjbContext() throws NamingException { // here we need to define our context factory to use OpenEJB for our testing Properties properties = new Properties(); properties.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.openejb.client.LocalInitialContextFactory"); return new InitialContext(properties); } Then we are ready to use the EJB in the Camel route: from("direct:start") // invoke the greeter EJB using the local interface and invoke the hello method .to("ejb:GreaterImplLocal?method=hello") .to("mock:result"); In a real application server In a real application server you most likely do not have to set up a JndiContext on the EJB component as it will create a default JndiContext on the same JVM as the application server, which usually allows it to access the JNDI registry and look up the EJBs. However, if you need to access an application server on a remote JVM or the like, you have to prepare the properties beforehand. 92.4.2. Using Spring XML And this is the same example using Spring XML instead: Again, since this is based on a unit test, we need to set up the EJB component: <!-- setup Camel EJB component --> <bean id="ejb" class="org.apache.camel.component.ejb.EjbComponent"> <property name="properties" ref="jndiProperties"/> </bean> <!-- use OpenEJB context factory --> <p:properties id="jndiProperties"> <prop key="java.naming.factory.initial">org.apache.openejb.client.LocalInitialContextFactory</prop> </p:properties> Now we are ready to use the EJB in the Camel routes: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="ejb:GreaterImplLocal?method=hello"/> <to uri="mock:result"/> </route> </camelContext> 92.5. See Also Configuring Camel Component Endpoint Getting Started Bean Bean Binding Bean Integration
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ejb</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "ejb:ejbName[?options]", "ejb:beanName", "public interface GreaterLocal { String hello(String name); String bye(String name); }", "@Stateless public class GreaterImpl implements GreaterLocal { public String hello(String name) { return \"Hello \" + name; } public String bye(String name) { return \"Bye \" + name; } }", "@Override protected CamelContext createCamelContext() throws Exception { CamelContext answer = new DefaultCamelContext(); // enlist EJB component using the JndiContext EjbComponent ejb = answer.getComponent(\"ejb\", EjbComponent.class); ejb.setContext(createEjbContext()); return answer; } private static Context createEjbContext() throws NamingException { // here we need to define our context factory to use OpenEJB for our testing Properties properties = new Properties(); properties.setProperty(Context.INITIAL_CONTEXT_FACTORY, \"org.apache.openejb.client.LocalInitialContextFactory\"); return new InitialContext(properties); }", "from(\"direct:start\") // invoke the greeter EJB using the local interface and invoke the hello method .to(\"ejb:GreaterImplLocal?method=hello\") .to(\"mock:result\");", "<!-- setup Camel EJB component --> <bean id=\"ejb\" class=\"org.apache.camel.component.ejb.EjbComponent\"> <property name=\"properties\" ref=\"jndiProperties\"/> </bean> <!-- use OpenEJB context factory --> <p:properties id=\"jndiProperties\"> <prop key=\"java.naming.factory.initial\">org.apache.openejb.client.LocalInitialContextFactory</prop> </p:properties>", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"ejb:GreaterImplLocal?method=hello\"/> <to uri=\"mock:result\"/> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ejb-component
Chapter 7. Known issues
Chapter 7. Known issues This section describes the known issues in Red Hat OpenShift Data Foundation 4.14. 7.1. Disaster recovery Failover action reports RADOS block device image mount failed on the pod with RPC error still in use Failing over a disaster recovery (DR) protected workload might result in pods using the volume on the failover cluster to be stuck in reporting RADOS block device (RBD) image is still in use. This prevents the pods from starting up for a long duration (upto several hours). ( BZ#2007376 ) Creating an application namespace for the managed clusters Application namespace needs to exist on RHACM managed clusters for disaster recovery (DR) related pre-deployment actions and hence is pre-created when an application is deployed at the RHACM hub cluster. However, if an application is deleted at the hub cluster and its corresponding namespace is deleted on the managed clusters, they reappear on the managed cluster. Workaround: openshift-dr maintains a namespace manifestwork resource in the managed cluster namespace at the RHACM hub. These resources need to be deleted after the application deletion. For example, as a cluster administrator, execute the following command on the hub cluster: oc delete manifestwork -n <managedCluster namespace> <drPlacementControl name>-<namespace>-ns-mw . ( BZ#2059669 ) ceph df reports an invalid MAX AVAIL value when the cluster is in stretch mode When a crush rule for a Red Hat Ceph Storage cluster has multiple "take" steps, the ceph df report shows the wrong maximum available size for the map. The issue will be fixed in an upcoming release. ( BZ#2100920 ) Both the DRPCs protect all the persistent volume claims created on the same namespace The namespaces that host multiple disaster recovery (DR) protected workloads, protect all the persistent volume claims (PVCs) within the namespace for each DRPlacementControl resource in the same namespace on the hub cluster that does not specify and isolate PVCs based on the workload using its spec.pvcSelector field. This results in PVCs, that match the DRPlacementControl spec.pvcSelector across multiple workloads. Or, if the selector is missing across all workloads, replication management to potentially manage each PVC multiple times and cause data corruption or invalid operations based on individual DRPlacementControl actions. Workaround: Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. Result: PVCs are no longer managed by multiple DRPlacementControl resources and do not cause any operation and data inconsistencies. ( BZ#2111163 ) MongoDB pod is in CrashLoopBackoff because of permission errors reading data in cephrbd volume The OpenShift projects across different managed clusters have different security context constraints (SCC), which specifically differ in the specified UID range and/or FSGroups . This leads to certain workload pods and containers failing to start post failover or relocate operations within these projects, due to filesystem access errors in their logs. 
Workaround: Ensure workload projects are created on all managed clusters with the same project-level SCC labels, allowing them to use the same filesystem context when failed over or relocated. Pods will no longer fail post-DR actions on filesystem-related access errors. ( BZ#2114573 ) Application is stuck in Relocating state during relocate Multicloud Object Gateway allowed multiple persistent volume (PV) objects of the same name or namespace to be added to the S3 store on the same path. Due to this, Ramen does not restore the PV because it detected multiple versions pointing to the same claimRef . Workaround: Use S3 CLI or equivalent to clean up the duplicate PV objects from the S3 store. Keep only the one that has a timestamp closer to the failover or relocate time. Result: The restore operation will proceed to completion and the failover or relocate operation proceeds to the step. ( BZ#2120201 ) Disaster recovery workloads remain stuck when deleted When deleting a workload from a cluster, the corresponding pods might not terminate with events such as FailedKillPod . This might cause delay or failure in garbage collecting dependent DR resources such as the PVC , VolumeReplication , and VolumeReplicationGroup . It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected. Workaround: Reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination and subsequently related DR API resources are also garbage collected. ( BZ#2159791 ) Application failover hangs in FailingOver state when the managed clusters are on different versions of OpenShift Container Platform and OpenShift Data Foundation Disaster Recovery solution with OpenShift Data Foundation 4.14 protects and restores persistent volume claim (PVC) data in addition to the persistent volume (PV) data. If the primary cluster is on an older OpenShift Data Foundation version and the target cluster is updated to 4.14 then the failover will be stuck as the S3 store will not have the PVC data. Workaround: When upgrading the Disaster Recovery clusters, the primary cluster must be upgraded first and then the post-upgrade steps must be run. ( BZ#2214306 ) When DRPolicy is applied to multiple applications under same namespace, volume replication group is not created When a DRPlacementControl (DRPC) is created for applications that are co-located with other applications in the namespace, the DRPC has no label selector set for the applications. If any subsequent changes are made to the label selector, the validating admission webhook in the OpenShift Data Foundation Hub controller rejects the changes. Workaround: Until the admission webhook is changed to allow such changes, the DRPC validatingwebhookconfigurations can be patched to remove the webhook: ( BZ#2210762 ) Failover of apps from c1 to c2 cluster hang in FailingOver The failover action is not disabled by Ramen when data is not uploaded to the s3 store due to s3 store misconfiguration.This means the cluster data is not available on the failover cluster during the failover. Therefore, failover cannot be completed. Workaround: Inspect the ramen logs after initial deployment to insure there are no s3 configuration errors reported. ( BZ#2248723 ) Potential risk of data loss after hub recovery A potential data loss risk exists following hub recovery due to an eviction routine designed to clean up orphaned resources. 
This routine identifies and marks AppliedManifestWorks instances lacking corresponding ManifestWorks for collection. A hardcoded grace period of one hour is provided. After this period elapses, any resources associated with the AppliedManifestWork become subject to garbage collection. If the hub cluster fails to regenerate corresponding ManifestWorks within the initial one hour window, data loss could occur. This highlights the importance of promptly addressing any issues that might prevent the recreation of ManifestWorks post-hub recovery to minimize the risk of data loss. ( BZ-2252933 ) 7.1.1. DR upgrade This section describes the issues and workarounds related to upgrading Red Hat OpenShift Data Foundation from version 4.13 to 4.14 in disaster recovery environment. Incorrect value cached status.preferredDecision.ClusterNamespace When OpenShift Data Foundation is upgraded from version 4.13 to 4.14, the disaster recovery placement control (DRPC) might have incorrect value cached in status.preferredDecision.ClusterNamespace . As a result, the DRPC incorrectly enters the WaitForFencing PROGRESSION instead of detecting that the failover is already complete. The workload on the managed clusters is not affected by this issue. Workaround: To identify the affected DRPCs, check for any DRPC that is in the state FailedOver as CURRENTSTATE and are stuck in the WaitForFencing PROGRESSION. To clear the incorrect value edit the DRPC subresource and delete the line, status.PreferredCluster.ClusterNamespace : To verify the DRPC status, check if the PROGRESSION is in COMPLETED state and FailedOver as CURRENTSTATE. ( BZ#2215442 ) 7.2. Ceph Poor performance of the stretch clusters on CephFS Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of metadata server (MDS) on multi-site Data Foundation clusters. ( BZ#1982116 ) SELinux relabelling issue with a very high number of files When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic and it is tied to how SELinux relabelling is handled by the Kubelet. This issue is observed with any filesystem based volumes having very high file counts. In OpenShift Data Foundation, the issue is seen when using CephFS based volumes with a very high number of files. There are different ways to workaround this issue. Depending on your business needs you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251 . ( Jira#3327 ) Ceph is inaccessible after crash or shutdown tests are run In a stretch cluster, when a monitor is revived and is in the probing stage for other monitors to receive the latest information such as MonitorMap or OSDMap , it is unable to enter stretch_mode at the time it is in the probing stage. This prevents it from correctly setting the elector's disallowed_leaders list. Assuming that the revived monitor actually has the best score, it will think that it is best fit to be a leader in the current election round and will cause the election phase of the monitors to get stuck because it will keep proposing itself and will keep getting rejected by the surviving monitors because of the disallowed_leaders list. This leads to the monitors getting stuck in election, and Ceph eventually becomes unresponsive. 
To workaround this issue, when stuck in election and Ceph becomes unresponsive, reset the Connectivity Scores of each monitor by using the command: If this doesn't work, restart the monitors one by one. Election will then be unstuck, monitors will be able to elect a leader, form a quorum, and Ceph will become responsive again. ( BZ#2241937 ) Ceph reports no active mgr after workload deployment After workload deployment, Ceph manager loses connectivity to MONs or is unable to respond to its liveness probe. This causes the ODF cluster status to report that there is "no active mgr". This causes multiple operations that use the Ceph manager for request processing to fail. For example, volume provisioning, creating CephFS snapshots, and others. To check the status of the ODF cluster, use the command oc get cephcluster -n openshift-storage . In the status output, the status.ceph.details.MGR_DOWN field will have the message "no active mgr" if your cluster has this issue. To workaround this issue, restart the Ceph manager pods using the following commands: After running these commands, the ODF cluster status reports a healthy cluster, with no warnings or errors regarding MGR_DOWN . ( BZ#2244873 ) CephBlockPool creation fails when custom deviceClass is used in StorageCluster Due to a known issue, CephBlockPool creation fails when custom deviceClass is used in StorageCluster. ( BZ#2248487 ) 7.3. CSI Driver Automatic flattening of snapshots does not work When there is a single common parent RBD PVC, if volume snapshot, restore, and delete snapshot are performed in a sequence more than 450 times, it is further not possible to take volume snapshot or clone of the common parent RBD PVC. To workaround this issue, instead of performing volume snapshot, restore, and delete snapshot in a sequence, you can use PVC to PVC clone to completely avoid this issue. If you hit this issue, contact customer support to perform manual flattening of the final restore PVCs to continue to take volume snapshot or clone of the common parent PVC again. ( BZ#2232163 ) 7.4. OpenShift Data Foundation console Missing NodeStageVolume RPC call blocks new pods from going into Running state NodeStageVolume RPC call is not being issued blocking some pods from going into Running state. The new pods are stuck in Pending forever. To workaround this issue, scale down all the affected pods at once or do a node reboot. After applying the workaround, all pods should go into Running state. ( BZ#2244353 ) Backups are failing to transfer data In some situations, backups fail to transfer data, and snapshot PVC is stuck in Pending state. ( BZ#2248117 )
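For the "no active mgr" issue described above, a simple way to confirm whether the condition is present before and after restarting the Ceph manager pods (a sketch; the exact field layout may differ slightly between versions) is to search the CephCluster status for the MGR_DOWN detail:

# Look for the MGR_DOWN health detail in the CephCluster status
oc get cephcluster -n openshift-storage -o yaml | grep -A 3 MGR_DOWN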
[ "oc patch validatingwebhookconfigurations vdrplacementcontrol.kb.io-lq2kz --type=json --patch='[{\"op\": \"remove\", \"path\": \"/webhooks\"}]'", "oc get drpc -o yaml", "oc edit --subresource=status drpc -n <namespace> <name>", "`ceph daemon mon.{name} connection scores reset`", "oc scale deployment -n openshift-storage rook-ceph-mgr-a --replicas=0", "oc scale deployment -n openshift-storage rook-ceph-mgr-a --replicas=1" ]
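For the pvcSelector workaround described earlier in this chapter, a rough sketch of labeling a workload's PVCs follows; the namespace, PVC name, and label are placeholders, and the assumption is that the recreated DRPlacementControl then carries a matching standard Kubernetes label selector in spec.pvcSelector (for example, matchLabels with appname: busybox).

# Label every PVC that belongs to this workload with a unique key/value pair
oc label pvc busybox-pvc appname=busybox -n busybox-sample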
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/4.14_release_notes/known-issues
10.4. Preparation for IBM Power Systems Servers
10.4. Preparation for IBM Power Systems Servers Important Ensure that the real-base boot parameter is set to c00000 , otherwise you might see errors such as: IBM Power Systems servers offer many options for partitioning, virtual or native devices, and consoles. If you are using a non-partitioned system, you do not need any pre-installation setup. For systems using the HVSI serial console, hook up your console to the T2 serial port. If using a partitioned system the steps to create the partition and start the installation are largely the same. You should create the partition at the HMC and assign some CPU and memory resources, as well as SCSI and Ethernet resources, which can be either virtual or native. The HMC create partition wizard steps you through the creation. For more information on creating the partition, see the Partitioning for Linux with an HMC PDF in the IBM Systems Hardware Information Center. If you are using virtual SCSI resources, rather than native SCSI, you must configure a 'link' to the virtual SCSI serving partition, and then configure the virtual SCSI serving partition itself. You create a 'link' between the virtual SCSI client and server slots using the HMC. You can configure a virtual SCSI server on either Virtual I/O Server (VIOS) or IBM i, depending on which model and options you have. If you are installing using Intel iSCSI Remote Boot, all attached iSCSI storage devices must be disabled. Otherwise, the installation will succeed but the installed system will not boot. For more information on using virtual devices, see the IBM Redbooks publication Virtualizing an Infrastructure with System p and Linux . Once you have your system configured, you need to Activate from the HMC or power it on. Depending on the type of installation, you need to configure SMS to correctly boot the system into the installation program.
[ "DEFAULT CATCH!, exception-handler=fff00300" ]
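One possible way to check and correct the real-base setting mentioned in the Important note above is from the Open Firmware prompt before booting the installation program; this is only a sketch, since the prompt and the available firmware menus vary by machine and firmware level.

# At the Open Firmware "0 >" prompt
printenv real-base
setenv real-base c00000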
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installation-planning-hardware-preparation-ppc
function::log
function::log Name function::log - Send a line to the common trace buffer. Synopsis Arguments msg The formatted message string. General Syntax log(msg:string) Description This function logs data. log sends the message immediately to staprun and to the bulk transport (relayfs) if it is being used. If the last character given is not a newline, then one is added. This function is not as efficient as printf and should be used only for urgent messages.
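For example, a one-line probe run with the stap command (assuming the systemtap packages are installed and you have permission to run probes) shows the function in action:

# Send a single line through the common trace buffer, then exit
stap -e 'probe begin { log("hello from the log tapset"); exit() }'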
[ "function log(msg:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-log
Chapter 2. 20 March 2025
Chapter 2. 20 March 2025 This release of Red Hat Ansible Lightspeed includes the following enhancements: Ability to generate roles and view role explanations Role generation and viewing role explanations are now supported on the Red Hat Ansible Lightspeed cloud service. You can create roles within Ansible collections from the Ansible VS Code extension using a natural language interface in English. You can also view the explanations for new or existing roles. For more information, see Creating roles and viewing role explanations . Availability of REST API on on-premise deployments Platform administrators can now configure and use the Red Hat Ansible Lightspeed REST API to build a custom automation development and tooling workflow outside of VS Code. For more information, see Using the Ansible Lightspeed REST API and Ansible AI Connect. 1.0.0 (v1) in the API catalog. Removal of unnecessary settings from the initial configuration of Ansible VS Code extension The initial configuration of the Ansible VS Code extension now has fewer settings that are enabled by default. For more information, see Configuring the Ansible VS Code extension .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_release_notes/lightspeed-key-features-20march2025_lightspeed-release-notes
Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1]
Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1] Description OAuthAuthorizeToken describes an OAuth authorization token Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources clientName string ClientName references the client that created this token. codeChallenge string CodeChallenge is the optional code_challenge associated with this authorization code, as described in rfc7636 codeChallengeMethod string CodeChallengeMethod is the optional code_challenge_method associated with this authorization code, as described in rfc7636 expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURI string RedirectURI is the redirection associated with the token. scopes array (string) Scopes is an array of the requested scopes. state string State data from request userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token. UserUID and UserName must both match for this token to be valid. 3.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthauthorizetokens DELETE : delete collection of OAuthAuthorizeToken GET : list or watch objects of kind OAuthAuthorizeToken POST : create an OAuthAuthorizeToken /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens GET : watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} DELETE : delete an OAuthAuthorizeToken GET : read the specified OAuthAuthorizeToken PATCH : partially update the specified OAuthAuthorizeToken PUT : replace the specified OAuthAuthorizeToken /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} GET : watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/oauth.openshift.io/v1/oauthauthorizetokens HTTP method DELETE Description delete collection of OAuthAuthorizeToken Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthAuthorizeToken Table 3.3. 
HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeTokenList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthAuthorizeToken Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.2. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens HTTP method GET Description watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken HTTP method DELETE Description delete an OAuthAuthorizeToken Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthAuthorizeToken Table 3.11. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthAuthorizeToken Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthAuthorizeToken Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.4. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} Table 3.17. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken HTTP method GET Description watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
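As a quick illustration of the list endpoint above, the same REST path can be exercised with the oc client (a sketch that assumes a logged-in session with sufficient privileges to read OAuth tokens):

# List OAuthAuthorizeToken objects through the documented API path
oc get --raw /apis/oauth.openshift.io/v1/oauthauthorizetokens
# Or address the resource by its fully qualified name
oc get oauthauthorizetokens.oauth.openshift.io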
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/oauth_apis/oauthauthorizetoken-oauth-openshift-io-v1
Chapter 18. Upgrading Streams for Apache Kafka and Kafka
Chapter 18. Upgrading Streams for Apache Kafka and Kafka Upgrade your Kafka cluster with no downtime. Streams for Apache Kafka 2.7 supports and uses Apache Kafka version 3.7.0. Kafka 3.6.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7. You upgrade to the latest supported version of Kafka when you install the latest version of Streams for Apache Kafka. 18.1. Upgrade prerequisites Before you begin the upgrade process, make sure you are familiar with any upgrade changes described in the Streams for Apache Kafka 2.7 on Red Hat Enterprise Linux Release Notes . 18.2. Strategies for upgrading clients Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved. Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and client independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable. If you upgrade clients before brokers, some new features may not work as they are not yet supported by brokers. However, brokers can handle producers and consumers running with different versions and supporting different log message versions. 18.3. Upgrading Kafka clusters Upgrade a KRaft-based Kafka cluster to a newer supported Kafka version and KRaft metadata version. You update the installation files, then configure and restart all Kafka nodes. After performing these steps, data is transmitted between the Kafka brokers according to the new metadata version. Warning When downgrading a KRaft-based Strimzi Kafka cluster to a lower version, like moving from 3.7.0 to 3.6.0, ensure that the metadata version used by the Kafka cluster is a version supported by the Kafka version you want to downgrade to. The metadata version for the Kafka version you are downgrading from must not be higher than the version you are downgrading to. Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. Streams for Apache Kafka is installed on each host , and the configuration files are available. You have downloaded the installation files . Procedure For each Kafka node in your Streams for Apache Kafka cluster, starting with controller nodes and then brokers, and one at a time: Download the Streams for Apache Kafka archive from the Streams for Apache Kafka software downloads page . Note If prompted, log in to your Red Hat account. On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-bin.zip file. mkdir /tmp/kafka unzip amq-streams-<version>-bin.zip -d /tmp/kafka If running, stop the Kafka broker running on the host. /opt/kafka/bin/kafka-server-stop.sh jcmd | grep kafka If you are running Kafka on a multi-node cluster, see Section 3.6, "Performing a graceful rolling restart of Kafka brokers" . 
Delete the libs and bin directories from your existing installation: rm -rf /opt/kafka/libs /opt/kafka/bin Copy the libs and bin directories from the temporary directory: cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/ cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/ If required, update the configuration files in the config directory to reflect any changes in the new Kafka version. Delete the temporary directory. rm -r /tmp/kafka Restart the updated Kafka node: Restarting nodes with combined roles /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties Restarting controller nodes /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties Restarting nodes with broker roles /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/broker.properties The Kafka broker starts using the binaries for the latest Kafka version. For information on restarting brokers in a multi-node cluster, see Section 3.6, "Performing a graceful rolling restart of Kafka brokers" . Check that Kafka is running: jcmd | grep kafka Update the Kafka metadata version: ./bin/kafka-features.sh --bootstrap-server <broker_host>:<port> upgrade --metadata 3.7 Use the correct version for the Kafka version you are upgrading to. Note Verify that a restarted Kafka broker has caught up with the partition replicas it is following using the kafka-topics.sh tool to ensure that all replicas contained in the broker are back in sync. For instructions, see Listing and describing topics . Upgrading client applications Ensure all Kafka client applications are updated to use the new version of the client binaries as part of the upgrade process and verify their compatibility with the Kafka upgrade. If needed, coordinate with the team responsible for managing the client applications. Tip To check that a client is using the latest message format, use the kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec metric. The metric shows 0 if the latest message format is being used. 18.4. Upgrading Kafka components Upgrade Kafka components on a host machine to use the latest version of Streams for Apache Kafka. You can use the Streams for Apache Kafka installation files to upgrade the following components: Kafka Connect MirrorMaker Kafka Bridge (separate ZIP file) Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. You have downloaded the installation files . You have upgraded Kafka . If a Kafka component is running on the same host as Kafka, you'll also need to stop and start Kafka when upgrading. Procedure For each host running an instance of the Kafka component: Download the Streams for Apache Kafka or Kafka Bridge installation files from the Streams for Apache Kafka software downloads page . Note If prompted, log in to your Red Hat account. On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-bin.zip file. mkdir /tmp/kafka unzip amq-streams-<version>-bin.zip -d /tmp/kafka For Kafka Bridge, extract the amq-streams-<version>-bridge-bin.zip file. If running, stop the Kafka component running on the host. 
Delete the libs and bin directories from your existing installation: rm -rf /opt/kafka/libs /opt/kafka/bin Copy the libs and bin directories from the temporary directory: cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/ cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/ If required, update the configuration files in the config directory to reflect any changes in the new versions. Delete the temporary directory. rm -r /tmp/kafka Start the Kafka component using the appropriate script and properties files. Starting Kafka Connect in standalone mode /opt/kafka/bin/connect-standalone.sh \ /opt/kafka/config/connect-standalone.properties <connector1> .properties [ <connector2> .properties ...] Starting Kafka Connect in distributed mode /opt/kafka/bin/connect-distributed.sh \ /opt/kafka/config/connect-distributed.properties Starting MirrorMaker 2 in dedicated mode /opt/kafka/bin/connect-mirror-maker.sh \ /opt/kafka/config/connect-mirror-maker.properties Starting Kafka Bridge su - kafka ./bin/kafka_bridge_run.sh \ --config-file= <path> /application.properties Verify that the Kafka component is running, and producing or consuming data as expected. Verifying Kafka Connect in standalone mode is running jcmd | grep ConnectStandalone Verifying Kafka Connect in distributed mode is running jcmd | grep ConnectDistributed Verifying MirrorMaker 2 in dedicated mode is running jcmd | grep mirrorMaker Verifying Kafka Bridge is running by checking the log HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092
[ "mkdir /tmp/kafka unzip amq-streams-<version>-bin.zip -d /tmp/kafka", "/opt/kafka/bin/kafka-server-stop.sh jcmd | grep kafka", "rm -rf /opt/kafka/libs /opt/kafka/bin", "cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/ cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/", "rm -r /tmp/kafka", "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties", "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties", "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/broker.properties", "jcmd | grep kafka", "./bin/kafka-features.sh --bootstrap-server <broker_host>:<port> upgrade --metadata 3.7", "mkdir /tmp/kafka unzip amq-streams-<version>-bin.zip -d /tmp/kafka", "rm -rf /opt/kafka/libs /opt/kafka/bin", "cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/ cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/", "rm -r /tmp/kafka", "/opt/kafka/bin/connect-standalone.sh /opt/kafka/config/connect-standalone.properties <connector1> .properties [ <connector2> .properties ...]", "/opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties", "/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties", "su - kafka ./bin/kafka_bridge_run.sh --config-file= <path> /application.properties", "jcmd | grep ConnectStandalone", "jcmd | grep ConnectDistributed", "jcmd | grep mirrorMaker", "HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092" ]
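To follow the note in Section 18.3 about confirming that a restarted broker has caught up with its partition replicas, one convenient check (a sketch; replace the bootstrap address with one of your own brokers) is to list only the under-replicated partitions, which should come back empty once the broker is in sync:

# An empty result means all replicas hosted by the cluster are in sync
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-replicated-partitions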
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-upgrade-str
1.3. The Cluster and Pacemaker Configuration Files
1.3. The Cluster and Pacemaker Configuration Files The configuration files for the Red Hat High Availability add-on are cluster.conf and cib.xml . Do not edit the cib.xml file directly; use the pcs interface instead. The cluster.conf file provides the cluster parameters used by corosync , the cluster manager that Pacemaker is built on. The cib.xml file is an XML file that represents both the cluster's configuration and the current state of all resources in the cluster. This file is used by Pacemaker's Cluster Information Base (CIB). The contents of the CIB are automatically kept in sync across the entire cluster.
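Since cib.xml must not be edited by hand, a safe way to inspect the cluster configuration is through the pcs interface; the following sketch shows two read-only commands.

# Dump the raw CIB XML without modifying it
pcs cluster cib
# Show a human-readable view of the cluster and resource configuration
pcs config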
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-configfileoverview-haar
Chapter 6. HardwareData [metal3.io/v1alpha1]
Chapter 6. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HardwareDataSpec defines the desired state of HardwareData. 6.1.1. .spec Description HardwareDataSpec defines the desired state of HardwareData. Type object Property Type Description hardware object The hardware discovered on the host during its inspection. 6.1.2. .spec.hardware Description The hardware discovered on the host during its inspection. Type object Property Type Description cpu object CPU describes one processor on the host. firmware object Firmware describes the firmware on the host. hostname string nics array nics[] object NIC describes one network interface on the host. ramMebibytes integer storage array storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. systemVendor object HardwareSystemVendor stores details about the whole hardware system. 6.1.3. .spec.hardware.cpu Description CPU describes one processor on the host. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 6.1.4. .spec.hardware.firmware Description Firmware describes the firmware on the host. Type object Property Type Description bios object The BIOS for this firmware 6.1.5. .spec.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 6.1.6. .spec.hardware.nics Description Type array 6.1.7. .spec.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN. 6.1.8. .spec.hardware.nics[].vlans Description The VLANs available Type array 6.1.9. .spec.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN. Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 6.1.10. .spec.hardware.storage Description Type array 6.1.11. 
.spec.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description alternateNames array (string) A list of alternate Linux device names of the disk, e.g. "/dev/sda". Note that this list is not exhaustive, and names may not be stable across reboots. hctl string The SCSI location of the device model string Hardware model name string A Linux device name of the disk, e.g. "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". This will be a name that is stable across reboots if one is available. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 6.1.12. .spec.hardware.systemVendor Description HardwareSystemVendor stores details about the whole hardware system. Type object Property Type Description manufacturer string productName string serialNumber string 6.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hardwaredata GET : list objects of kind HardwareData /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata DELETE : delete collection of HardwareData GET : list objects of kind HardwareData POST : create a HardwareData /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata/{name} DELETE : delete a HardwareData GET : read the specified HardwareData PATCH : partially update the specified HardwareData PUT : replace the specified HardwareData 6.2.1. /apis/metal3.io/v1alpha1/hardwaredata HTTP method GET Description list objects of kind HardwareData Table 6.1. HTTP responses HTTP code Reponse body 200 - OK HardwareDataList schema 401 - Unauthorized Empty 6.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata HTTP method DELETE Description delete collection of HardwareData Table 6.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HardwareData Table 6.3. HTTP responses HTTP code Reponse body 200 - OK HardwareDataList schema 401 - Unauthorized Empty HTTP method POST Description create a HardwareData Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body HardwareData schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 201 - Created HardwareData schema 202 - Accepted HardwareData schema 401 - Unauthorized Empty 6.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the HardwareData HTTP method DELETE Description delete a HardwareData Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HardwareData Table 6.10. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HardwareData Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HardwareData Table 6.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body HardwareData schema Table 6.15. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 201 - Created HardwareData schema 401 - Unauthorized Empty
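The endpoints listed above can be exercised with standard OpenShift client tooling. The following is a small sketch rather than a definitive procedure: the namespace openshift-machine-api and the object name worker-0 are assumptions for the example and are not defined by this API.

# List HardwareData objects and print one host's CPU model from the inspected inventory
oc get hardwaredata -n openshift-machine-api
oc get hardwaredata worker-0 -n openshift-machine-api -o jsonpath='{.spec.hardware.cpu.model}{"\n"}'

# Equivalent raw request against the documented GET endpoint
oc get --raw /apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/hardwaredata/worker-0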
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/hardwaredata-metal3-io-v1alpha1
Release notes
Release notes Red Hat OpenShift GitOps 1.15 Highlights of what is new and what has changed with this OpenShift GitOps release Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/release_notes/index
6.6. Resource Operations
6.6. Resource Operations To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds. Table 6.4, "Properties of an Operation" summarizes the properties of a resource monitoring operation. Table 6.4. Properties of an Operation Field Description id Unique name for the action. The system assigns this when you configure an operation. name The action to perform. Common values: monitor , start , stop interval If set to a nonzero value, a recurring operation is created that repeats at this frequency, in seconds. A nonzero value makes sense only when the action name is set to monitor . A recurring monitor action will be executed immediately after a resource start completes, and subsequent monitor actions are scheduled starting at the time the monitor action completed. For example, if a monitor action with interval=20s is executed at 01:00:00, the monitor action does not occur at 01:00:20, but at 20 seconds after the first monitor action completes. If set to zero, which is the default value, this parameter allows you to provide values to be used for operations created by the cluster. For example, if the interval is set to zero, the name of the operation is set to start , and the timeout value is set to 40, then Pacemaker will use a timeout of 40 seconds when starting this resource. A monitor operation with a zero interval allows you to set the timeout / on-fail / enabled values for the probes that Pacemaker does at startup to get the current status of all resources when the defaults are not desirable. timeout If the operation does not complete in the amount of time set by this parameter, abort the operation and consider it failed. The default value is the value of timeout if set with the pcs resource op defaults command, or 20 seconds if it is not set. If you find that your system includes a resource that requires more time than the system allows to perform an operation (such as start , stop , or monitor ), investigate the cause and if the lengthy execution time is expected you can increase this value. The timeout value is not a delay of any kind, nor does the cluster wait the entire timeout period if the operation returns before the timeout period has completed. on-fail The action to take if this action ever fails. Allowed values: * ignore - Pretend the resource did not fail * block - Do not perform any further operations on the resource * stop - Stop the resource and do not start it elsewhere * restart - Stop the resource and start it again (possibly on a different node) * fence - STONITH the node on which the resource failed * standby - Move all resources away from the node on which the resource failed The default for the stop operation is fence when STONITH is enabled and block otherwise. All other operations default to restart . enabled If false , the operation is treated as if it does not exist. Allowed values: true , false 6.6.1. Configuring Resource Operations You can configure monitoring operations when you create a resource, using the following command. For example, the following command creates an IPaddr2 resource with a monitoring operation. 
The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2 . A monitoring operation will be performed every 30 seconds. Alternately, you can add a monitoring operation to an existing resource with the following command. Use the following command to delete a configured resource operation. Note You must specify the exact operation properties to properly remove an existing operation. To change the values of a monitoring option, you can update the resource. For example, you can create a VirtualIP with the following command. By default, this command creates these operations. To change the stop timeout operation, execute the following command. Note When you update a resource's operation with the pcs resource update command, any options you do not specifically call out are reset to their default values. 6.6.2. Configuring Global Resource Operation Defaults You can use the following command to set global default values for monitoring operations. For example, the following command sets a global default of a timeout value of 240 seconds for all monitoring operations. To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command. For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240 seconds. Note that a cluster resource will use the global default only when the option is not specified in the cluster resource definition. By default, resource agents define the timeout option for all operations. For the global operation timeout value to be honored, you must create the cluster resource without the timeout option explicitly or you must remove the timeout option by updating the cluster resource, as in the following command. For example, after setting a global default of a timeout value of 240 seconds for all monitoring operations and updating the cluster resource VirtualIP to remove the timeout value for the monitor operation, the resource VirtualIP will then have timeout values for start , stop , and monitor operations of 20s, 40s and 240s, respectively. The global default value for timeout operations is applied here only on the monitor operation, where the default timeout option was removed by the preceding command.
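The note about removing operations is easier to see with a concrete sequence. The following sketch reuses the VirtualIP resource from the examples above; the property values match the default operations shown earlier and may differ on your cluster, so treat this as an illustration rather than a prescribed procedure.

# Show the operations currently defined for the resource
pcs resource show VirtualIP

# Remove the existing monitor operation; the properties must match exactly
pcs resource op remove VirtualIP monitor interval=10s timeout=20s

# Add a replacement monitor operation with a 60-second interval
pcs resource op add VirtualIP monitor interval=60s timeout=20s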
[ "pcs resource create resource_id standard:provider:type|type [ resource_options ] [op operation_action operation_options [ operation_type operation_options ]...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s", "pcs resource op add resource_id operation_action [ operation_properties ]", "pcs resource op remove resource_id operation_name operation_properties", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)", "pcs resource update VirtualIP op stop interval=0s timeout=40s pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults [ options ]", "pcs resource op defaults timeout=240s", "pcs resource op defaults timeout: 240s", "pcs resource update VirtualIP op monitor interval=10s", "pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-resourceoperate-haar
5.238. perl-Sys-Virt
5.238. perl-Sys-Virt 5.238.1. RHBA-2012:0754 - perl-Sys-Virt bug fix and enhancement update Updated perl-Sys-Virt packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The perl-Sys-Virt packages provide application programming interfaces (APIs) to manage virtual machines from Perl with the libvirt library. The perl-Sys-Virt package has been upgraded to upstream version 0.9.10, which provides a number of bug fixes and enhancements over the previous version. (BZ# 752436 ) Bug Fixes BZ# 661801 Prior to this update, the perl-Sys-Virt spec file did not contain the "perl(Time::HiRes)" requirement. As a consequence, perl-Sys-Virt could not be rebuilt in mock mode. This update adds the missing requirement to the spec file. Now, perl-Sys-Virt can be rebuilt in mock mode as expected. BZ# 747483 Prior to this update, the perl-Sys-Virt man page did not document the "$flags" parameter for the "get_xml_description" executable. This update modifies the man page so that the parameter is correctly documented. BZ# 748689 Prior to this update, the default settings for the remote domain memory statistics used a length of 16 bits only. As a consequence, the get_node_cpu_stats() function could send the libvirt error "code: 1, message: internal error nparams too large". This update modifies libvirt so that the maximum length is now 1024 bits. BZ# 773572 Prior to this update, the bandwidth in the "block_pull and set_block_job_speed" methods was incorrectly given in Kilobytes per second (Kb/s). This update changes the bandwidth unit to Megabytes per second (Mb/s). BZ# 800766 Prior to this update, the bandwidth for maximum migration bandwidth was incorrectly given in Kilobytes per second (Kb/s). This update changes the maximum migration bandwidth unit to Megabytes per second (Mb/s). BZ# 809906 Prior to this update, the documentation for "Sys::Virt::StoragePool" incorrectly stated that the object method "get_info()" returns a hash. This update corrects this misprint and correctly states that the object method returns a hash reference. Enhancement BZ# 800734 Prior to this update, the Perl API bindings could not handle tunable parameters in string format. As a consequence, the block I/O tunable parameters could not be read or updated. This update adds support for string parameters. Now, the block I/O tunable parameters can be read and updated from the Perl API. All users of perl-Sys-Virt are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/perl-sys-virt
Chapter 5. Configuring multi-supplier replication with certificate-based authentication
Chapter 5. Configuring multi-supplier replication with certificate-based authentication When you set up replication between two Directory Server instances, you can use certificate-based authentication instead of using a bind DN and password to authenticate to a replication partner. You can do so by adding a new server to the replication topology and setting up replication agreements between the new host and the existing server using certificate-based authentication. Important Certificate-based authentication requires TLS-encrypted connections. 5.1. Preparing accounts and a bind group for the use in replication agreements with certificate-based authentication To use certificate-based authentication in replication agreements, first prepare the accounts and store the client certificates in the userCertificate attributes of these accounts. Additionally, this procedure creates a bind group that you later use in the replication agreements. Perform this procedure on the existing host server1.example.com . Prerequisites You enabled TLS encryption in Directory Server. You stored the client certificates in distinguished encoding rules (DER) format in the /root/server1.der and /root/server2.der files. For details about client certificates and how to request them from your certificate authority (CA), see your CA's documentation. Procedure Create the ou=services entry if it does not exist: # ldapadd -D " cn=Directory Manager " -W -H ldaps://server1.example.com -x dn: ou=services,dc=example,dc=com objectClass: organizationalunit objectClass: top ou: services Create accounts for both servers, such as cn=server1,ou=services,dc=example,dc=com and cn=server2,ou=services,dc=example,dc=com : # ldapadd -D " cn=Directory Manager " -W -H ldaps://server1.example.com -x dn: cn=server1,ou=services,dc=example,dc=com objectClass: top objectClass: person objectClass: inetOrgPerson sn: server1 cn: server1 userPassword: password userCertificate:< file:// /root/server1.der adding new entry "cn=server1,ou=services,dc=example,dc=com" dn: cn=server2,ou=services,dc=example,dc=com objectClass: top objectClass: person objectClass: inetOrgPerson sn: server2 cn: server2 userPassword: password userCertificate:< file:// /root/server2.der adding new entry "cn=server2,ou=services,dc=example,dc=com" Create a group, such as cn=repl_servers,dc=groups,dc=example,dc=com : # dsidm -D " cn=Directory Manager " ldaps://server1.example.com -b " dc=example,dc=com " group create --cn " repl_servers " Add the two replication accounts as members to the group: # dsidm -D " cn=Directory Manager " ldaps://server1.example.com -b " dc=example,dc=com " group add_member repl_servers "cn=server1,ou=services,dc=example,dc=com" # dsidm -D " cn=Directory Manager " ldaps://server1.example.com -b " dc=example,dc=com " group add_member repl_servers "cn=server2,ou=services,dc=example,dc=com" Additional resources Enabling TLS-encrypted connections to Directory Server 5.2. Initializing a new server using a temporary replication manager account Certificate-based authentication uses the certificates stored in the directory. However, before you initialize a new server, the database on server2.example.com is empty and the accounts with the associated certificates do not exist. Therefore, replication using certificates is not possible before the database is initialized. You can overcome this problem by initializing server2.example.com with a temporary replication manager account. Prerequisites You installed the Directory Server instance on server2.example.com .
For details, see Setting up a new instance on the command line using a .inf file . The database for the dc=example,dc=com suffix exists. You enabled TLS encryption in Directory Server on both servers, server1.example.com and server2.example.com . Procedure On server2.example.com , enable replication for the dc=example,dc=com suffix: # dsconf -D " cn=Directory Manager " ldaps://server2.example.com replication enable --suffix " dc=example,dc=com " --role " supplier " --replica-id 2 --bind-dn " cn=replication manager,cn=config " --bind-passwd " password " This command configures the server2.example.com host as a supplier for the dc=example,dc=com suffix, and sets the replica ID of this host to 2 . Additionally, the command creates a temporary cn=replication manager,cn=config user with the specified password and allows this account to replicate changes for the suffix to this host. The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. On server1.example.com : Enable replication: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com replication enable --suffix=" dc=example,dc=com " --role=" supplier " --replica-id=" 1 " Create a temporary replication agreement which uses the temporary account from the previous step for authentication: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt create --suffix=" dc=example,dc=com " --host=" server2.example.com " --port= 636 --conn-protocol= LDAPS --bind-dn=" cn=Replication Manager,cn=config " --bind-passwd=" password " --bind-method= SIMPLE --init temporary_agreement Verification Verify that the initialization was successful: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt init-status --suffix " dc=example,dc=com " temporary_agreement Agreement successfully initialized. Additional resources Installing Red Hat Directory Server Enabling TLS-encrypted connections to Directory Server 5.3. Configuring multi-supplier replication with certificate-based authentication In a multi-supplier replication environment with certificate-based authentication, the replicas authenticate each other using certificates. Prerequisites You set up certificate-based authentication on both hosts, server1.example.com and server2.example.com . Directory Server trusts the certificate authority (CA) that issues the client certificates. The client certificates meet the requirements set in /etc/dirsrv/slapd-instance_name/certmap.conf on the servers.
Procedure On server1.example.com : Remove the temporary replication agreement: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt delete --suffix=" dc=example,dc=com " temporary_agreement Add the cn=repl_servers,dc=groups,dc=example,dc=com bind group to the replication settings: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com replication set --suffix=" dc=example,dc=com " --repl-bind-group "cn=repl_servers,dc=groups,dc=example,dc=com" Configure Directory Server to automatically check for changes in the bind group: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com replication set --suffix=" dc=example,dc=com " --repl-bind-group-interval= 0 On server2.example.com : Remove the temporary replication manager account: # dsconf -D " cn=Directory Manager " ldaps://server2.example.com replication delete-manager --suffix=" dc=example,dc=com " --name=" Replication Manager " Add the cn=repl_servers,dc=groups,dc=example,dc=com bind group to the replication settings: # dsconf -D " cn=Directory Manager " ldaps://server2.example.com replication set --suffix=" dc=example,dc=com " --repl-bind-group "cn=repl_servers,dc=groups,dc=example,dc=com" Configure Directory Server to automatically check for changes in the bind group: # dsconf -D " cn=Directory Manager " ldap://server2.example.com replication set --suffix=" dc=example,dc=com " --repl-bind-group-interval=0 Create the replication agreement with certificate-based authentication: dsconf -D " cn=Directory Manager " ldaps://server2.example.com repl-agmt create --suffix=" dc=example,dc=com " --host=" server1.example.com " --port= 636 --conn-protocol= LDAPS --bind-method=" SSLCLIENTAUTH " --init server2-to-server1 On server1.example.com , create the replication agreement with certificate-based authentication: dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt create --suffix=" dc=example,dc=com " --host=" server2.example.com " --port= 636 --conn-protocol= LDAPS --bind-method=" SSLCLIENTAUTH " --init server1-to-server2 Verification Verify on each server that the initialization was successful: # dsconf -D " cn=Directory Manager " ldaps://server1.example.com repl-agmt init-status --suffix " dc=example,dc=com " server1-to-server2 Agreement successfully initialized. # dsconf -D " cn=Directory Manager " ldaps://server2.example.com repl-agmt init-status --suffix " dc=example,dc=com " server2-to-server1 Agreement successfully initialized. Additional resources Setting up certificate-based authentication Changing the CA trust flags
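After both agreements are initialized, a quick way to confirm that certificate-based replication works in both directions is to write a test entry on one supplier and read it from the other. This is a minimal sketch using standard LDAP client tools; the cn=repltest entry is an arbitrary example and can be deleted once the check succeeds.

# On server1: add a throwaway test entry
ldapadd -D "cn=Directory Manager" -W -H ldaps://server1.example.com -x <<EOF
dn: cn=repltest,dc=example,dc=com
objectClass: top
objectClass: organizationalRole
cn: repltest
EOF

# On server2: after a short delay, the replicated entry should be visible
sleep 5
ldapsearch -D "cn=Directory Manager" -W -H ldaps://server2.example.com -x -b "dc=example,dc=com" "(cn=repltest)" dn

# Clean up the test entry (the deletion also replicates)
ldapdelete -D "cn=Directory Manager" -W -H ldaps://server1.example.com -x "cn=repltest,dc=example,dc=com"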
[ "ldapadd -D \" cn=Directory Manager \" -W -H ldaps://server1.example.com -x dn: ou=services,dc=example,dc=com objectClass: organizationalunit objectClass: top ou: services", "ldapadd -D \" cn=Directory Manager \" -W -H ldaps://server1.example.com -x dn: cn=server1,ou=services,dc=example,dc=com objectClass: top objectClass: person objectClass: inetOrgPerson sn: server1 cn: server1 userPassword: password userCertificate:< file:// /root/server1.der adding new entry \"cn=server1,ou=services,dc=example,dc=com\" dn: cn=server2,ou=services,dc=example,dc=com objectClass: top objectClass: person objectClass: inetOrgPerson sn: server2 cn: server2 userPassword: password userCertificate:< file:// /root/server2.der adding new entry \"cn=server2,ou=services,dc=example,dc=com\"", "dsidm -D \" cn=Directory Manager \" ldaps://server1.example.com -b \" dc=example,dc=com \" group create --cn \" repl_servers \"", "dsidm -D \" cn=Directory Manager \" ldaps://server1.example.com -b \" dc=example,dc=com \" group add_member repl_servers \"cn=server1,ou=services,dc=example,dc=com\" dsidm -D \" cn=Directory Manager \" ldaps://server1.example.com -b \" dc=example,dc=com \" group add_member repl_servers \"cn=server2,ou=services,dc=example,dc=com\"", "dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com replication enable --suffix \" dc=example,dc=com \" --role \" supplier \" --replica-id 2 --bind-dn \" cn=replication manager,cn=config \" --bind-passwd \" password \"", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com replication enable --suffix=\" dc=example,dc=com \" --role=\" supplier \" --replica-id=\" 1 \"", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt create --suffix=\" dc=example,dc=com \" --host=\" server1.example.com \" --port= 636 --conn-protocol= LDAPS --bind-dn=\" cn=Replication Manager,cn=config \" --bind-passwd=\" password \" --bind-method= SIMPLE --init temporary_agreement", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt init-status --suffix \" dc=example,dc=com \" temporary_agreement Agreement successfully initialized.", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt delete --suffix=\" dc=example,dc=com \" temporary_agreement", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com replication set --suffix=\" dc=example,dc=com \" --repl-bind-group \"cn=repl_servers,dc=groups,dc=example,dc=com\"", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com replication set --suffix=\" dc=example,dc=com \" --repl-bind-group-interval= 0", "dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com replication delete-manager --suffix=\" dc=example,dc=com \" --name=\" Replication Manager \"", "dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com replication set --suffix=\" dc=example,dc=com \" --repl-bind-group \"cn=repl_servers,dc=groups,dc=example,dc=com\"", "dsconf -D \" cn=Directory Manager \" ldap://server2.example.com replication set --suffix=\" dc=example,dc=com \" --repl-bind-group-interval=0", "dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com repl-agmt create --suffix=\" dc=example,dc=com \" --host=\" server1.example.com \" --port= 636 --conn-protocol= LDAPS --bind-method=\" SSLCLIENTAUTH \" --init server2-to-server1", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt create --suffix=\" dc=example,dc=com \" --host=\" server2.example.com \" --port= 636 --conn-protocol= LDAPS --bind-method=\" SSLCLIENTAUTH \" 
--init server1-to-server2", "dsconf -D \" cn=Directory Manager \" ldaps://server1.example.com repl-agmt init-status --suffix \" dc=example,dc=com \" server1-to-server2 Agreement successfully initialized. dsconf -D \" cn=Directory Manager \" ldaps://server2.example.com repl-agmt init-status --suffix \" dc=example,dc=com \" server2-to-server1 Agreement successfully initialized." ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_configuring-multi-supplier-replication-with-certificate-based-authentication_configuring-and-managing-replication
Chapter 21. Upgrading a split Controller overcloud
Chapter 21. Upgrading a split Controller overcloud This scenario contains an example upgrade process for an overcloud with Controller node services split onto multiple nodes. This includes the following node types: Multiple split high availability services using Pacemaker Multiple split Controller services Three Ceph MON nodes Three Ceph Storage nodes Multiple Compute nodes 21.1. Running the overcloud upgrade preparation The upgrade requires running the openstack overcloud upgrade prepare command, which performs the following tasks: Updates the overcloud plan to OpenStack Platform 16.2 Prepares the nodes for the upgrade Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Run the upgrade preparation command: Include the following options relevant to your environment: The environment file ( upgrades-environment.yaml ) with the upgrade-specific parameters ( -e ). The environment file ( rhsm.yaml ) with the registration and subscription parameters ( -e ). The environment file ( containers-prepare-parameter.yaml ) with your new container image locations ( -e ). In most cases, this is the same environment file that the undercloud uses. The environment file ( neutron-ovs.yaml ) to maintain OVS compatibility. Any custom configuration environment files ( -e ) relevant to your deployment. If applicable, your custom roles ( roles_data ) file using --roles-file . If applicable, your composable network ( network_data ) file using --networks-file . If you use a custom stack name, pass the name with the --stack option. Wait until the upgrade preparation completes. Download the container images: 21.2. Upgrading Pacemaker-based nodes Upgrade all nodes that host Pacemaker services to OpenStack Platform 16.2. The following roles include Pacemaker-based services: Controller Database (MySQL, Galera) Messaging (RabbitMQ) Load Balancing (HAProxy) Any other role that contains the following services: OS::TripleO::Services::Pacemaker OS::TripleO::Services::PacemakerRemote This process involves upgrading each node starting with the bootstrap node. Procedure Source the stackrc file: Identify the bootstrap node by running the following command on the undercloud node: Optional: Replace <stack_name> with the name of the stack. If not specified, the default is overcloud . Upgrade the bootstrap node: If the node contains any Ceph Storage containers, run the external upgrade command with the ceph_systemd tag: Replace <stack_name> with the name of your stack. This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the external upgrade command with the system_upgrade_transfer_data tag: This command copies the latest version of the database from an existing node to the bootstrap node.
Run the upgrade command with the nova_hybrid_state tag and run only the upgrade_steps_playbook.yaml playbook: This command launches temporary 16.2 containers on Compute nodes to help facilitate workload migration when you upgrade Compute nodes at a later step. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. Upgrade each Pacemaker-based node: If the node contains any Ceph Storage containers, run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade. Run the upgrade command with the system_upgrade tag on the node: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. In addition to this node, include any previously upgraded node in the --limit option. Repeat the upgrade process on each Pacemaker-based node until you have upgraded all Pacemaker-based nodes. 21.3. Upgrading non-Pacemaker Controller nodes Upgrade all nodes without Pacemaker-based services to OpenStack Platform 16.2. These nodes usually contain a specific OpenStack service. Examples of roles without Pacemaker-based services include the following: Networker Ironic Conductor Object Storage Any custom roles with services split or scaled from standard Controller nodes Do not include the following nodes in this grouping: Any Compute nodes Any Ceph Storage nodes This process involves upgrading each node. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. Repeat the upgrade process on each node until you have upgraded all Controller-based nodes. 21.4. Upgrading the operating system for Ceph MON nodes Upgrade the operating system for each Ceph MON node. It is recommended to upgrade each Ceph MON node individually to maintain a quorum among the nodes. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Select a Ceph MON node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade.
Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph MON node. This step does not upgrade the Ceph MON nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure. Select the next Ceph MON node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph MON node. This step does not upgrade the Ceph MON nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure. Select the final Ceph MON node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph MON node. This step does not upgrade the Ceph MON nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure. 21.5. Upgrading the operating system for Ceph Storage nodes If your deployment uses a Red Hat Ceph Storage cluster that was deployed using director, you must upgrade the operating system for each Ceph Storage node. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Select a Ceph Storage node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph Storage node. This step does not upgrade the Ceph Storage nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure.
Select the next Ceph Storage node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph Storage node. This step does not upgrade the Ceph Storage nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure. Select the final Ceph Storage node and upgrade the operating system: Run the external upgrade command with the ceph_systemd tag: This command performs the following functions: Changes the systemd units that control the Ceph Storage containers to use Podman management. Limits actions to the selected node using the ceph_ansible_limit variable. This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade. Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command runs the config-download playbooks and configures the composable services on the Ceph Storage node. This step does not upgrade the Ceph Storage nodes to Red Hat Ceph Storage 4. The Red Hat Ceph Storage 4 upgrade occurs in a later procedure. 21.6. Upgrading Compute nodes Upgrade all the Compute nodes to OpenStack Platform 16.2. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Migrate your instances. For more information on migration strategies, see Migrating virtual machines between Compute nodes . Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. To upgrade multiple Compute nodes in parallel, set the --limit option to a comma-separated list of nodes that you want to upgrade. First perform the system_upgrade task: Then perform the standard OpenStack service upgrade: 21.7. Synchronizing the overcloud stack The upgrade requires an update to the overcloud stack to ensure that the stack resource structure and parameters align with a fresh deployment of OpenStack Platform 16.2. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Edit the containers-prepare-parameter.yaml file and remove the following parameters and their values: ceph3_namespace ceph3_tag ceph3_image name_prefix_stein name_suffix_stein namespace_stein tag_stein To re-enable fencing in your overcloud, set the EnableFencing parameter to true in the fencing.yaml environment file.
Run the upgrade finalization command: Include the following options relevant to your environment: The environment file ( upgrades-environment.yaml ) with the upgrade-specific parameters ( -e ). The environment file ( fencing.yaml ) with the EnableFencing parameter set to true . The environment file ( rhsm.yaml ) with the registration and subscription parameters ( -e ). The environment file ( containers-prepare-parameter.yaml ) with your new container image locations ( -e ). In most cases, this is the same environment file that the undercloud uses. The environment file ( neutron-ovs.yaml ) to maintain OVS compatibility. Any custom configuration environment files ( -e ) relevant to your deployment. If applicable, your custom roles ( roles_data ) file using --roles-file . If applicable, your composable network ( network_data ) file using --networks-file . If you use a custom stack name, pass the name with the --stack option. Wait until the stack synchronization completes. Important You do not need the upgrades-environment.yaml file for any further deployment operations.
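Because the Ceph MON and Ceph Storage procedures repeat the same three commands for every node, they lend themselves to a small wrapper loop. The following sketch assumes the default stack name overcloud and the example node names used in this chapter (overcloud-cephstorage-0 to -2); it is an illustration of the per-node sequence above, not a supported automation, and each iteration should be allowed to finish and be verified before the next node is started.

#!/usr/bin/env bash
set -euo pipefail
source ~/stackrc

for NODE in overcloud-cephstorage-0 overcloud-cephstorage-1 overcloud-cephstorage-2; do
    # Switch the Ceph containers on this node to Podman-managed systemd units
    openstack overcloud external-upgrade run --stack overcloud \
        --tags ceph_systemd -e ceph_ansible_limit="${NODE}"

    # Leapp upgrade of the operating system (includes a reboot)
    openstack overcloud upgrade run --stack overcloud \
        --tags system_upgrade --limit "${NODE}"

    # Run the config-download playbooks for the composable services
    openstack overcloud upgrade run --stack overcloud --limit "${NODE}"
done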
[ "source ~/stackrc", "openstack overcloud upgrade prepare --stack STACK NAME --templates -e ENVIRONMENT FILE ... -e /home/stack/templates/upgrades-environment.yaml -e /home/stack/templates/rhsm.yaml -e /home/stack/containers-prepare-parameter.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml ...", "openstack overcloud external-upgrade run --stack STACK NAME --tags container_image_prepare", "source ~/stackrc", "tripleo-ansible-inventory --list [--stack <stack_name>] |jq .overcloud_Controller.hosts[0]", "openstack overcloud external-upgrade run [--stack <stack_name>] --tags ceph_systemd -e ceph_ansible_limit=overcloud-controller-0", "openstack overcloud upgrade run [--stack <stack_name>] --tags system_upgrade --limit overcloud-controller-0", "openstack overcloud external-upgrade run [--stack <stack_name>] --tags system_upgrade_transfer_data", "openstack overcloud upgrade run [--stack <stack_name>] --playbook upgrade_steps_playbook.yaml --tags nova_hybrid_state --limit all", "openstack overcloud upgrade run [--stack <stack_name>] --limit overcloud-controller-0", "openstack overcloud external-upgrade run [--stack <stack_name>] --tags ceph_systemd -e ceph_ansible_limit=overcloud-database-0", "openstack overcloud upgrade run [--stack <stack_name>] --tags system_upgrade --limit overcloud-database-0", "openstack overcloud upgrade run [--stack <stack_name>] --limit overcloud-controller-0,overcloud-database-0", "source ~/stackrc", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-networker-0", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-networker-0", "source ~/stackrc", "openstack overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephmon-0", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephmon-0", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-cephmon-0", "openstack overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephmon-1", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephmon-1", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-cephmon-1", "openstack overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephmon-2", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephmon-2", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-cephmon-2", "source ~/stackrc", "openstack overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephstorage-0", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephstorage-0", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-cephstorage-0", "openstack overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephstorage-1", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephstorage-1", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-cephstorage-1", "openstack overcloud external-upgrade run --stack STACK NAME --tags ceph_systemd -e ceph_ansible_limit=overcloud-cephstorage-2", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-cephstorage-2", "openstack overcloud 
upgrade run --stack STACK NAME --limit overcloud-cephstorage-2", "source ~/stackrc", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-compute-0", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-compute-0", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-compute-0,overcloud-compute-1,overcloud-compute-2", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-compute-0,overcloud-compute-1,overcloud-compute-2", "source ~/stackrc", "openstack overcloud upgrade converge --stack STACK NAME --templates -e ENVIRONMENT FILE ... -e /home/stack/templates/upgrades-environment.yaml -e /home/stack/templates/rhsm.yaml -e /home/stack/containers-prepare-parameter.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml ..." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/upgrading-a-split-controller-overcloud
Chapter 5. Configuring time-based account lockout policies
Chapter 5. Configuring time-based account lockout policies You can use the Account Policy plug-in to configure different time-based lockout policies, such as: Automatically disabling accounts a certain amount of time after the last successful login Automatically disabling accounts a certain amount of time after you created them Automatically disabling accounts a certain amount of time after password expiry Automatically disabling accounts on both account inactivity and password expiration 5.1. Automatically disabling accounts a certain amount of time after the last successful login Follow this procedure to configure a time-based lockout policy that inactivates users under the dc=example,dc=com entry who do not log in for more than 21 days. This uses the account inactivity feature to ensure, for example if an employee left the company and the administrator forgets to delete the account, that Directory Server inactivates the account after a certain amount of time. Procedure Enable the Account Policy plug-in: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin account-policy enable Configure the plug-in configuration entry: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin account-policy config-entry set " cn=config,cn=Account Policy Plugin,cn=plugins,cn=config " --always-record-login yes --state-attr lastLoginTime --alt-state-attr 1.1 --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit This command uses the following options: --always-record-login yes : Enables logging of the login time. This is required to use Class of Service (CoS) or roles with account policies, even if it does not have the acctPolicySubentry attribute set. --state-attr lastLoginTime : Configures that the Account Policy plug-in stores the last login time in the lastLoginTime attribute of users. --alt-state-attr 1.1 : Disables using an alternative attribute to check if the primary one does not exist. By default, Directory Server uses the createTimestamp attribute as alternative. However, this causes that Directory Server logs out existing users automatically if their accounts do not have the lastLoginTime attribute set and createTimestamp is older than the configured inactivity period. Disabling the alternative attribute causes that Directory Server automatically adds the lastLoginTime attribute to user entries when they log in the next time. --spec-attr acctPolicySubentry : Configures Directory Server to apply the policy to entries that have the acctPolicySubentry attribute set. You configure this attribute in the CoS entry. --limit-attr accountInactivityLimit : Configures that the accountInactivityLimit attribute in the account inactivation policy entry stores the inactivity time. Restart the instance: # dsctl instance_name restart Create the account inactivation policy entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=Account Inactivation Policy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy accountInactivityLimit: 1814400 cn: Account Inactivation Policy The value in the accountInactivityLimit attribute configures that Directory Server inactivates accounts 1814400 seconds (21 days) after the last log in.
Create the CoS template entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=TemplateCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: cosTemplate acctPolicySubentry: cn=Account Inactivation Policy,dc=example,dc=com This template entry references the account inactivation policy. Create the CoS definition entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=DefinitionCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectclass: cosSuperDefinition objectclass: cosPointerDefinition cosTemplateDn: cn=TemplateCoS,dc=example,dc=com cosAttribute: acctPolicySubentry default operational-default This definition entry references the CoS template entry and causes the acctPolicySubentry attribute to appear in each user entry with a value set to cn=Account Inactivation Policy,dc=example,dc=com . Verification Set the lastLoginTime attribute of a user to a value that is older than the inactivity time you configured: # ldapmodify -H ldap://server.example.com -x -D " cn=Directory Manager " -W dn: uid=example,ou=People,dc=example,dc=com changetype: modify replace: lastLoginTime lastLoginTime: 20210101000000Z Try to connect to the directory as this user: # ldapsearch -H ldap://server.example.com -x -D " uid=example,ou=People,dc=example,dc=com " -W -b " dc=example,dc=com " ldap_bind: Constraint violation (19) additional info: Account inactivity limit exceeded. Contact system administrator to reset. If Directory Server denies access and returns this error, account inactivity works. Additional resources Re-enabling accounts that reached the inactivity limit 5.2. Automatically disabling accounts a certain amount of time after you created them Follow this procedure to configure accounts in the dc=example,dc=com entry to expire 60 days after the administrator created them. Use the account expiration feature, for example, to ensure that accounts for external workers are locked a certain amount of time after they have been created. Procedure Enable the Account Policy plug-in: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin account-policy enable Configure the plug-in configuration entry: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin account-policy config-entry set " cn=config,cn=Account Policy Plugin,cn=plugins,cn=config " --always-record-login yes --state-attr createTimestamp --alt-state-attr 1.1 --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit This command uses the following options: --always-record-login yes : Enables logging of the login time. This is required to use Class of Service (CoS) or roles with account policies, even if the entry does not have the acctPolicySubentry attribute set. --state-attr createTimestamp : Configures the Account Policy plug-in to use the value of the createTimestamp attribute to calculate whether an account is expired. --alt-state-attr 1.1 : Disables the use of an alternative attribute to check if the primary one does not exist. --spec-attr acctPolicySubentry : Configures Directory Server to apply the policy to entries that have the acctPolicySubentry attribute set. You configure this attribute in the CoS entry. --limit-attr accountInactivityLimit : Specifies that the accountInactivityLimit attribute in the account expiration policy entry stores the maximum age.
Restart the instance: # dsctl instance_name restart Create the account expiration policy entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=Account Expiration Policy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy accountInactivityLimit: 5184000 cn: Account Expiration Policy The value in the accountInactivityLimit attribute causes accounts to expire 5184000 seconds (60 days) after they have been created. Create the CoS template entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=TemplateCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: cosTemplate acctPolicySubentry: cn=Account Expiration Policy,dc=example,dc=com This template entry references the account expiration policy. Create the CoS definition entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=DefinitionCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectclass: cosSuperDefinition objectclass: cosPointerDefinition cosTemplateDn: cn=TemplateCoS,dc=example,dc=com cosAttribute: acctPolicySubentry default operational-default This definition entry references the CoS template entry and causes the acctPolicySubentry attribute to appear in each user entry with a value set to cn=Account Expiration Policy,dc=example,dc=com . Verification Try to connect to the directory as a user stored in the dc=example,dc=com entry whose createTimestamp attribute is set to a value more than 60 days ago: # ldapsearch -H ldap://server.example.com -x -D " uid=example,dc=example,dc=com " -W -b " dc=example,dc=com " ldap_bind: Constraint violation (19) additional info: Account inactivity limit exceeded. Contact system administrator to reset. If Directory Server denies access and returns this error, account expiration works. Additional resources Re-enabling accounts that reached the inactivity limit 5.3. Automatically disabling accounts a certain amount of time after password expiry Follow this procedure to configure a time-based lockout policy that inactivates users under the dc=example,dc=com entry who do not change their password for more than 28 days. Prerequisites Users must have the passwordExpirationTime attribute set in their entry. Procedure Enable the password expiration feature: # dsconf -D " cn=Directory Manager " ldap://server.example.com config replace passwordExp=on Enable the Account Policy plug-in: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin account-policy enable Configure the plug-in configuration entry: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin account-policy config-entry set " cn=config,cn=Account Policy Plugin,cn=plugins,cn=config " --always-record-login yes --always-record-login-attr lastLoginTime --state-attr non_existent_attribute --alt-state-attr passwordExpirationTime --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit This command uses the following options: --always-record-login yes : Enables logging of the login time. This is required to use Class of Service (CoS) or roles with account policies, even if the entry does not have the acctPolicySubentry attribute set. --always-record-login-attr lastLoginTime : Configures the Account Policy plug-in to store the last login time in the lastLoginTime attribute of users.
--state-attr non_existent_attribute : Sets the primary time attribute used to evaluate an account policy to a non-existent dummy attribute name. --alt-state-attr passwordExpirationTime : Configures the plug-in to use the passwordExpirationTime attribute as the alternative attribute to check. --spec-attr acctPolicySubentry : Configures Directory Server to apply the policy to entries that have the acctPolicySubentry attribute set. You configure this attribute in the CoS entry. --limit-attr accountInactivityLimit : Specifies that the accountInactivityLimit attribute in the account policy entry stores the time after which accounts are inactivated following their last password change. Restart the instance: # dsctl instance_name restart Create the account inactivation policy entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=Account Inactivation Policy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy accountInactivityLimit: 2419200 cn: Account Inactivation Policy The value in the accountInactivityLimit attribute causes Directory Server to inactivate accounts 2419200 seconds (28 days) after the password was changed. Create the CoS template entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=TemplateCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: cosTemplate acctPolicySubentry: cn=Account Inactivation Policy,dc=example,dc=com This template entry references the account inactivation policy. Create the CoS definition entry: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: cn=DefinitionCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectclass: cosSuperDefinition objectclass: cosPointerDefinition cosTemplateDn: cn=TemplateCoS,dc=example,dc=com cosAttribute: acctPolicySubentry default operational-default This definition entry references the CoS template entry and causes the acctPolicySubentry attribute to appear in each user entry with a value set to cn=Account Inactivation Policy,dc=example,dc=com . Verification Set the passwordExpirationTime attribute of a user to a value that is older than the inactivity time you configured: # ldapmodify -H ldap://server.example.com -x -D " cn=Directory Manager " -W dn: uid=example,ou=People,dc=example,dc=com changetype: modify replace: passwordExpirationTime passwordExpirationTime: 20210101000000Z Try to connect to the directory as this user: # ldapsearch -H ldap://server.example.com -x -D " uid=example,ou=People,dc=example,dc=com " -W -b " dc=example,dc=com " ldap_bind: Constraint violation (19) additional info: Account inactivity limit exceeded. Contact system administrator to reset. If Directory Server denies access and returns this error, account inactivity works. Additional resources Re-enabling accounts that reached the inactivity limit 5.4. Automatically disabling accounts on both account inactivity and password expiration You can apply both account inactivity and password expiration when a user authenticates by using the checkAllStateAttrs setting. By default, when checkAllStateAttrs is not present in the plug-in configuration entry, or when you set this parameter to no , the plug-in checks for the state attribute lastLoginTime . If the attribute is not present in the entry, the plug-in checks the alternate state attribute.
You can set the main state attribute to a non-existent attribute and set the alternate state attribute to passwordExpirationTime when you want the plug-in to handle expiration based on the passwordExpirationTime attribute. When you enable this parameter, the plug-in checks the main state attribute and, if the account is valid, it then checks the alternate state attribute. This differs from the password policy's password expiration in that the Account Policy plug-in completely disables the account if the passwordExpirationTime value exceeds the inactivity limit, whereas with password policy expiration the user can still log in and change their password. The Account Policy plug-in completely blocks the user from doing anything, and an administrator must reset the account. Procedure Create the plug-in configuration entry and enable the setting: # dsconf -D "cn=Directory Manager" ldap://server.example.com plugin account-policy config-entry set "cn=config,cn=Account Policy Plugin,cn=plugins,cn=config" --always-record-login yes --state-attr lastLoginTime --alt-state-attr 1.1 --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit --check-all-state-attrs yes Restart the server to load the new plug-in configuration: # dsctl instance_name restart Warning The checkAllStateAttrs setting is designed to work only when the alternate state attribute is set to passwordExpirationTime . Setting it to createTimestamp can cause undesired results and entries might get locked out.
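The accountInactivityLimit values used throughout this chapter are plain numbers of seconds, for example 1814400 for 21 days, 5184000 for 60 days, and 2419200 for 28 days. As a minimal sketch, assuming a standard shell on the server, you can calculate the value for any number of days before writing it into a policy entry: # expr 21 \* 24 \* 60 \* 60 1814400 Replace 21 with the number of days that you want to use in your own policy.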
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin account-policy enable", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin account-policy config-entry set \" cn=config,cn=Account Policy Plugin,cn=plugins,cn=config \" --always-record-login yes --state-attr lastLoginTime --alt-state-attr 1.1 --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit", "dsctl instance_name restart", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=Account Inactivation Policy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy accountInactivityLimit: 1814400 cn: Account Inactivation Policy", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=TemplateCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: cosTemplate acctPolicySubentry: cn=Account Inactivation Policy,dc=example,dc=com", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=DefinitionCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectclass: cosSuperDefinition objectclass: cosPointerDefinition cosTemplateDn: cn=TemplateCoS,dc=example,dc=com cosAttribute: acctPolicySubentry default operational-default", "ldapmodify -H ldap://server.example.com -x -D \" cn=Directory Manager \" -W dn: uid=example,ou=People,dc=example,dc=com changetype: modify replace: lastLoginTime lastLoginTime: 20210101000000Z", "ldapsearch -H ldap://server.example.com -x -D \" uid=example,ou=People,dc=example,dc=com \" -W -b \" dc=example,dc=com \" ldap_bind: Constraint violation (19) additional info: Account inactivity limit exceeded. Contact system administrator to reset.", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin account-policy enable", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin account-policy config-entry set \" cn=config,cn=Account Policy Plugin,cn=plugins,cn=config \" --always-record-login yes --state-attr createTimestamp --alt-state-attr 1.1 --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit", "dsctl instance_name restart", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=Account Expiration Policy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy accountInactivityLimit: 5184000 cn: Account Expiration Policy", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=TemplateCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: cosTemplate acctPolicySubentry: cn=Account Expiration Policy,dc=example,dc=com", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=DefinitionCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectclass: cosSuperDefinition objectclass: cosPointerDefinition cosTemplateDn: cn=TemplateCoS,dc=example,dc=com cosAttribute: acctPolicySubentry default operational-default", "ldapsearch -H ldap://server.example.com -x -D \" uid=example,dc=example,dc=com \" -W -b \" dc=example,dc=com \" ldap_bind: Constraint violation (19) additional info: Account inactivity limit exceeded. 
Contact system administrator to reset.", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com config replace passwordExp=on", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin account-policy enable", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin account-policy config-entry set \" cn=config,cn=Account Policy Plugin,cn=plugins,cn=config \" --always-record-login yes --always-record-login-attr lastLoginTime --state-attr non_existent_attribute --alt-state-attr passwordExpirationTime --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit", "dsctl instance_name restart", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=Account Inactivation Policy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy accountInactivityLimit: 2419200 cn: Account Inactivation Policy", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=TemplateCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: cosTemplate acctPolicySubentry: cn=Account Inactivation Policy,dc=example,dc=com", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=DefinitionCoS,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectclass: cosSuperDefinition objectclass: cosPointerDefinition cosTemplateDn: cn=TemplateCoS,dc=example,dc=com cosAttribute: acctPolicySubentry default operational-default", "ldapmodify -H ldap://server.example.com -x -D \" cn=Directory Manager \" -W dn: uid=example,ou=People,dc=example,dc=com changetype: modify replace: passwordExpirationTime passwordExpirationTime: 20210101000000Z", "ldapsearch -H ldap://server.example.com -x -D \" uid=example,ou=People,dc=example,dc=com \" -W -b \" dc=example,dc=com \" ldap_bind: Constraint violation (19) additional info: Account inactivity limit exceeded. Contact system administrator to reset.", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin account-policy config-entry set \"cn=config,cn=Account Policy Plugin,cn=plugins,cn=config\" --always-record-login yes --state-attr lastLoginTime --alt-state-attr 1.1 --spec-attr acctPolicySubentry --limit-attr accountInactivityLimit --check-all-state-attrs yes", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_access_control/assembly_configuring-time-based-account-lockout-policies_managing-access-control
Using the AMQ C++ Client
Using the AMQ C++ Client Red Hat AMQ 2021.Q1 For Use with AMQ Clients 2.9
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_cpp_client/index
Chapter 20. JbodStorage schema reference
Chapter 20. JbodStorage schema reference Used in: KafkaClusterSpec , KafkaNodePoolSpec The type property is a discriminator that distinguishes use of the JbodStorage type from EphemeralStorage , PersistentClaimStorage . It must have the value jbod for the type JbodStorage . Property Property type Description type string Must be jbod . volumes EphemeralStorage , PersistentClaimStorage array List of volumes as Storage objects representing the JBOD disks array.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-jbodstorage-reference
Chapter 5. Configuring PCI passthrough
Chapter 5. Configuring PCI passthrough You can use PCI passthrough to attach a physical PCI device, such as a graphics card or a network device, to an instance. If you use PCI passthrough for a device, the instance reserves exclusive access to the device for performing tasks, and the device is not available to the host. Important Using PCI passthrough with routed provider networks The Compute service does not support single networks that span multiple provider networks. When a network contains multiple physical networks, the Compute service only uses the first physical network. Therefore, if you are using routed provider networks you must use the same physical_network name across all the Compute nodes. If you use routed provider networks with VLAN or flat networks, you must use the same physical_network name for all segments. You then create multiple segments for the network and map the segments to the appropriate subnets. To enable your cloud users to create instances with PCI devices attached, you must complete the following: Designate Compute nodes for PCI passthrough. Configure the Compute nodes for PCI passthrough that have the required PCI devices. Deploy the overcloud. Create a flavor for launching instances with PCI devices attached. Prerequisites The Compute nodes have the required PCI devices. 5.1. Designating Compute nodes for PCI passthrough To designate Compute nodes for instances with physical PCI devices attached, you must create a new role file to configure the PCI passthrough role, and configure the bare metal nodes with a PCI passthrough resource class to use to tag the Compute nodes for PCI passthrough. Note The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file named roles_data_pci_passthrough.yaml that includes the Controller , Compute , and ComputePCI roles, along with any other roles that you need for the overcloud: Open roles_data_pci_passthrough.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputePCI Role name name: Compute name: ComputePCI description Basic Compute Node role PCI Passthrough Compute Node role HostnameFormatDefault %stackname%-novacompute-%index% %stackname%-novacomputepci-%index% deprecated_nic_config_name compute.yaml compute-pci-passthrough.yaml Register the PCI passthrough Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide. Tag each bare metal node that you want to designate for PCI passthrough with a custom PCI passthrough resource class: Replace <node> with the ID of the bare metal node. 
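If you have several nodes to designate, you can apply the same resource class in a single loop instead of repeating the command for each node. The following sketch pipes the node UUIDs into the same openstack baremetal node set command; it assumes that every node returned by openstack baremetal node list should receive the baremetal.PCI-PASSTHROUGH resource class, so filter the list first if only some of your nodes have the required PCI devices: (undercloud)USD openstack baremetal node list -f value -c UUID | xargs -I {} openstack baremetal node set --resource-class baremetal.PCI-PASSTHROUGH {}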
Add the ComputePCI role to your node definition file, overcloud-baremetal-deploy.yaml , and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes: 1 You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Director Installation and Usage guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used. For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes . For an example node definition file, see Example node definition file . Run the provisioning command to provision the new nodes for your role: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : If you did not run the provisioning command with the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files: Replace <pci_passthrough_net_top> with the name of the file that contains the network topology of the ComputePCI role, for example, compute.yaml to use the default network topology. 5.2. Configuring a PCI passthrough Compute node To enable your cloud users to create instances with PCI devices attached, you must configure both the Compute nodes that have the PCI devices and the Controller nodes. Procedure Create an environment file to configure the Controller node on the overcloud for PCI passthrough, for example, pci_passthrough_controller.yaml . Add PciPassthroughFilter to the NovaSchedulerEnabledFilters parameter in pci_passthrough_controller.yaml : To specify the PCI alias for the devices on the Controller node, add the following configuration to pci_passthrough_controller.yaml : For more information about configuring the device_type field, see PCI passthrough device type field . Note If the nova-api service is running in a role different from the Controller role, replace ControllerExtraConfig with the user role in the format <Role>ExtraConfig . Optional: To set a default NUMA affinity policy for PCI passthrough devices, add numa_policy to the nova::pci::aliases: configuration from step 3: To configure the Compute node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthrough_compute.yaml . To specify the available PCIs for the devices on the Compute node, use the vendor_id and product_id options to add all matching PCI devices to the pool of PCI devices available for passthrough to instances. For example, to add all Intel(R) Ethernet Controller X710 devices to the pool of PCI devices available for passthrough to instances, add the following configuration to pci_passthrough_compute.yaml : For more information about how to configure NovaPCIPassthrough , see Guidelines for configuring NovaPCIPassthrough . 
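If you are not sure which vendor_id and product_id values to use in NovaPCIPassthrough, you can read them directly from the PCI bus on the Compute node. This is a minimal sketch that assumes the lspci utility from the pciutils package is available on the node and that the device description contains the string X710; adjust the grep pattern for your own hardware: lspci -nn | grep -i x710 With the -nn option, lspci prints the numeric IDs in square brackets at the end of each matching line in the form [vendor_id:product_id], for example [8086:1572].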
You must create a copy of the PCI alias on the Compute node for instance migration and resize operations. To specify the PCI alias for the devices on the PCI passthrough Compute node, add the following to pci_passthrough_compute.yaml : Note The Compute node aliases must be identical to the aliases on the Controller node. Therefore, if you added numa_policy to nova::pci::aliases in pci_passthrough_controller.yaml , then you must also add it to nova::pci::aliases in pci_passthrough_compute.yaml . To enable IOMMU in the server BIOS of the Compute nodes to support PCI passthrough, add the KernelArgs parameter to pci_passthrough_compute.yaml . For example, use the following KernelArgs settings to enable an Intel IOMMU: To enable an AMD IOMMU, set KernelArgs to "amd_iommu=on iommu=pt" . Note When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs . Add your custom environment files to the stack with your other environment files and deploy the overcloud: Create and configure the flavors that your cloud users can use to request the PCI devices. The following example requests two devices, each with a vendor ID of 8086 and a product ID of 1572 , using the alias defined in step 7: Optional: To override the default NUMA affinity policy for PCI passthrough devices, you can add the NUMA affinity policy property key to the flavor or the image: To override the default NUMA affinity policy by using the flavor, add the hw:pci_numa_affinity_policy property key: For more information about the valid values for hw:pci_numa_affinity_policy , see Flavor metadata . To override the default NUMA affinity policy by using the image, add the hw_pci_numa_affinity_policy property key: Note If you set the NUMA affinity policy on both the image and the flavor, then the property values must match. The flavor setting takes precedence over the image and default settings. Therefore, the configuration of the NUMA affinity policy on the image only takes effect if the property is not set on the flavor. Verification Create an instance with a PCI passthrough device: Log in to the instance as a cloud user. For more information, see Connecting to an instance . To verify that the PCI device is accessible from the instance, enter the following command from the instance: 5.3. PCI passthrough device type field The Compute service categorizes PCI devices into one of three types, depending on the capabilities the devices report. The following lists the valid values that you can set the device_type field to: type-PF The device supports SR-IOV and is the parent or root device. Specify this device type to pass through a device that supports SR-IOV in its entirety. type-VF The device is a child device of a device that supports SR-IOV. type-PCI The device does not support SR-IOV. This is the default device type if the device_type field is not set. Note You must configure the Compute and Controller nodes with the same device_type . 5.4. Guidelines for configuring NovaPCIPassthrough Do not use the devname parameter when configuring PCI passthrough, as the device name of a NIC can change. Instead, use vendor_id and product_id because they are more stable, or use the address of the NIC.
To pass through a specific Physical Function (PF), you can use the address parameter because the PCI address is unique to each device. Alternatively, you can use the product_id parameter to pass through a PF, but you must also specify the address of the PF if you have multiple PFs of the same type. To pass through all the Virtual Functions (VFs) specify only the product_id and vendor_id of the VFs that you want to use for PCI passthrough. You must also specify the address of the VF if you are using SRIOV for NIC partitioning and you are running OVS on a VF. To pass through only the VFs for a PF but not the PF itself, you can use the address parameter to specify the PCI address of the PF and product_id to specify the product ID of the VF. Configuring the address parameter The address parameter specifies the PCI address of the device. You can set the value of the address parameter using either a String or a dict mapping. String format If you specify the address using a string you can include wildcards (*), as shown in the following example: Dictionary format If you specify the address using the dictionary format you can include regular expression syntax, as shown in the following example: Note The Compute service restricts the configuration of address fields to the following maximum values: domain - 0xFFFF bus - 0xFF slot - 0x1F function - 0x7 The Compute service supports PCI devices with a 16-bit address domain. The Compute service ignores PCI devices with a 32-bit address domain.
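When you write address entries in either format, it can help to look at the full PCI addresses that the kernel reports for the devices you want to pass through. As a small sketch, assuming the pciutils package is installed on the Compute node, the -D option of lspci prints every device address in the domain:bus:slot.function form that the fields above map to: lspci -D For example, a device listed as 0000:02:01.2 corresponds to domain 0000, bus 02, slot 01, and function 2.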
[ "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_pci_passthrough.yaml Compute:ComputePCI Compute Controller", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack baremetal node set --resource-class baremetal.PCI-PASSTHROUGH <node>", "- name: Controller count: 3 - name: Compute count: 3 - name: ComputePCI count: 1 defaults: resource_class: baremetal.PCI-PASSTHROUGH network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1", "(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml", "(undercloud)USD watch openstack baremetal node list", "parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputePCINetworkConfigTemplate: /home/stack/templates/nic-configs/<pci_passthrough_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2", "parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter", "parameter_defaults: ControllerExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\"", "parameter_defaults: ControllerExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\" numa_policy: \"preferred\"", "parameter_defaults: ComputePCIParameters: NovaPCIPassthrough: - vendor_id: \"8086\" product_id: \"1572\"", "parameter_defaults: ComputePCIExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\"", "parameter_defaults: ComputePCIParameters: KernelArgs: \"intel_iommu=on iommu=pt\"", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_pci_passthrough.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/pci_passthrough_controller.yaml -e /home/stack/templates/pci_passthrough_compute.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml", "(overcloud)USD openstack flavor set --property \"pci_passthrough:alias\"=\"a1:2\" device_passthrough", "(overcloud)USD openstack flavor set --property \"hw:pci_numa_affinity_policy\"=\"required\" device_passthrough", "(overcloud)USD openstack image set --property hw_pci_numa_affinity_policy=required device_passthrough_image", "openstack server create --flavor device_passthrough --image <image> --wait test-pci", "lspci -nn | grep <device_name>", "NovaPCIPassthrough: - address: \"*:0a:00.*\" physical_network: physnet1", "NovaPCIPassthrough: - address: domain: \".*\" bus: \"02\" slot: \"01\" function: \"[0-2]\" physical_network: net1" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-pci-passthrough_pci-passthrough
7.187. policycoreutils
7.187. policycoreutils 7.187.1. RHBA-2013:0396 - policycoreutils bug fix and enhancement update Updated policycoreutils packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The policycoreutils packages contain the policy core utilities that are required for basic operation of SELinux. These utilities include load_policy to load policies, setfiles to label file systems, newrole to switch roles, and run_init to run /etc/init.d scripts in the proper context. Bug Fixes BZ# 816460 , BZ# 885527 Previously, when the policycoreutils-gui utility was used to add an SELinux policy for a socket file, policycoreutils-gui failed with a traceback. This bug has been fixed, policycoreutils-gui now succeeds, and the SELinux policy is now added in this scenario. BZ#824779 Due to a bug in the code, when the restorecon utility failed, it returned the success exit code. This bug has been fixed and restorecon now returns appropriate exit codes. BZ# 843727 When multiple type accesses from the same role occurred, the audit2allow utility produced policy files that could not be parsed by the checkmodule compiler. With this update, audit2allow produces correct policy files which can be compiled by checkmodule. BZ# 876971 The restorecond init script allows the use of the "reload" operation. Previously, the usage message produced by restorecond did not mention this operation. The operation has been added to the usage message, which is now complete. BZ#882862 Prior to this update, the audit2allow utility produced confusing output when one of the several processed AVCs could be allowed by a boolean, as it was not clear which AVC the message was related to. The layout of the output has been corrected and the audit2allow output no longer causes confusion. BZ# 893065 Due to a regression, the vdsm package failed to be installed on Red Hat Enterprise Linux 6.4 if SELinux was disabled. A patch which enables the vdsm installation has been provided. Enhancements BZ#834160 A new function has been implemented in the semanage utility. Now, when a specified file context semanage command is wrong, the user receives an appropriate error message. BZ#851479 With this update, the restorecon utility now returns a warning message for paths for which a default SELinux security context is not defined in the policy. Users of policycoreutils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/policycoreutils
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/replacing_nodes/providing-feedback-on-red-hat-documentation_rhodf
Chapter 2. Opting out of Telemetry
Chapter 2. Opting out of Telemetry The decision to opt out of telemetry should be based on your specific needs and requirements, as well as any applicable regulations or policies that you need to comply with. 2.1. Consequences of disabling Telemetry In Red Hat Advanced Cluster Security for Kubernetes (RHACS) version 4.0, you can opt out of Telemetry. However, telemetry is embedded as a core component, so opting out is strongly discouraged. Opting out of telemetry limits the ability of Red Hat to understand how everyone uses the product and which areas to prioritize for improvements. 2.2. Disabling Telemetry If you have configured Telemetry by setting the key in your environment, you can disable Telemetry data collection from the Red Hat Advanced Cluster Security for Kubernetes (RHACS) user interface (UI). Procedure In the RHACS portal, go to Platform Configuration > System Configuration . In the System Configuration header, click Edit . Scroll down and ensure that Online Telemetry Data Collection is set to Disabled.
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/telemetry/opting-out-of-telemetry
Part II. Notable Bug Fixes
Part II. Notable Bug Fixes This part describes bugs fixed in Red Hat Enterprise Linux 7.4 that have a significant impact on users.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug-fixes
OperatorHub APIs
OperatorHub APIs OpenShift Container Platform 4.13 Reference guide for OperatorHub APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operatorhub_apis/index
Release notes for Eclipse Temurin 21.0.5
Release notes for Eclipse Temurin 21.0.5 Red Hat build of OpenJDK 21 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.5/index
Chapter 6. Installing RHEL AI on Azure
Chapter 6. Installing RHEL AI on Azure There are multiple ways you can install and deploy Red Hat Enterprise Linux AI on Azure. You can purchase RHEL AI from the Azure marketplace . You can download the RHEL AI VHD from the RHEL AI download page and convert it to an Azure image. To install and deploy Red Hat Enterprise Linux AI on Azure using the VHD, you must first convert the RHEL AI image into an Azure image. You can then launch an instance using the Azure image and deploy RHEL AI on an Azure machine. 6.1. Converting the RHEL AI image into an Azure image To create a bootable image on Azure, you must configure your Azure account, create an Azure Storage Container, and create an Azure image using the RHEL AI VHD image. Prerequisites You installed the Azure CLI on your machine. For more information on installing the Azure CLI, see Install the Azure CLI on Linux . You installed AzCopy on your machine. For more information on installing AzCopy, see Install AzCopy on Linux . Procedure Log in to Azure by running the following command: USD az login Example output of the login USD az login A web browser has been opened at https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`. [ { "cloudName": "AzureCloud", "homeTenantId": "c7b976df-89ce-42ec-b3b2-a6b35fd9c0be", "id": "79d7df51-39ec-48b9-a15e-dcf59043c84e", "isDefault": true, "managedByTenants": [], "name": "Team Name", "state": "Enabled", "tenantId": "0a873aea-428f-47bd-9120-73ce0c5cc1da", "user": { "name": "[email protected]", "type": "user" } } ] Log in with the azcopy tool using the following commands: USD keyctl new_session USD azcopy login You need to set up various Azure configurations and create your Azure Storage Container before creating the Azure image. Create an environment variable defining the location of your instance with the following command: USD az_location=eastus Create a resource group and save the name in an environment variable named az_resource_group . The following example creates a resource group named Default in the location eastus . (You can omit this step if you want to use an already existing resource group). USD az_resource_group=Default USD az group create --name USD{az_resource_group} --location USD{az_location} Create an Azure storage account and save the name in an environment variable named az_storage_account by running the following commands: USD az_storage_account=THE_NAME_OF_YOUR_STORAGE_ACCOUNT USD az storage account create \ --name USD{az_storage_account} \ --resource-group USD{az_resource_group} \ --location USD{az_location} \ --sku Standard_LRS Create your Azure Storage Container and save its name in the environment variable az_storage_container with the following commands: USD az_storage_container=NAME_OF_MY_BUCKET USD az storage container create \ --name USD{az_storage_container} \ --account-name USD{az_storage_account} \ --public-access off You can get your Subscription ID from the Azure account list by running the following command: USD az account list --output table Create a variable named az_subscription_id with your Subscription ID . USD az_subscription_id=46c08fb3-83c5-4b59-8372-bf9caf15a681 Grant the user write permission to the storage container so that azcopy can upload the image. This example grants permission to the user [email protected] .
USD az role assignment create \ --assignee [email protected] \ --role "Storage Blob Data Contributor" \ --scope /subscriptions/USD{az_subscription_id}/resourceGroups/USD{az_resource_group}/providers/Microsoft.Storage/storageAccounts/USD{az_storage_account}/blobServices/default/containers/USD{az_storage_container} Now that your Azure storage container is set up, you need to download the Azure VHD image from the Red Hat Enterprise Linux AI download page . Set the name that you want to use for the RHEL AI Azure image. USD image_name=rhel-ai-1.4 In the following commands, the vhd_file variable must contain the path to the VHD file that you downloaded. Upload the VHD file to the Azure Storage Container by running the following command: USD az_vhd_url="https://USD{az_storage_account}.blob.core.windows.net/USD{az_storage_container}/USD(basename USD{vhd_file})" USD azcopy copy "USDvhd_file" "USDaz_vhd_url" Create an Azure image from the VHD file you just uploaded with the following command: USD az image create --resource-group USDaz_resource_group \ --name "USDimage_name" \ --source "USD{az_vhd_url}" \ --location USD{az_location} \ --os-type Linux \ --hyper-v-generation V2 6.2. Deploying your instance on Azure using the CLI You can launch an instance with your new RHEL AI Azure image from the Azure web console or the CLI. You can use whichever method of deployment you want to launch your instance. The following procedure shows how you can use the CLI to launch an Azure instance with the custom Azure image. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI Azure image. For more information, see "Converting the RHEL AI image into an Azure image". You installed the Azure CLI on your machine. For more information, see Install the Azure CLI on Linux . Procedure Log in to your Azure account by running the following command: USD az login You need to select the instance profile that you want to use for the deployment. List all the profiles in the desired region by running the following command: USD az vm list-sizes --location <region> --output table Make a note of your preferred instance profile; you will need it for your instance deployment. You can now start creating your Azure instance. Populate environment variables for when you create the instance. name=my-rhelai-instance az_location=eastus az_resource_group=my_resource_group az_admin_username=azureuser az_vm_size=Standard_ND96isr_H100_v5 az_image=my-custom-rhelai-image sshpubkey=USDHOME/.ssh/id_rsa.pub disk_size=1024 You can launch your instance by running the following command: USD az vm create \ --resource-group USDaz_resource_group \ --name USD{name} \ --image USD{az_image} \ --size USD{az_vm_size} \ --location USD{az_location} \ --admin-username USD{az_admin_username} \ --ssh-key-values @USDsshpubkey \ --authentication-type ssh \ --nic-delete-option Delete \ --accelerated-networking true \ --os-disk-size-gb 1024 \ --os-disk-name USD{name}-USD{az_location} Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit.
Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train
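Before you can run the ilab command, you need a shell on the new instance. The following is a minimal sketch that assumes the example values used above (resource group my_resource_group, instance name my-rhelai-instance, admin user azureuser) and that the instance received a public IP address; it looks up the public IP with the Azure CLI and then connects over SSH with the key pair that you passed to az vm create: USD az vm show -d --resource-group my_resource_group --name my-rhelai-instance --query publicIps -o tsv USD ssh azureuser@<public_ip_from_previous_command>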
[ "az login", "az login A web browser has been opened at https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`. [ { \"cloudName\": \"AzureCloud\", \"homeTenantId\": \"c7b976df-89ce-42ec-b3b2-a6b35fd9c0be\", \"id\": \"79d7df51-39ec-48b9-a15e-dcf59043c84e\", \"isDefault\": true, \"managedByTenants\": [], \"name\": \"Team Name\", \"state\": \"Enabled\", \"tenantId\": \"0a873aea-428f-47bd-9120-73ce0c5cc1da\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "keyctl new_session azcopy login", "az_location=eastus", "az_resource_group=Default az group create --name USD{az_resource_group} --location USD{az_location}", "az_storage_account=THE_NAME_OF_YOUR_STORAGE_ACCOUNT", "az storage account create --name USD{az_storage_account} --resource-group USD{az_resource_group} --location USD{az_location} --sku Standard_LRS", "az_storage_container=NAME_OF_MY_BUCKET az storage container create --name USD{az_storage_container} --account-name USD{az_storage_account} --public-access off", "az account list --output table", "az_subscription_id=46c08fb3-83c5-4b59-8372-bf9caf15a681", "az role assignment create --assignee [email protected] --role \"Storage Blob Data Contributor\" --scope /subscriptions/USD{az_subscription_id}/resourceGroups/USD{az_resource_group}/providers/Microsoft.Storage/storageAccounts/USD{az_storage_account}/blobServices/default/containers/USD{az_storage_container}", "image_name=rhel-ai-1.4", "az_vhd_url=\"https://USD{az_storage_account}.blob.core.windows.net/USD{az_storage_container}/USD(basename USD{vhd_file})\" azcopy copy \"USDvhd_file\" \"USDaz_vhd_url\"", "az image create --resource-group USDaz_resource_group --name \"USDimage_name\" --source \"USD{az_vhd_url}\" --location USD{az_location} --os-type Linux --hyper-v-generation V2", "az login", "az vm list-sizes --location <region> --output table", "name=my-rhelai-instance az_location=eastus az_resource_group=my_resource_group az_admin_username=azureuser az_vm_size=Standard_ND96isr_H100_v5 az_image=my-custom-rhelai-image sshpubkey=USDHOME/.ssh/id_rsa.pub disk_size=1024", "az vm create --resource-group USDaz_resource_group --name USD{name} --image USD{az_image} --size USD{az_vm_size} --location USD{az_location} --admin-username USD{az_admin_username} --ssh-key-values @USDsshpubkey --authentication-type ssh --nic-delete-option Delete --accelerated-networking true --os-disk-size-gb 1024 --os-disk-name USD{name}-USD{az_location}", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/installing/installing_azure
Chapter 4. Exporting vulnerability data as JSON, CSV, or PDF file
Chapter 4. Exporting vulnerability data as JSON, CSV, or PDF file The vulnerability service enables you to export data for CVEs on systems in your RHEL infrastructure. After applying filters in the vulnerability service to view a specific set of CVEs or systems, you can export data based on those criteria. These reports are accessible through the Red Hat Insights for Red Hat Enterprise Linux application and can be exported and downloaded as .csv, .json, or PDF files. 4.1. Exporting CVE data from the vulnerability service Perform the following steps to export select data from the vulnerability service. Procedure Navigate to the Security > Vulnerability > CVEs page and log in if necessary. Apply filters and use the sorting functionality at the top of each column to locate specific CVEs. Above the list of CVEs and to the right of the Filters menu, click the Export icon, and select Export to JSON , Export to CSV , or Export as PDF based on your download preferences. Select a download location and click Save .
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_vulnerability_service_reports/vuln-export-data
Chapter 284. RMI Component
Chapter 284. RMI Component Available as of Camel version 1.0 The rmi: component binds Exchanges to the RMI protocol (JRMP). Since this binding is just using RMI, normal RMI rules still apply regarding what methods can be invoked. This component supports only Exchanges that carry a method invocation from an interface that extends the Remote interface. All parameters in the method should be either Serializable or Remote objects. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rmi</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 284.1. URI format rmi://rmi-registry-host:rmi-registry-port/registry-path[?options] For example: rmi://localhost:1099/path/to/service You can append query options to the URI in the following format, ?option=value&option=value&... 284.2. Options The RMI component has no options. The RMI endpoint is configured using URI syntax: with the following path and query parameters: 284.2.1. Path Parameters (3 parameters): Name Description Default Type hostname Hostname of RMI server localhost String name Required Name to use when binding to RMI server String port Port number of RMI server 1099 int 284.2.2. Query Parameters (6 parameters): Name Description Default Type method (common) You can set the name of the method to invoke. String remoteInterfaces (common) To specify the remote interfaces. List bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 284.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.rmi.enabled Enable rmi component true Boolean camel.component.rmi.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 284.4. Using To call out to an existing RMI service registered in an RMI registry, create a route similar to the following: from("pojo:foo").to("rmi://localhost:1099/foo"); To bind an existing Camel processor or service in an RMI registry, define an RMI endpoint as follows: RmiEndpoint endpoint= (RmiEndpoint) endpoint("rmi://localhost:1099/bar"); endpoint.setRemoteInterfaces(ISay.class); from(endpoint).to("pojo:bar"); Note that when binding an RMI consumer endpoint, you must specify the Remote interfaces exposed.
In XML DSL you can do as follows from Camel 2.7 onwards: <camel:route> <from uri="rmi://localhost:37541/helloServiceBean?remoteInterfaces=org.apache.camel.example.osgi.HelloService"/> <to uri="bean:helloServiceBean"/> </camel:route> 284.5. See Also Configuring Camel Component Endpoint Getting Started
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rmi</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "rmi://rmi-regisitry-host:rmi-registry-port/registry-path[?options]", "rmi://localhost:1099/path/to/service", "rmi:hostname:port/name", "from(\"pojo:foo\").to(\"rmi://localhost:1099/foo\");", "RmiEndpoint endpoint= (RmiEndpoint) endpoint(\"rmi://localhost:1099/bar\"); endpoint.setRemoteInterfaces(ISay.class); from(endpoint).to(\"pojo:bar\");", "<camel:route> <from uri=\"rmi://localhost:37541/helloServiceBean?remoteInterfaces=org.apache.camel.example.osgi.HelloService\"/> <to uri=\"bean:helloServiceBean\"/> </camel:route>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/rmi-component
Chapter 12. bgp
Chapter 12. bgp This chapter describes the commands under the bgp command. 12.1. bgp dragent add speaker Add a BGP speaker to a dynamic routing agent Usage: Table 12.1. Positional Arguments Value Summary <agent-id> Id of the dynamic routing agent <bgp-speaker> Id or name of the bgp speaker Table 12.2. Optional Arguments Value Summary -h, --help Show this help message and exit 12.2. bgp dragent remove speaker Removes a BGP speaker from a dynamic routing agent Usage: Table 12.3. Positional Arguments Value Summary <agent-id> Id of the dynamic routing agent <bgp-speaker> Id or name of the bgp speaker Table 12.4. Optional Arguments Value Summary -h, --help Show this help message and exit 12.3. bgp peer create Create a BGP peer Usage: Table 12.5. Positional Arguments Value Summary <name> Name of the bgp peer to create Table 12.6. Optional Arguments Value Summary -h, --help Show this help message and exit --peer-ip <peer-ip-address> Peer ip address --remote-as <peer-remote-as> Peer as number. (integer in [1, 65535] is allowed) --auth-type <peer-auth-type> Authentication algorithm. supported algorithms: none (default), md5 --password <auth-password> Authentication password --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 12.7. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 12.8. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 12.9. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 12.10. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.4. bgp peer delete Delete a BGP peer Usage: Table 12.11. Positional Arguments Value Summary <bgp-peer> Bgp peer to delete (name or id) Table 12.12. Optional Arguments Value Summary -h, --help Show this help message and exit 12.5. bgp peer list List BGP peers Usage: Table 12.13. Optional Arguments Value Summary -h, --help Show this help message and exit Table 12.14. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 12.15. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 12.16. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 12.17. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.6. bgp peer set Update a BGP peer Usage: Table 12.18. Positional Arguments Value Summary <bgp-peer> Bgp peer to update (name or id) Table 12.19. Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME Updated name of the bgp peer --password <auth-password> Updated authentication password 12.7. bgp peer show Show information for a BGP peer Usage: Table 12.20. Positional Arguments Value Summary <bgp-peer> Bgp peer to display (name or id) Table 12.21. Optional Arguments Value Summary -h, --help Show this help message and exit Table 12.22. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 12.23. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 12.24. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 12.25. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.8. bgp speaker add network Add a network to a BGP speaker Usage: Table 12.26. Positional Arguments Value Summary <bgp-speaker> Bgp speaker (name or id) <network> Network to add (name or id) Table 12.27. Optional Arguments Value Summary -h, --help Show this help message and exit 12.9. bgp speaker add peer Add a peer to a BGP speaker Usage: Table 12.28. Positional Arguments Value Summary <bgp-speaker> Bgp speaker (name or id) <bgp-peer> Bgp peer to add (name or id) Table 12.29. Optional Arguments Value Summary -h, --help Show this help message and exit 12.10. bgp speaker create Create a BGP speaker Usage: Table 12.30. Positional Arguments Value Summary <name> Name of the bgp speaker to create Table 12.31. Optional Arguments Value Summary -h, --help Show this help message and exit --local-as <local-as> Local as number. (integer in [1, 65535] is allowed.) --ip-version {4,6} Ip version for the bgp speaker (default is 4) --advertise-floating-ip-host-routes Enable the advertisement of floating ip host routes by the BGP speaker. (default) --no-advertise-floating-ip-host-routes Disable the advertisement of floating ip host routes by the BGP speaker. --advertise-tenant-networks Enable the advertisement of tenant network routes by the BGP speaker. (default) --no-advertise-tenant-networks Disable the advertisement of tenant network routes by the BGP speaker. --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 12.32. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 12.33. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 12.34. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 12.35. 
Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.11. bgp speaker delete Delete a BGP speaker Usage: Table 12.36. Positional Arguments Value Summary <bgp-speaker> Bgp speaker to delete (name or id) Table 12.37. Optional Arguments Value Summary -h, --help Show this help message and exit 12.12. bgp speaker list advertised routes List routes advertised Usage: Table 12.38. Positional Arguments Value Summary <bgp-speaker> Bgp speaker (name or id) Table 12.39. Optional Arguments Value Summary -h, --help Show this help message and exit Table 12.40. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 12.41. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 12.42. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 12.43. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.13. bgp speaker list List BGP speakers Usage: Table 12.44. Optional Arguments Value Summary -h, --help Show this help message and exit --agent <agent-id> List bgp speakers hosted by an agent (id only) Table 12.45. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 12.46. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 12.47. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 12.48. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.14. bgp speaker remove network Remove a network from a BGP speaker Usage: Table 12.49. Positional Arguments Value Summary <bgp-speaker> Bgp speaker (name or id) <network> Network to remove (name or id) Table 12.50. Optional Arguments Value Summary -h, --help Show this help message and exit 12.15. bgp speaker remove peer Remove a peer from a BGP speaker Usage: Table 12.51. 
Positional Arguments Value Summary <bgp-speaker> Bgp speaker (name or id) <bgp-peer> Bgp peer to remove (name or id) Table 12.52. Optional Arguments Value Summary -h, --help Show this help message and exit 12.16. bgp speaker set Set BGP speaker properties Usage: Table 12.53. Positional Arguments Value Summary <bgp-speaker> Bgp speaker to update (name or id) Table 12.54. Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME Name of the bgp speaker to update --advertise-floating-ip-host-routes Enable the advertisement of floating ip host routes by the BGP speaker. (default) --no-advertise-floating-ip-host-routes Disable the advertisement of floating ip host routes by the BGP speaker. --advertise-tenant-networks Enable the advertisement of tenant network routes by the BGP speaker. (default) --no-advertise-tenant-networks Disable the advertisement of tenant network routes by the BGP speaker. 12.17. bgp speaker show dragents List dynamic routing agents hosting a BGP speaker Usage: Table 12.55. Positional Arguments Value Summary <bgp-speaker> Id or name of the bgp speaker Table 12.56. Optional Arguments Value Summary -h, --help Show this help message and exit Table 12.57. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 12.58. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 12.59. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 12.60. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.18. bgp speaker show Show a BGP speaker Usage: Table 12.61. Positional Arguments Value Summary <bgp-speaker> Bgp speaker to display (name or id) Table 12.62. Optional Arguments Value Summary -h, --help Show this help message and exit Table 12.63. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 12.64. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 12.65. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 12.66. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack bgp dragent add speaker [-h] <agent-id> <bgp-speaker>", "openstack bgp dragent remove speaker [-h] <agent-id> <bgp-speaker>", "openstack bgp peer create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --peer-ip <peer-ip-address> --remote-as <peer-remote-as> [--auth-type <peer-auth-type>] [--password <auth-password>] [--project <project>] [--project-domain <project-domain>] <name>", "openstack bgp peer delete [-h] <bgp-peer>", "openstack bgp peer list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack bgp peer set [-h] [--name NAME] [--password <auth-password>] <bgp-peer>", "openstack bgp peer show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <bgp-peer>", "openstack bgp speaker add network [-h] <bgp-speaker> <network>", "openstack bgp speaker add peer [-h] <bgp-speaker> <bgp-peer>", "openstack bgp speaker create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --local-as <local-as> [--ip-version {4,6}] [--advertise-floating-ip-host-routes] [--no-advertise-floating-ip-host-routes] [--advertise-tenant-networks] [--no-advertise-tenant-networks] [--project <project>] [--project-domain <project-domain>] <name>", "openstack bgp speaker delete [-h] <bgp-speaker>", "openstack bgp speaker list advertised routes [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] <bgp-speaker>", "openstack bgp speaker list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--agent <agent-id>]", "openstack bgp speaker remove network [-h] <bgp-speaker> <network>", "openstack bgp speaker remove peer [-h] <bgp-speaker> <bgp-peer>", "openstack bgp speaker set [-h] [--name NAME] [--advertise-floating-ip-host-routes] [--no-advertise-floating-ip-host-routes] [--advertise-tenant-networks] [--no-advertise-tenant-networks] <bgp-speaker>", "openstack bgp speaker show dragents [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] <bgp-speaker>", "openstack bgp speaker show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <bgp-speaker>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/bgp
Chapter 13. Volumes
Chapter 13. Volumes 13.1. Creating Volumes This section shows how to create disk volumes inside a block-based storage pool. In the example below, the virsh vol-create-as command creates a storage volume of a specific size in GB within the guest_images_disk storage pool. Because the command is repeated once for each volume needed, three volumes are created as shown in the example.
[ "# virsh vol-create-as guest_images_disk volume1 8 G Vol volume1 created # virsh vol-create-as guest_images_disk volume2 8 G Vol volume2 created # virsh vol-create-as guest_images_disk volume3 8 G Vol volume3 created # virsh vol-list guest_images_disk Name Path ----------------------------------------- volume1 /dev/sdb1 volume2 /dev/sdb2 volume3 /dev/sdb3 # parted -s /dev/sdb print Model: ATA ST3500418AS (scsi) Disk /dev/sdb: 500GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 2 17.4kB 8590MB 8590MB primary 3 8590MB 17.2GB 8590MB primary 1 21.5GB 30.1GB 8590MB primary" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-storage_volumes
Windows Integration Guide
Windows Integration Guide Red Hat Enterprise Linux 7 Integrating Linux systems with Active Directory environments Florian Delehaye Red Hat Customer Content Services [email protected] Marc Muehlfeld Red Hat Customer Content Services Filip Hanzelka Red Hat Customer Content Services Lucie Manaskova Red Hat Customer Content Services Aneta Steflova Petrova Red Hat Customer Content Services Tomas Capek Red Hat Customer Content Services Ella Deon Ballard Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/index
Chapter 4. Troubleshooting
Chapter 4. Troubleshooting Use this information to diagnose and resolve issues during backup and recovery. 4.1. Automation controller custom resource has the same name as an existing deployment The name specified for the new AutomationController custom resource must not match an existing deployment, or the recovery process will fail. If your AutomationController custom resource matches an existing deployment, perform the following steps to resolve the issue. Procedure Delete the existing AutomationController and the associated postgres PVC: oc delete automationcontroller <YOUR_DEPLOYMENT_NAME> -n <YOUR_NAMESPACE> oc delete pvc postgres-13-<YOUR_DEPLOYMENT_NAME>-13-0 -n <YOUR_NAMESPACE> Use an AutomationControllerRestore resource with the same deployment_name in it: oc apply -f restore.yaml
[ "delete automationcontroller <YOUR_DEPLOYMENT_NAME> -n <YOUR_NAMESPACE> delete pvc postgres-13-<YOUR_DEPLOYMENT_NAME>-13-0 -n <YOUR_NAMESPACE>", "apply -f restore.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_operator_backup_and_recovery_guide/aap-troubleshoot-backup-recover
Chapter 33. Uninstalling Streams for Apache Kafka
Chapter 33. Uninstalling Streams for Apache Kafka You can uninstall Streams for Apache Kafka on OpenShift 4.14 and later from the OperatorHub using the OpenShift Container Platform web console or CLI. Use the same approach you used to install Streams for Apache Kafka. When you uninstall Streams for Apache Kafka, you will need to identify resources created specifically for a deployment and referenced from the Streams for Apache Kafka resource. Such resources include: Secrets (Custom CAs and certificates, Kafka Connect secrets, and other Kafka secrets) Logging ConfigMaps (of type external ) These are resources referenced by Kafka , KafkaConnect , KafkaMirrorMaker , or KafkaBridge configuration. Warning Deleting CRDs and related custom resources When a CustomResourceDefinition is deleted, custom resources of that type are also deleted. This includes the Kafka , KafkaConnect , KafkaMirrorMaker , and KafkaBridge resources managed by Streams for Apache Kafka, as well as the StrimziPodSet resource Streams for Apache Kafka uses to manage the pods of the Kafka components. In addition, any OpenShift resources created by these custom resources, such as Deployment , Pod , Service , and ConfigMap resources, are also removed. Always exercise caution when deleting these resources to avoid unintended data loss. 33.1. Uninstalling Streams for Apache Kafka from the OperatorHub This procedure describes how to uninstall Streams for Apache Kafka from the OperatorHub and remove resources related to the deployment. You can perform the steps from the console or use alternative CLI commands. Prerequisites Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions. You have identified the resources to be deleted. You can use the following oc CLI command to find resources and also verify that they have been removed when you have uninstalled Streams for Apache Kafka. Command to find resources related to a Streams for Apache Kafka deployment oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource you are checking, such as secret or configmap . Procedure Navigate in the OpenShift web console to Operators > Installed Operators . For the installed Streams for Apache Kafka operator, select the options icon (three vertical dots) and click Uninstall Operator . The operator is removed from Installed Operators . Navigate to Home > Projects and select the project where you installed Streams for Apache Kafka and the Kafka components. Click the options under Inventory to delete related resources. Resources include the following: Deployments StatefulSets Pods Services ConfigMaps Secrets Tip Use the search to find related resources that begin with the name of the Kafka cluster. You can also find the resources under Workloads . Alternative CLI commands You can use CLI commands to uninstall Streams for Apache Kafka from the OperatorHub. Delete the Streams for Apache Kafka subscription. oc delete subscription amq-streams -n openshift-operators Delete the cluster service version (CSV). oc delete csv amqstreams. <version> -n openshift-operators Remove related CRDs. oc get crd -l app=strimzi -o name | xargs oc delete 33.2. Uninstalling Streams for Apache Kafka using the CLI This procedure describes how to use the oc command-line tool to uninstall Streams for Apache Kafka and remove resources related to the deployment. 
Prerequisites Access to an OpenShift cluster using an account with cluster-admin or strimzi-admin permissions. You have identified the resources to be deleted. You can use the following oc CLI command to find resources and also verify that they have been removed when you have uninstalled Streams for Apache Kafka. Command to find resources related to a Streams for Apache Kafka deployment oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource you are checking, such as secret or configmap . Procedure Delete the Cluster Operator Deployment , related CustomResourceDefinitions , and RBAC resources. Specify the installation files used to deploy the Cluster Operator. oc delete -f install/cluster-operator Delete the resources you identified in the prerequisites. oc delete <resource_type> <resource_name> -n <namespace> Replace <resource_type> with the type of resource you are deleting and <resource_name> with the name of the resource. Example to delete a secret oc delete secret my-cluster-clients-ca-cert -n my-project
[ "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete subscription amq-streams -n openshift-operators", "delete csv amqstreams. <version> -n openshift-operators", "get crd -l app=strimzi -o name | xargs oc delete", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete -f install/cluster-operator", "delete <resource_type> <resource_name> -n <namespace>", "delete secret my-cluster-clients-ca-cert -n my-project" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-uninstalling-str
Chapter 1. Installation methods
Chapter 1. Installation methods You can install OpenShift Container Platform on IBM Power(R) Virtual Server using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Power(R) Virtual Server using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 1.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Power(R) Virtual Server infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Power(R) Virtual Server : You can install a customized cluster on IBM Power(R) Virtual Server infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Power(R) Virtual Server into an existing VPC : You can install OpenShift Container Platform on IBM Power(R) Virtual Server into an existing Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on IBM Power(R) Virtual Server : You can install a private cluster on IBM Power(R) Virtual Server. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. Installing a cluster on IBM Power(R) Virtual Server in a restricted network : You can install OpenShift Container Platform on IBM Power(R) Virtual Server on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. 1.2. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on IBM Power(R) Virtual Server, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: $ ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys 1.3. Next steps Configuring an IBM Cloud(R) account
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_power_virtual_server/preparing-to-install-on-ibm-power-vs
Chapter 32. Performing cluster maintenance
Chapter 32. Performing cluster maintenance In order to perform maintenance on the nodes of your cluster, you may need to stop or move the resources and services running on that cluster. Or you may need to stop the cluster software while leaving the services untouched. Pacemaker provides a variety of methods for performing system maintenance. If you need to stop a node in a cluster while continuing to provide the services running on that cluster on another node, you can put the cluster node in standby mode. A node that is in standby mode is no longer able to host resources. Any resource currently active on the node will be moved to another node, or stopped if no other node is eligible to run the resource. For information about standby mode, see Putting a node into standby mode . If you need to move an individual resource off the node on which it is currently running without stopping that resource, you can use the pcs resource move command to move the resource to a different node. When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. When you are ready to move the resource back, you can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node, however, since where the resources can run at that point depends on how you have configured your resources initially. You can relocate a resource to its preferred node with the pcs resource relocate run command. If you need to stop a running resource entirely and prevent the cluster from starting it again, you can use the pcs resource disable command. For information on the pcs resource disable command, see Disabling, enabling, and banning cluster resources . If you want to prevent Pacemaker from taking any action for a resource (for example, if you want to disable recovery actions while performing maintenance on the resource, or if you need to reload the /etc/sysconfig/pacemaker settings), use the pcs resource unmanage command, as described in Setting a resource to unmanaged mode . Pacemaker Remote connection resources should never be unmanaged. If you need to put the cluster in a state where no services will be started or stopped, you can set the maintenance-mode cluster property. Putting the cluster into maintenance mode automatically unmanages all resources. For information about putting the cluster in maintenance mode, see Putting a cluster in maintenance mode . If you need to update the packages that make up the RHEL High Availability and Resilient Storage Add-Ons, you can update the packages on one node at a time or on the entire cluster as a whole, as summarized in Updating a RHEL high availability cluster . If you need to perform maintenance on a Pacemaker remote node, you can remove that node from the cluster by disabling the remote node resource, as described in Upgrading remote nodes and guest nodes . If you need to migrate a VM in a RHEL cluster, you will first need to stop the cluster services on the VM to remove the node from the cluster and then start the cluster back up after performing the migration. as described in Migrating VMs in a RHEL cluster . 32.1. Putting a node into standby mode When a cluster node is in standby mode, the node is no longer able to host resources. Any resources currently active on the node will be moved to another node. The following command puts the specified node into standby mode. 
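For readability, the standby syntax from the command listing at the end of this chapter is reproduced here, together with a brief example; the node name z1.example.com is hypothetical.

pcs node standby node | --all
# Example with a hypothetical node name
pcs node standby z1.example.com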
If you specify the --all , this command puts all nodes into standby mode. You can use this command when updating a resource's packages. You can also use this command when testing a configuration, to simulate recovery without actually shutting down a node. The following command removes the specified node from standby mode. After running this command, the specified node is then able to host resources. If you specify the --all , this command removes all nodes from standby mode. Note that when you execute the pcs node standby command, this prevents resources from running on the indicated node. When you execute the pcs node unstandby command, this allows resources to run on the indicated node. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. 32.2. Manually moving cluster resources You can override the cluster and force resources to move from their current location. There are two occasions when you would want to do this: When a node is under maintenance, and you need to move all resources running on that node to a different node When individually specified resources needs to be moved To move all resources running on a node to a different node, you put the node in standby mode. You can move individually specified resources in either of the following ways. You can use the pcs resource move command to move a resource off a node on which it is currently running. You can use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. 32.2.1. Moving a resource from its current node To move a resource off the node on which it is currently running, use the following command, specifying the resource_id of the resource as defined. Specify the destination_node if you want to indicate on which node to run the resource that you are moving. Note When you run the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. As of RHEL 8.6, you can specify the --autodelete option for this command, which will cause the location constraint that this command creates to be removed automatically once the resource has been moved. For earlier releases, you can run the pcs resource clear or the pcs constraint delete command to remove the constraint manually. Removing the constraint does not necessarily move the resources back to the original node; where the resources can run at that point depends on how you have configured your resources initially. If you specify the --master parameter of the pcs resource move command, the constraint applies only to promoted instances of the resource. You can optionally configure a lifetime parameter for the pcs resource move command to indicate a period of time the constraint should remain. You specify the units of a lifetime parameter according to the format defined in ISO 8601, which requires that you specify the unit as a capital letter such as Y (for years), M (for months), W (for weeks), D (for days), H (for hours), M (for minutes), and S (for seconds). To distinguish a unit of minutes(M) from a unit of months(M), you must specify PT before indicating the value in minutes. For example, a lifetime parameter of 5M indicates an interval of five months, while a lifetime parameter of PT5M indicates an interval of five minutes. 
The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for one hour and thirty minutes. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for thirty minutes. 32.2.2. Moving a resource to its preferred node After a resource has moved, either due to a failover or to an administrator manually moving the node, it will not necessarily move back to its original node even after the circumstances that caused the failover have been corrected. To relocate resources to their preferred node, use the following command. A preferred node is determined by the current cluster status, constraints, resource location, and other settings and may change over time. If you do not specify any resources, all resource are relocated to their preferred nodes. This command calculates the preferred node for each resource while ignoring resource stickiness. After calculating the preferred node, it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved, the constraints are deleted automatically. To remove all constraints created by the pcs resource relocate run command, you can enter the pcs resource relocate clear command. To display the current status of resources and their optimal node ignoring resource stickiness, enter the pcs resource relocate show command. 32.3. Disabling, enabling, and banning cluster resources In addition to the pcs resource move and pcs resource relocate commands, there are a variety of other commands you can use to control the behavior of cluster resources. Disabling a cluster resource You can manually stop a running resource and prevent the cluster from starting it again with the following command. Depending on the rest of the configuration (constraints, options, failures, and so on), the resource may remain started. If you specify the --wait option, pcs will wait up to 'n' seconds for the resource to stop and then return 0 if the resource is stopped or 1 if the resource has not stopped. If 'n' is not specified it defaults to 60 minutes. As of RHEL 8.2, you can specify that a resource be disabled only if disabling the resource would not have an effect on other resources. Ensuring that this would be the case can be impossible to do by hand when complex resource relations are set up. The pcs resource disable --simulate command shows the effects of disabling a resource while not changing the cluster configuration. The pcs resource disable --safe command disables a resource only if no other resources would be affected in any way, such as being migrated from one node to another. The pcs resource safe-disable command is an alias for the pcs resource disable --safe command. The pcs resource disable --safe --no-strict command disables a resource only if no other resources would be stopped or demoted As of RHEL 8.5 you can specify the --brief option for the pcs resource disable --safe command to print errors only. Also as of RHEL 8.5, the error report that the pcs resource disable --safe command generates if the safe disable operation fails contains the affected resource IDs. If you need to know only the resource IDs of resources that would be affected by disabling a resource, use the --brief option, which does not provide the full simulation result. 
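Putting the safe-disable options together, a cautious shutdown of a single resource might look like the sketch below. The resource name resource1 reuses the example name from earlier in this chapter, and placing the --simulate, --safe, and --brief options after the resource ID is illustrative.

# Preview the effect of disabling the resource without changing the configuration
pcs resource disable resource1 --simulate
# Disable it only if no other resource would be affected, printing errors only
pcs resource disable resource1 --safe --brief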
Enabling a cluster resource Use the following command to allow the cluster to start a resource. Depending on the rest of the configuration, the resource may remain stopped. If you specify the --wait option, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started or 1 if the resource has not started. If 'n' is not specified it defaults to 60 minutes. Preventing a resource from running on a particular node Use the following command to prevent a resource from running on a specified node, or on the current node if no node is specified. Note that when you execute the pcs resource ban command, this adds a -INFINITY location constraint to the resource to prevent it from running on the indicated node. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. If you specify the --master parameter of the pcs resource ban command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id . You can optionally configure a lifetime parameter for the pcs resource ban command to indicate a period of time the constraint should remain. You can optionally configure a --wait[= n ] parameter for the pcs resource ban command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used. Forcing a resource to start on the current node Use the debug-start parameter of the pcs resource command to force a specified resource to start on the current node, ignoring the cluster recommendations and printing the output from starting the resource. This is mainly used for debugging resources; starting resources on a cluster is (almost) always done by Pacemaker and not directly with a pcs command. If your resource is not starting, it is usually due to either a misconfiguration of the resource (which you debug in the system log), constraints that prevent the resource from starting, or the resource being disabled. You can use this command to test resource configuration, but it should not normally be used to start resources in a cluster. The format of the debug-start command is as follows. 32.4. Setting a resource to unmanaged mode When a resource is in unmanaged mode, the resource is still in the configuration but Pacemaker does not manage the resource. The following command sets the indicated resources to unmanaged mode. The following command sets resources to managed mode, which is the default state. You can specify the name of a resource group with the pcs resource manage or pcs resource unmanage command. The command will act on all of the resources in the group, so that you can set all of the resources in a group to managed or unmanaged mode with a single command and then manage the contained resources individually. 32.5. Putting a cluster in maintenance mode When a cluster is in maintenance mode, the cluster does not start or stop any services until told otherwise. When maintenance mode is completed, the cluster does a sanity check of the current state of any services, and then stops or starts any that need it. 
To put a cluster in maintenance mode, use the following command to set the maintenance-mode cluster property to true . To remove a cluster from maintenance mode, use the following command to set the maintenance-mode cluster property to false . You can remove a cluster property from the configuration with the following command. Alternately, you can remove a cluster property from a configuration by leaving the value field of the pcs property set command blank. This restores that property to its default value. For example, if you have previously set the symmetric-cluster property to false , the following command removes the value you have set from the configuration and restores the value of symmetric-cluster to true , which is its default value. 32.6. Updating a RHEL high availability cluster Updating packages that make up the RHEL High Availability and Resilient Storage Add-Ons, either individually or as a whole, can be done in one of two general ways: Rolling Updates : Remove one node at a time from service, update its software, then integrate it back into the cluster. This allows the cluster to continue providing service and managing resources while each node is updated. Entire Cluster Update : Stop the entire cluster, apply updates to all nodes, then start the cluster back up. Warning It is critical that when performing software update procedures for Red Hat Enterprise Linux High Availability and Resilient Storage clusters, you ensure that any node that will undergo updates is not an active member of the cluster before those updates are initiated. For a full description of each of these methods and the procedures to follow for the updates, see Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster . 32.7. Upgrading remote nodes and guest nodes If the pacemaker_remote service is stopped on an active remote node or guest node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource's monitor timeout, the cluster will consider the monitor operation as failed. If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker Remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote . Procedure Stop the node's connection resource with the pcs resource disable resourcename command, which will move all services off the node. The connection resource would be the ocf:pacemaker:remote resource for a remote node or, commonly, the ocf:heartbeat:VirtualDomain resource for a guest node. For guest nodes, this command will also stop the VM, so the VM must be started outside the cluster (for example, using virsh ) to perform any maintenance. Perform the required maintenance. When ready to return the node to the cluster, re-enable the resource with the pcs resource enable command. 32.8. Migrating VMs in a RHEL cluster Red Hat does not support live migration of active cluster nodes across hypervisors or hosts, as noted in Support Policies for RHEL High Availability Clusters - General Conditions with Virtualized Cluster Members . 
If you need to perform a live migration, you will first need to stop the cluster services on the VM to remove the node from the cluster, and then start the cluster back up after performing the migration. The following steps outline the procedure for removing a VM from a cluster, migrating the VM, and restoring the VM to the cluster. This procedure applies to VMs that are used as full cluster nodes, not to VMs managed as cluster resources (including VMs used as guest nodes) which can be live-migrated without special precautions. For general information about the fuller procedure required for updating packages that make up the RHEL High Availability and Resilient Storage Add-Ons, either individually or as a whole, see Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster . Note Before performing this procedure, consider the effect on cluster quorum of removing a cluster node. For example, if you have a three-node cluster and you remove one node, your cluster cannot withstand any node failure. This is because if one node of a three-node cluster is already down, removing a second node will lose quorum. Procedure If any preparations need to be made before stopping or moving the resources or software running on the VM to migrate, perform those steps. Run the following command on the VM to stop the cluster software on the VM. Perform the live migration of the VM. Start cluster services on the VM. 32.9. Identifying clusters by UUID As of Red Hat Enterprise Linux 8.7, when you create a cluster it has an associated UUID. Since a cluster name is not a unique cluster identifier, a third-party tool such as a configuration management database that manages multiple clusters with the same name can uniquely identify a cluster by means of its UUID. You can display the current cluster UUID with the pcs cluster config [show] command, which includes the cluster UUID in its output. To add a UUID to an existing cluster, run the following command. To regenerate a UUID for a cluster with an existing UUID, run the following command.
[ "pcs node standby node | --all", "pcs node unstandby node | --all", "pcs resource move resource_id [ destination_node ] [--master] [lifetime= lifetime ]", "pcs resource move resource1 example-node2 lifetime=PT1H30M", "pcs resource move resource1 example-node2 lifetime=PT30M", "pcs resource relocate run [ resource1 ] [ resource2 ]", "pcs resource disable resource_id [--wait[= n ]]", "pcs resource enable resource_id [--wait[= n ]]", "pcs resource ban resource_id [ node ] [--master] [lifetime= lifetime ] [--wait[= n ]]", "pcs resource debug-start resource_id", "pcs resource unmanage resource1 [ resource2 ]", "pcs resource manage resource1 [ resource2 ]", "pcs property set maintenance-mode=true", "pcs property set maintenance-mode=false", "pcs property unset property", "pcs property set symmetric-cluster=", "pcs resource disable resourcename", "pcs resource enable resourcename", "pcs cluster stop", "pcs cluster start", "pcs cluster config uuid generate", "pcs cluster config uuid generate --force" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_cluster-maintenance-configuring-and-managing-high-availability-clusters
Chapter 4. Policy Parameters
Chapter 4. Policy Parameters These parameters allow you to set policies on a per-service basis.
BarbicanPolicies - A hash of policies to configure for OpenStack Key Manager (barbican).
CinderApiPolicies - A hash of policies to configure for OpenStack Block Storage (cinder) API.
GlanceApiPolicies - A hash of policies to configure for OpenStack Image Storage (glance) API.
HeatApiPolicies - A hash of policies to configure for OpenStack Orchestration (heat) API.
IronicApiPolicies - A hash of policies to configure for OpenStack Bare Metal (ironic) API.
KeystonePolicies - A hash of policies to configure for OpenStack Identity (keystone).
NeutronApiPolicies - A hash of policies to configure for OpenStack Networking (neutron) API.
NovaApiPolicies - A hash of policies to configure for OpenStack Compute (nova) API.
SaharaApiPolicies - A hash of policies to configure for OpenStack Clustering (sahara) API.
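These parameters are normally supplied in a custom environment file that is passed to the overcloud deployment command. The snippet below is only a sketch of that general pattern: the environment file name, the policy key, and the rule value are assumptions, and the exact hash format for each policies parameter should be checked against your Red Hat OpenStack Platform documentation.

# Illustrative only: policy key, rule, and file name are assumptions
cat > nova-api-policies.yaml <<'EOF'
parameter_defaults:
  NovaApiPolicies: { nova-context_is_admin: { key: 'context_is_admin', value: 'role:admin' } }
EOF
openstack overcloud deploy --templates -e nova-api-policies.yaml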
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/policy-parameters
Chapter 3. Installing the Ansible plug-ins with the Operator on OpenShift Container Platform
Chapter 3. Installing the Ansible plug-ins with the Operator on OpenShift Container Platform The following procedures describe how to install Ansible plug-ins in Red Hat Developer Hub instances on Red Hat OpenShift Container Platform using the Operator. 3.1. Prerequisites Red Hat Developer Hub installed on Red Hat OpenShift Container Platform. For Helm installation, follow the steps in the Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart section of Installing Red Hat Developer Hub on OpenShift Container Platform . For Operator installation, follow the steps in the Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator section of Installing Red Hat Developer Hub on OpenShift Container Platform . A valid subscription to Red Hat Ansible Automation Platform. An OpenShift Container Platform instance with the appropriate permissions within your project to create an application. The Red Hat Developer Hub instance can query the automation controller API. Optional: To use the integrated learning paths, you must have outbound access to developers.redhat.com. 3.2. Recommended RHDH preconfiguration Red Hat recommends performing the following initial configuration tasks in RHDH. However, you can install the Ansible plug-ins for Red Hat Developer Hub before completing these tasks. Setting up authentication in RHDH Installing and configuring RBAC in RHDH Note Red Hat provides a repository of software templates for RHDH that uses the publish:github action. To use these software templates, you must install the required GitHub dynamic plugins. 3.3. Backing up your RHDH Operator ConfigMap Before you install Ansible plug-ins for Red Hat Developer Hub, create a local copy of the ConfigMap for the RHDH Operator. You can use a section of the ConfigMap when you are populating a custom ConfigMap. Procedure Find the namespace for your RHDH Operator. When you installed the RHDH Operator, a namespace was created for it. Select Topology and look for the RHDH Operator in the Project dropdown list. The default namespace is rhdh-operator . Run the following command to make a copy of the ConfigMap for your RHDH Operator, backstage-default-config . Replace <rhdh-operator-namespace> with your RHDH Operator namespace, and <CopyOfRhdhOperatorConfig> with the filename you want to use for your copy of the RHDH Operator. USD oc get configmap backstage-default-config -n <rhdh-operator-namespace> -o yaml > <CopyOfRhdhOperatorConfig> 3.4. Creating a custom Operator ConfigMap Create a custom ConfigMap, for instance rhdh-custom-config , for your project. For more details about creating a custom ConfigMap, see the Adding a custom application configuration file to OpenShift Container Platform using the Operator in the Administration guide for Red Hat Developer Hub . Populate the ConfigMap with YAML from the backup that you made of the RHDH Operator ConfigMap. Prerequisites You have saved a backup copy of the Configmap for the RHDH Operator. Procedure In the OpenShift web console, navigate to the project you created. Click ConfigMaps in the navigation pane. Click Create ConfigMap . Replace the default YAML code in the new ConfigMap with the following code: apiVersion: v1 kind: ConfigMap metadata: name: rhdh-custom-config data: deployment.yaml: |- # Replace with RHDH Operator ConfigMap deployment.yaml block here Copy the deployment.yaml: section from your local copy of the RHDH Operator ConfigMap. 
Paste the deployment.yaml: section into the rhdh-custom-config ConfigMap, replacing the deployment.yaml: line. Add a sidecar container ( ansible-devtools-server ) to the list of containers under resources in the deployment.spec.template.spec.[containers] block of the ConfigMap: spec: replicas: 1 selector: matchLabels: rhdh.redhat.com/app: template: metadata: labels: rhdh.redhat.com/app: spec:\ ... containers: - name: backstage-backend ... - resources: {} # Add sidecar container for Ansible plug-ins terminationMessagePath: /dev/termination-log name: ansible-devtools-server command: - adt - server ports: - containerPort: 8000 protocol: TCP imagePullPolicy: IfNotPresent terminationMessagePolicy: File image: 'ghcr.io/ansible/community-ansible-dev-tools:latest' Click Create to create the ConfigMap. Verification To view your new ConfigMap, click ConfigMaps in the navigation pane. 3.5. Adding the rhdh-custom-config file to the RHDH Operator Custom Resource Update the RHDH Operator Custom Resource to add the rhdh-custom-config file. In the OpenShift console, select the Topology view. Click More actions ... on the RHDH Operator Custom Resource and select Edit backstage to edit the Custom Resource. Add a rawRuntimeConfig: block for your custom ConfigMap spec: block. It must have the same indentation level as the spec.application: block. spec: application: ... database: ... rawRuntimeConfig: backstageConfig: rhdh-custom-config Click Save . The RHDH Operator redeploys the pods to reflect the updated Custom Resource. 3.6. Downloading the Ansible plug-ins files Download the latest .tar file for the plug-ins from the Red Hat Ansible Automation Platform Product Software downloads page . The format of the filename is ansible-backstage-rhaap-bundle-x.y.z.tar.gz . Substitute the Ansible plug-ins release version, for example 1.0.0 , for x.y.z . Create a directory on your local machine to store the .tar files. USD mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme> Set an environment variable ( USDDYNAMIC_PLUGIN_ROOT_DIR ) to represent the directory path. USD export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme> Extract the ansible-backstage-rhaap-bundle-<version-number>.tar.gz contents to USDDYNAMIC_PLUGIN_ROOT_DIR . USD tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C USDDYNAMIC_PLUGIN_ROOT_DIR Substitute the Ansible plug-ins release version, for example 1.0.0 , for x.y.z . Verification Run ls to verify that the extracted files are in the USDDYNAMIC_PLUGIN_ROOT_DIR directory: USD ls USDDYNAMIC_PLUGIN_ROOT_DIR ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity The files with the .integrity file type contain the plugin SHA value. The SHA value is used during the plug-in configuration. 3.7. Creating a registry for the Ansible plug-ins Set up a registry in your OpenShift cluster to host the Ansible plug-ins and make them available for installation in Red Hat Developer Hub (RHDH). Procedure Log in to your OpenShift Container Platform instance with credentials to create a new application. Open your Red Hat Developer Hub OpenShift project. 
USD oc project <YOUR_DEVELOPER_HUB_PROJECT> Run the following commands to create a plug-in registry build in the OpenShift cluster. USD oc new-build httpd --name=plugin-registry --binary USD oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait USD oc new-app --image-stream=plugin-registry Verification To verify that the plugin-registry was deployed successfully, open the Topology view in the Developer perspective on the Red Hat Developer Hub application in the OpenShift Web console. Click the plug-in registry to view the log. (1) Developer hub instance (2) Plug-in registry Click the terminal tab and login to the container. In the terminal, run ls to confirm that the .tar files are in the plugin registry. ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz The version numbers and file names can differ. 3.8. Installing the dynamic plug-ins To install the dynamic plugins, add them to your ConfigMap for your RHDH plugin settings (for example, rhaap-dynamic-plugins-config ). If you have not already created a ConfigMap file for your RHDH plugin settings, create one by following the procedure in Adding a custom application configuration file to Red Hat OpenShift Container Platform section of the Administration guide for Red Hat Developer Hub . The example ConfigMap used in the following procedure is called rhaap-dynamic-plugins-config . Procedure Select ConfigMaps in the navigation pane of the OpenShift console. Select the rhaap-dynamic-plugins-config ConfigMap from the list. Select the YAML tab to edit the rhaap-dynamic-plugins-config ConfigMap. In the data.dynamic-plugins.yaml.plugins block, add the three dynamic plug-ins from the plug-in registry. For the integrity hash values, use the .integrity files in your USDDYNAMIC_PLUGIN_ROOT_DIR directory that correspond to each plug-in, for example use ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity for the ansible-plugin-backstage-rhaap-x.y.z.tgz plug-in. Replace x.y.z with the correct version of the plug-ins. kind: ConfigMap apiVersion: v1 metadata: name: rhaap-dynamic-plugins-config data: dynamic-plugins.yaml: | ... plugins: - disabled: false package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' integrity: <SHA512 value> # Use hash in ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz integrity: <SHA512 value> # Use hash in ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null - disabled: false package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz integrity: <SHA512 value> # Use hash in ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - ...<REDACTED> Click Save . To view the progress of the rolling restart: In the Topology view, select the deployment pod and click View logs . Select install-dynamic-plugins from the list of containers. Verification In the OpenShift console, select the Topology view. 
Click the Open URL icon on the deployment pod to open your Red Hat Developer Hub instance in a browser window. The Ansible plug-in is present in the navigation pane, and if you select Administration , the installed plug-ins are listed in the Plugins tab. 3.9. Adding a custom ConfigMap Create a Red Hat Developer Hub ConfigMap following the procedure in Adding a custom application configuration file to Red Hat OpenShift Container Platform in the Administration guide for Red Hat Developer Hub . The examples below use a custom ConfigMap named app-config-rhdh . To edit your custom ConfigMap, log in to the OpenShift UI and navigate to Select Project ( developerHubProj ) → ConfigMaps → {developer-hub}-app-config → Edit ConfigMaps → app-config-rhdh . 3.10. Configuring the Ansible Dev Tools Server The creatorService URL is required for the Ansible plug-ins to provision new projects using the provided software templates. Procedure Edit your custom Red Hat Developer Hub config map, app-config-rhdh , that you created in Adding a custom ConfigMap . Add the following code to your Red Hat Developer Hub app-config-rhdh.yaml file. kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh ... data: app-config-rhdh.yaml: |- ansible: creatorService: baseUrl: 127.0.0.1 port: '8000' ... 3.11. Configuring Ansible Automation Platform details The Ansible plug-ins query your Ansible Automation Platform subscription status with the controller API using a token. Note The Ansible plug-ins continue to function regardless of the Ansible Automation Platform subscription status. Procedure Create a Personal Access Token (PAT) with "Read" scope in automation controller, following the Adding tokens section of the Automation controller user guide . Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh . Add your Ansible Automation Platform details to app-config-rhdh.yaml . Set the baseUrl key to your automation controller URL. Set the token key to the generated token value that you created in Step 1. Set the checkSSL key to true or false . If checkSSL is set to true , the Ansible plug-ins verify whether the SSL certificate is valid. data: app-config-rhdh.yaml: | ... ansible: ... rhaap: baseUrl: '<https://MyControllerUrl>' token: '<AAP Personal Access Token>' checkSSL: true Note You are responsible for protecting your Red Hat Developer Hub installation from external and unauthorized access. Manage the backend authentication key like any other secret. Meet strong password requirements, do not expose it in any configuration files, and only inject it into configuration files as an environment variable. 3.12. Adding Ansible plug-ins software templates Red Hat Ansible provides software templates for Red Hat Developer Hub to provision new playbooks and collection projects based on Ansible best practices. Procedure Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh . Add the following code to your Red Hat Developer Hub app-config-rhdh.yaml file. data: app-config-rhdh.yaml: | catalog: ... locations: ... - type: url target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml rules: - allow: [Template] For more information, refer to the Managing templates section of the Administration guide for Red Hat Developer Hub . 3.13. Configuring Role Based Access Control Red Hat Developer Hub offers Role-based Access Control (RBAC) functionality. RBAC can be applied to the Ansible plug-ins content.
Assign the following roles: Members of the admin:superUsers group can select templates in the Create tab of the Ansible plug-ins to create playbook and collection projects. Members of the admin:users group can view templates in the Create tab of the Ansible plug-ins. The following example adds RBAC to Red Hat Developer Hub. data: app-config-rhdh.yaml: | plugins: ... permission: enabled: true rbac: admin: users: - name: user:default/<user-scm-ida> superUsers: - name: user:default/<user-admin-idb> For more information about permission policies and managing RBAC, refer to the Authorization guide for Red Hat Developer Hub. 3.14. Optional configuration for Ansible plug-ins 3.14.1. Enabling Red Hat Developer Hub authentication Red Hat Developer Hub (RHDH) provides integrations for multiple Source Control Management (SCM) systems. This integration is required by the plug-ins to create repositories. Refer to the Enabling authentication in Red Hat Developer Hub chapter of the Administration guide for Red Hat Developer Hub . 3.14.2. Configuring Ansible plug-ins optional integrations The Ansible plug-ins provide integrations with Ansible Automation Platform and other optional Red Hat products. To edit your custom ConfigMap, log in to the OpenShift UI and navigate to Select Project ( developerHubProj ) → ConfigMaps → {developer-hub}-app-config-rhdh → app-config-rhdh . 3.14.2.1. Configuring OpenShift Dev Spaces When OpenShift Dev Spaces is configured for the Ansible plug-ins, users can click a link from the catalog item view in Red Hat Developer Hub and edit their provisioned Ansible Git projects using Dev Spaces. Note OpenShift Dev Spaces is a separate Red Hat product and is optional. The plug-ins will function without it. It is not included in the Ansible Automation Platform or Red Hat Developer Hub subscription. If the OpenShift Dev Spaces link is not configured in the Ansible plug-ins, the Go to OpenShift Dev Spaces dashboard link in the DEVELOP section of the Ansible plug-ins landing page redirects users to the Ansible development tools home page . Prerequisites A Dev Spaces installation. Refer to the Installing Dev Spaces section of the Red Hat OpenShift Dev Spaces Administration guide . Procedure Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh . Add the following code to your Red Hat Developer Hub app-config-rhdh.yaml file. data: app-config-rhdh.yaml: |- ansible: devSpaces: baseUrl: >- https://<Your OpenShift Dev Spaces URL> Replace <Your OpenShift Dev Spaces URL> with your OpenShift Dev Spaces URL. In the OpenShift Developer UI, select the Red Hat Developer Hub pod. Open Actions . Click Restart rollout . 3.14.2.2. Configuring the private automation hub URL Private automation hub provides a centralized, on-premise repository for certified Ansible collections, execution environments, and any additional, vetted content provided by your organization. If the private automation hub URL is not configured in the Ansible plug-ins, users are redirected to the Red Hat Hybrid Cloud Console automation hub . Note The private automation hub configuration is optional but recommended. The Ansible plug-ins will function without it. Prerequisites A private automation hub instance. For more information on installing private automation hub, refer to the Installation and Upgrade guides in the Ansible Automation Platform documentation. Procedure Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh .
Add the following code to your Red Hat Developer Hub app-config-rhdh.yaml file. data: app-config-rhdh.yaml: |- ansible: ... automationHub: baseUrl: '<https://MyOwnPAHUrl>' ... Replace <https://MyOwnPAHUrl> with your private automation hub URL. In the OpenShift Developer UI, select the Red Hat Developer Hub pod. Open Actions . Click Restart rollout . 3.15. Full app-config-rhdh ConfigMap example for Ansible plug-ins entries kind: ConfigMap ... metadata: name: app-config-rhdh ... data: app-config-rhdh.yaml: |- ansible: creatorService: baseUrl: 127.0.0.1 port: '8000' rhaap: baseUrl: '<https://MyControllerUrl>' token: '<AAP Personal Access Token>' checkSSL: <true or false> # Optional integrations devSpaces: baseUrl: '<https://MyDevSpacesURL>' automationHub: baseUrl: '<https://MyPrivateAutomationHubURL>' ... catalog: locations: - type: url target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml rules: - allow: [Template] ...
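The integrity value for each plug-in entry in the rhaap-dynamic-plugins-config ConfigMap has to be copied from the matching .integrity file by hand. The following helper is a minimal sketch rather than part of the documented procedure; it assumes only the $DYNAMIC_PLUGIN_ROOT_DIR variable and the .integrity files created in Downloading the Ansible plug-ins files, and prints each archive name next to its hash so that the values can be pasted into the ConfigMap:
$ for f in "$DYNAMIC_PLUGIN_ROOT_DIR"/*.integrity; do
    # print the .tgz file name, then the integrity hash stored alongside it
    printf '%s  %s\n' "$(basename "${f%.integrity}")" "$(cat "$f")"
  done
The output contains one line per plug-in, for example ansible-plugin-backstage-rhaap-x.y.z.tgz followed by its SHA512 integrity string.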
[ "oc get configmap backstage-default-config -n <rhdh-operator-namespace> -o yaml > <CopyOfRhdhOperatorConfig>", "apiVersion: v1 kind: ConfigMap metadata: name: rhdh-custom-config data: deployment.yaml: |- # Replace with RHDH Operator ConfigMap deployment.yaml block here", "spec: replicas: 1 selector: matchLabels: rhdh.redhat.com/app: template: metadata: labels: rhdh.redhat.com/app: spec: containers: - name: backstage-backend - resources: {} # Add sidecar container for Ansible plug-ins terminationMessagePath: /dev/termination-log name: ansible-devtools-server command: - adt - server ports: - containerPort: 8000 protocol: TCP imagePullPolicy: IfNotPresent terminationMessagePolicy: File image: 'ghcr.io/ansible/community-ansible-dev-tools:latest'", "spec: application: database: rawRuntimeConfig: backstageConfig: rhdh-custom-config", "mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme>", "export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme>", "tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C USDDYNAMIC_PLUGIN_ROOT_DIR", "ls USDDYNAMIC_PLUGIN_ROOT_DIR ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity", "oc project <YOUR_DEVELOPER_HUB_PROJECT>", "oc new-build httpd --name=plugin-registry --binary oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait oc new-app --image-stream=plugin-registry", "ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz", "kind: ConfigMap apiVersion: v1 metadata: name: rhaap-dynamic-plugins-config data: dynamic-plugins.yaml: | plugins: - disabled: false package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' integrity: <SHA512 value> # Use hash in ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz integrity: <SHA512 value> # Use hash in ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null - disabled: false package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz integrity: <SHA512 value> # Use hash in ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - ...<REDACTED>", "kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: app-config-rhdh.yaml: |- ansible: creatorService: baseUrl: 127.0.0.1 port: '8000'", "data: app-config-rhdh.yaml: | ansible: rhaap: baseUrl: '<https://MyControllerUrl>' token: '<AAP Personal Access Token>' checkSSL: true", "data: app-config-rhdh.yaml: | catalog: locations: - type: url target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml rules: - allow: [Template]", "data: app-config-rhdh.yaml: | plugins: permission: 
enabled: true rbac: admin: users: - name: user:default/<user-scm-ida> superUsers: - name: user:default/<user-admin-idb>", "data: app-config-rhdh.yaml: |- ansible: devSpaces: baseUrl: >- https://<Your OpenShift Dev Spaces URL>", "data: app-config-rhdh.yaml: |- ansible: automationHub: baseUrl: '<https://MyOwnPAHUrl>'", "kind: ConfigMap metadata: name: app-config-rhdh data: app-config-rhdh.yaml: |- ansible: creatorService: baseUrl: 127.0.0.1 port: '8000' rhaap: baseUrl: '<https://MyControllerUrl>' token: '<AAP Personal Access Token>' checkSSL: <true or false> # Optional integrations devSpaces: baseUrl: '<https://MyDevSpacesURL>' automationHub: baseUrl: '<https://MyPrivateAutomationHubURL>' catalog: locations: - type: url target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml rules: - allow: [Template]" ]
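Several of the procedures above end by selecting the Red Hat Developer Hub pod in the OpenShift Developer UI, opening Actions , and clicking Restart rollout . If you prefer the command line, the equivalent step is a single oc command; the deployment name below is a placeholder for your Developer Hub deployment rather than a name taken from this guide:
$ oc rollout restart deployment/<your-developer-hub-deployment>
The pods are recreated with the updated ConfigMap contents; you can follow the progress with oc rollout status deployment/<your-developer-hub-deployment> .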
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_ansible_plug-ins_for_red_hat_developer_hub/rhdh-install-ocp-operator_aap-plugin-rhdh-installing
1.4. Changing the Data Warehouse Sampling Scale
1.4. Changing the Data Warehouse Sampling Scale Data Warehouse is required in Red Hat Virtualization. It can be installed and configured on the same machine as the Manager, or on a separate machine with access to the Manager. The default data retention settings may not be required for all setups, so engine-setup offers two data sampling scales: Basic and Full . Full uses the default values for the data retention settings listed in Section 2.4, "Application Settings for the Data Warehouse service in ovirt-engine-dwhd.conf" (recommended when Data Warehouse is installed on a remote host). Basic reduces the values of DWH_TABLES_KEEP_HOURLY to 720 and DWH_TABLES_KEEP_DAILY to 0 , easing the load on the Manager machine. Use Basic when the Manager and Data Warehouse are installed on the same machine. The sampling scale is configured by engine-setup during installation: You can change the sampling scale later by running engine-setup again with the --reconfigure-dwh-scale option; the Changing the Data Warehouse Sampling Scale example below shows a full interactive run. You can also adjust individual data retention settings if necessary, as documented in Section 2.4, "Application Settings for the Data Warehouse service in ovirt-engine-dwhd.conf" .
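To check which retention values a given setup is actually using, you can search the Data Warehouse service configuration for the two settings named above. The following is a minimal sketch rather than part of the documented procedure; it assumes the standard layout in which local overrides for ovirt-engine-dwhd.conf are kept under /etc/ovirt-engine-dwh/ on the Data Warehouse machine (built-in defaults live elsewhere, so no output simply means nothing has been overridden):
grep -rE 'DWH_TABLES_KEEP_(HOURLY|DAILY)' /etc/ovirt-engine-dwh/
On a Basic-scale setup the effective values correspond to DWH_TABLES_KEEP_HOURLY=720 and DWH_TABLES_KEEP_DAILY=0 ; a Full-scale setup keeps the defaults listed in Section 2.4.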
[ "--== MISC CONFIGURATION ==-- Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]:", "engine-setup --reconfigure-dwh-scale [...] Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]: [...] Perform full vacuum on the oVirt engine history database ovirt_engine_history@localhost? This operation may take a while depending on this setup health and the configuration of the db vacuum process. See https://www.postgresql.org/docs/9.0/static/sql-vacuum.html (Yes, No) [No]: [...] Setup can backup the existing database. The time and space required for the database backup depend on its size. This process takes time, and in some cases (for instance, when the size is few GBs) may take several hours to complete. If you choose to not back up the database, and Setup later fails for some reason, it will not be able to restore the database and all DWH data will be lost. Would you like to backup the existing database before upgrading it? (Yes, No) [Yes]: [...] Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]: 2 [...] During execution engine service will be stopped (OK, Cancel) [OK]: [...] Please confirm installation settings (OK, Cancel) [OK]:" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/data_warehouse_guide/Changing_the_Data_Warehouse_Sampling_Scale
10.4. CMC SharedSecret Authentication
10.4. CMC SharedSecret Authentication Use the Shared Secret feature to enable users to send unsigned CMC requests to the server. For example, this is necessary if a user wants to obtain the first signing certificate. This signing certificate can later be used to sign other certificates of this user. 10.4.1. Creating a Shared Secret Token The The Shared Secret Workflow section in the Red Hat Certificate System Planning, Installation, and Deployment Guide describes the workflow when using a Shared Secret Token. Depending on the situation, either an end entity user or an administrator creates the Shared Secret Token. Note To use the shared secret token, Certificate System must use an RSA issuance protection certificate. For details, see Enabling the CMC Shared Secret Feature section located in RHCS Planning, Installation, and Deployment Guide. To create a Shared Secret Token, enter: If you use an HSM, additionally pass the -h token_name option to the command to set the HSM security token name. For further details about the CMCSharedToken utility, see the CMCSharedToken (8) man page. Note The generated token is encrypted and only the user who generated knows the password. If a CA administrator generates the token for a user, the administrator must provide the password to the user using a secure way. After creating the Shared Token, an administrator must add the token to a user or certificate record. For details, see Section 10.4.2, "Setting a CMC Shared Secret" . 10.4.2. Setting a CMC Shared Secret Depending on the planned action, an administrator must store a Shared Secret Token after generating it in the LDAP entry of the user or certificate. For details about the workflow and when to use a Shared Secret, see the The Shared Secret Workflow section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . 10.4.2.1. Adding a CMC Shared Secret to a User Entry for Certificate Enrollment To use the Shared Secret Token for certificate enrollment, store it as an administrator in the LDAP entry of the user: 10.4.2.2. Adding a CMC Shared Secret to a Certificate for Certificate Revocations To use the Shared Secret Token for certificate revocations, store it as an administrator in the LDAP entry of the certificate to be revoked:
[ "CMCSharedToken -d /home/user_name/.dogtag/ -p NSS_password -s \" CMC_enrollment_password \" -o /home/user_name/CMC_shared_token.b64 -n \" issuance_protection_certificate_nickname \"", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: uid= user_name ,ou=People,dc=example,dc=com changetype: modify replace: shrTok shrTok: base64-encoded_token", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn= certificate_id ,ou=certificateRepository,ou=ca,o= pki-tomcat-CA changetype: modify replace: shrTok shrTok: base64-encoded_token" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/cmc_sharedsecret_authentication