Chapter 4. Installing a three-node cluster on Nutanix
Chapter 4. Installing a three-node cluster on Nutanix In OpenShift Container Platform version 4.14, you can install a three-node cluster on Nutanix. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. 4.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster

apiVersion: v1
baseDomain: example.com
compute:
- name: worker
  platform: {}
  replicas: 0
# ...

4.2. Next steps Installing a cluster on Nutanix
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_nutanix/installing-nutanix-three-node
function::module_name
function::module_name Name function::module_name - The module name of the current script Synopsis module_name:string() Arguments None Description This function returns the name of the stap module. The name is either generated randomly (stap_[0-9a-f]+_[0-9a-f]+) or set explicitly with stap -m <module_name>.
[ "module_name:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-module-name
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/making-open-source-more-inclusive
Troubleshooting OpenShift Data Foundation
Troubleshooting OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.18 Instructions on troubleshooting OpenShift Data Foundation Red Hat Storage Documentation Team Abstract Read this document for instructions on troubleshooting Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Overview Troubleshooting OpenShift Data Foundation is written to help administrators understand how to troubleshoot and fix their Red Hat OpenShift Data Foundation cluster. Most troubleshooting tasks focus on either a fix or a workaround. This document is divided into chapters based on the errors that an administrator may encounter: Chapter 2, Downloading log files and diagnostic information using must-gather shows you how to use the must-gather utility in OpenShift Data Foundation. Chapter 4, Commonly required logs for troubleshooting shows you how to obtain commonly required log files for OpenShift Data Foundation. Chapter 7, Troubleshooting alerts and errors in OpenShift Data Foundation shows you how to identify the encountered error and perform required actions. Warning Red Hat does not support running Ceph commands in OpenShift Data Foundation clusters (unless indicated by Red Hat support or Red Hat documentation) as it can cause data loss if you run the wrong commands. In that case, the Red Hat support team is only able to provide commercially reasonable effort and may not be able to restore all the data in case of any data loss. Chapter 2. Downloading log files and diagnostic information using must-gather If Red Hat OpenShift Data Foundation is unable to automatically resolve a problem, use the must-gather tool to collect log files and diagnostic information so that you or Red Hat support can review the problem and determine a solution. Important When Red Hat OpenShift Data Foundation is deployed in external mode, must-gather only collects logs from the OpenShift Data Foundation cluster and does not collect debug data and logs from the external Red Hat Ceph Storage cluster. To collect debug logs from the external Red Hat Ceph Storage cluster, see Red Hat Ceph Storage Troubleshooting guide and contact your Red Hat Ceph Storage Administrator. Prerequisites Optional: If OpenShift Data Foundation is deployed in a disconnected environment, ensure that you mirror the individual must-gather image to the mirror registry available from the disconnected environment. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. <path-to-the-registry-config> Is the path to your registry credentials, by default it is ~/.docker/config.json . 
--insecure Add this flag only if the mirror registry is insecure. For more information, see the Red Hat Knowledgebase solutions: How to mirror images between Redhat Openshift registries Failed to mirror OpenShift image repository when private registry is insecure Procedure Run the must-gather command from the client connected to the OpenShift Data Foundation cluster: <directory-name> Is the name of the directory where you want to write the data to. Important For a disconnected environment deployment, replace the image in --image parameter with the mirrored must-gather image. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. This collects the following information in the specified directory: All Red Hat OpenShift Data Foundation cluster related Custom Resources (CRs) with their namespaces. Pod logs of all the Red Hat OpenShift Data Foundation related pods. Output of some standard Ceph commands like Status, Cluster health, and others. 2.1. Variations of must-gather-commands If one or more master nodes are not in the Ready state, use --node-name to provide a master node that is Ready so that the must-gather pod can be safely scheduled. If you want to gather information from a specific time: To specify a relative time period for logs gathered, such as within 5 seconds or 2 days, add /usr/bin/gather since=<duration> : To specify a specific time to gather logs after, add /usr/bin/gather since-time=<rfc3339-timestamp> : Replace the example values in these commands as follows: <node-name> If one or more master nodes are not in the Ready state, use this parameter to provide the name of a master node that is still in the Ready state. This avoids scheduling errors by ensuring that the must-gather pod is not scheduled on a master node that is not ready. <directory-name> The directory to store information collected by must-gather . <duration> Specify the period of time to collect information from as a relative duration, for example, 5h (starting from 5 hours ago). <rfc3339-timestamp> Specify the period of time to collect information from as an RFC 3339 timestamp, for example, 2020-11-10T04:00:00+00:00 (starting from 4 am UTC on 11 Nov 2020). 2.2. Running must-gather in modular mode Red Hat OpenShift Data Foundation must-gather can take a long time to run in some environments. To avoid this, run must-gather in modular mode and collect only the resources you require using the following command: Replace < -arg> with one or more of the following arguments to specify the resources for which the must-gather logs is required. -o , --odf ODF logs (includes Ceph resources, namespaced resources, clusterscoped resources and Ceph logs) -d , --dr DR logs -n , --noobaa Noobaa logs -c , --ceph Ceph commands and pod logs -cl , --ceph-logs Ceph daemon, kernel, journal logs, and crash reports -ns , --namespaced namespaced resources -cs , --clusterscoped clusterscoped resources -pc , --provider openshift-storage-client logs from a provider/consumer cluster (includes all the logs under operator namespace, pods, deployments, secrets, configmap, and other resources) -h , --help Print help message Note If no < -arg> is included, must-gather will collect all logs. Chapter 3. Using odf-cli command odf-cli command and its subcommands help to reduce repetitive tasks and provide better experience. You can download the odf-cli tool from the customer portal . 3.1. 
Subcommands of odf get command odf get recovery-profile Displays the recovery-profile value set for the OSD. By default, an empty value is displayed if the value is not set using the odf set recovery-profile command. After the value is set, the appropriate value is displayed. Example : odf get health Checks the health of the Ceph cluster and common configuration issues. This command checks for the following: At least three mon pods are running on different nodes Mon quorum and Ceph health details At least three OSD pods are running on different nodes The 'Running' status of all pods Placement group status At least one MGR pod is running Example : odf get dr-health In mirroring-enabled clusters, fetches the connection status of a cluster from another cluster. The cephblockpool is queried with mirroring-enabled and If not found will exit with relevant logs. Example : odf get dr-prereq Checks and fetches the status of all the prerequisites to enable Disaster Recovery on a pair of clusters. The command takes the peer cluster name as an argument and uses it to compare current cluster configuration with the peer cluster configuration. Based on the comparison results, the status of the prerequisites is shown. Example 3.2. Subcommands of odf operator command odf operator rook set Sets the provided property value in the rook-ceph-operator config configmap Example : where, ROOK_LOG_LEVEL can be DEBUG , INFO , or WARNING odf operator rook restart Restarts the Rook-Ceph operator Example : odf restore mon-quorum Restores the mon quorum when the majority of mons are not in quorum and the cluster is down. When the majority of mons are lost permanently, the quorum needs to be restored to a remaining good mon in order to bring the Ceph cluster up again. Example : odf restore deleted <crd> Restores the deleted Rook CR when there is still data left for the components, CephClusters, CephFilesystems, and CephBlockPools. Generally, when Rook CR is deleted and there is leftover data, the Rook operator does not delete the CR to ensure data is not lost and the operator does not remove the finalizer on the CR. As a result, the CR is stuck in the Deleting state and cluster health is not ensured. Upgrades are blocked too. This command helps to repair the CR without the cluster downtime. Note A warning message seeking confirmation to restore appears. After confirming, you need to enter continue to start the operator and expand to the full mon-quorum again. Example: 3.3. Configuring debug verbosity of Ceph components You can configure verbosity of Ceph components by enabling or increasing the log debugging for a specific Ceph subsystem from OpenShift Data Foundation. For information about the Ceph subsystems and the log levels that can be updated, see Ceph subsystems default logging level values . Procedure Set log level for Ceph daemons: where ceph-subsystem can be osd , mds , or mon . For example, Chapter 4. Commonly required logs for troubleshooting Some of the commonly used logs for troubleshooting OpenShift Data Foundation are listed, along with the commands to generate them. Generating logs for a specific pod: Generating logs for Ceph or OpenShift Data Foundation cluster: Important Currently, the rook-ceph-operator logs do not provide any information about the failure and this acts as a limitation in troubleshooting issues, see Enabling and disabling debug logs for rook-ceph-operator . 
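The commands for "Generating logs for a specific pod" and for generating logs for the Ceph or OpenShift Data Foundation cluster mentioned above are not reproduced in this extract. A hedged sketch of typical equivalents (the must-gather image name and tag are assumptions; use the image that matches your OpenShift Data Foundation version):

# Generating logs for a specific pod
$ oc logs <pod-name> -n openshift-storage
# Generating logs and diagnostics for the Ceph or OpenShift Data Foundation cluster
$ oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.18 --dest-dir=<directory-name>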
Generating logs for plugin pods like cephfs or rbd to detect any problem in the PVC mount of the app-pod: To generate logs for all the containers in the CSI pod: Generating logs for cephfs or rbd provisioner pods to detect problems if PVC is not in BOUND state: To generate logs for all the containers in the CSI pod: Generating OpenShift Data Foundation logs using cluster-info command: When using Local Storage Operator, generating logs can be done using cluster-info command: Check the OpenShift Data Foundation operator logs and events. To check the operator logs : <ocs-operator> To check the operator events : Get the OpenShift Data Foundation operator version and channel. Example output : Example output : Confirm that the installplan is created. Verify the image of the components post updating OpenShift Data Foundation. Check the node on which the pod of the component you want to verify the image is running. For Example : Example output: dell-r440-12.gsslab.pnq2.redhat.com is the node-name . Check the image ID. <node-name> Is the name of the node on which the pod of the component you want to verify the image is running. For Example : Take a note of the IMAGEID and map it to the Digest ID on the Rook Ceph Operator page. Additional resources Using must-gather 4.1. Adjusting verbosity level of logs The amount of space consumed by debugging logs can become a significant issue. Red Hat OpenShift Data Foundation offers a method to adjust, and therefore control, the amount of storage to be consumed by debugging logs. In order to adjust the verbosity levels of debugging logs, you can tune the log levels of the containers responsible for container storage interface (CSI) operations. In the container's yaml file, adjust the following parameters to set the logging levels: CSI_LOG_LEVEL - defaults to 5 CSI_SIDECAR_LOG_LEVEL - defaults to 1 The supported values are 0 through 5 . Use 0 for general useful logs, and 5 for trace level verbosity. Chapter 5. Overriding the cluster-wide default node selector for OpenShift Data Foundation post deployment When a cluster-wide default node selector is used for OpenShift Data Foundation, the pods generated by container storage interface (CSI) daemonsets are able to start only on the nodes that match the selector. To be able to use OpenShift Data Foundation from nodes which do not match the selector, override the cluster-wide default node selector by performing the following steps in the command line interface : Procedure Specify a blank node selector for the openshift-storage namespace. Delete the original pods generated by the DaemonSets. Chapter 6. Encryption token is deleted or expired Use this procedure to update the token if the encryption token for your key management system gets deleted or expires. Prerequisites Ensure that you have a new token with the same policy as the deleted or expired token Procedure Log in to OpenShift Container Platform Web Console. Click Workloads -> Secrets To update the ocs-kms-token used for cluster wide encryption: Set the Project to openshift-storage . Click ocs-kms-token -> Actions -> Edit Secret . Drag and drop or upload your encryption token file in the Value field. The token can either be a file or text that can be copied and pasted. Click Save . To update the ceph-csi-kms-token for a given project or namespace with encrypted persistent volumes: Select the required Project . Click ceph-csi-kms-token -> Actions -> Edit Secret . Drag and drop or upload your encryption token file in the Value field. 
The token can either be a file or text that can be copied and pasted. Click Save . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. Chapter 7. Troubleshooting alerts and errors in OpenShift Data Foundation 7.1. Resolving alerts and errors Red Hat OpenShift Data Foundation can detect and automatically resolve a number of common failure scenarios. However, some problems require administrator intervention. To know the errors currently firing, check one of the following locations: Observe -> Alerting -> Firing option Home -> Overview -> Cluster tab Storage -> Data Foundation -> Storage System -> storage system link in the pop up -> Overview -> Block and File tab Storage -> Data Foundation -> Storage System -> storage system link in the pop up -> Overview -> Object tab Copy the error displayed and search it in the following section to know its severity and resolution: Name : CephMonVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ $value }} different versions of Ceph Mon components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephOSDVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ $value }} different versions of Ceph OSD components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephClusterCriticallyFull Message : Storage cluster is critically full and needs immediate expansion Description : Storage cluster utilization has crossed 85%. Severity : Critical Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : CephClusterNearFull Message : Storage cluster is nearing full. Expansion is required. Description : Storage cluster utilization has crossed 75%. Severity : Warning Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : NooBaaBucketErrorState Message : A NooBaa Bucket Is In Error State Description : A NooBaa bucket {{ $labels.bucket_name }} is in error state for more than 6m Severity : Warning Resolution : Workaround Procedure : Finding the error code of an unhealthy bucket Name : NooBaaNamespaceResourceErrorState Message : A NooBaa Namespace Resource Is In Error State Description : A NooBaa namespace resource {{ $labels.namespace_resource_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy namespace store resource Name : NooBaaNamespaceBucketErrorState Message : A NooBaa Namespace Bucket Is In Error State Description : A NooBaa namespace bucket {{ $labels.bucket_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy bucket Name : CephMdsMissingReplicas Message : Insufficient replicas for storage metadata service. Description : `Minimum required replicas for storage metadata service not available. Might affect the working of storage cluster.` Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status.
If the issue cannot be identified, contact Red Hat support . Name : CephMgrIsAbsent Message : Storage metrics collector service not available anymore. Description : Ceph Manager has disappeared from Prometheus target discovery. Severity : Critical Resolution : Contact Red Hat support Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Once the upgrade is complete, check for alerts and operator status. If the issue persists or cannot be identified, contact Red Hat support . Name : CephNodeDown Message : Storage node {{ $labels.node }} went down Description : Storage node {{ $labels.node }} went down. Check the node immediately. Severity : Critical Resolution : Contact Red Hat support Procedure : Check which node stopped functioning and its cause. Take appropriate actions to recover the node. If node cannot be recovered: See Replacing storage nodes for Red Hat OpenShift Data Foundation Contact Red Hat support . Name : CephClusterErrorState Message : Storage cluster is in error state Description : Storage cluster is in error state for more than 10m. Severity : Critical Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephClusterWarningState Message : Storage cluster is in degraded state Description : Storage cluster is in warning state for more than 10m. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephDataRecoveryTakingTooLong Message : Data recovery is slow Description : Data recovery has been active for too long. Severity : Warning Resolution : Contact Red Hat support Name : CephOSDDiskNotResponding Message : Disk not responding Description : Disk device {{ $labels.device }} not responding, on host {{ $labels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephOSDDiskUnavailable Message : Disk not accessible Description : Disk device {{ $labels.device }} not accessible on host {{ $labels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephPGRepairTakingTooLong Message : Self heal problems detected Description : Self heal operations taking too long. Severity : Warning Resolution : Contact Red Hat support Name : CephMonHighNumberOfLeaderChanges Message : Storage Cluster has seen many leader changes recently. Description : 'Ceph Monitor "{{ $labels.job }}": instance {{ $labels.instance }} has seen {{ $value printf "%.2f" }} leader changes per minute recently.' Severity : Warning Resolution : Contact Red Hat support Name : CephMonQuorumAtRisk Message : Storage quorum at risk Description : Storage cluster quorum is low. Severity : Critical Resolution : Contact Red Hat support Name : ClusterObjectStoreState Message : Cluster Object Store is in an unhealthy state. Check Ceph cluster health . Description : Cluster Object Store is in an unhealthy state for more than 15s. Check Ceph cluster health .
Severity : Critical Resolution : Contact Red Hat support Procedure : Check the CephObjectStore CR instance. Contact Red Hat support . Name : CephOSDFlapping Message : Storage daemon osd.x has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause . Description : Storage OSD restarts more than 5 times in 5 minutes . Severity : Critical Resolution : Contact Red Hat support Name : OdfPoolMirroringImageHealth Message : Mirroring image(s) (PV) in the pool <pool-name> are in Warning state for more than a 1m. Mirroring might not work as expected. Description : Disaster recovery is failing for one or a few applications. Severity : Warning Resolution : Contact Red Hat support Name : OdfMirrorDaemonStatus Message : Mirror daemon is unhealthy . Description : Disaster recovery is failing for the entire cluster. Mirror daemon is in an unhealthy status for more than 1m. Mirroring on this cluster is not working as expected. Severity : Critical Resolution : Contact Red Hat support 7.2. Resolving cluster health issues There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health code below for more information and troubleshooting. Health code Description MON_DISK_LOW One or more Ceph Monitors are low on disk space. 7.2.1. MON_DISK_LOW This alert triggers if the available space on the file system storing the monitor database as a percentage, drops below mon_data_avail_warn (default: 15%). This may indicate that some other process or user on the system is filling up the same file system used by the monitor. It may also indicate that the monitor's database is large. Note The paths to the file system differ depending on the deployment of your mons. You can find the path to where the mon is deployed in storagecluster.yaml . Example paths: Mon deployed over PVC path: /var/lib/ceph/mon Mon deployed over hostpath: /var/lib/rook/mon In order to clear up space, view the high usage files in the file system and choose which to delete. To view the files, run: Replace <path-in-the-mon-node> with the path to the file system where mons are deployed. 7.3. Resolving cluster alerts There is a finite set of possible health alerts that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health alerts which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health alert for more information and troubleshooting. Table 7.1. Types of cluster health alerts Health alert Overview CephClusterCriticallyFull Storage cluster utilization has crossed 80%. CephClusterErrorState Storage cluster is in an error state for more than 10 minutes. CephClusterNearFull Storage cluster is nearing full capacity. Data deletion or cluster expansion is required. CephClusterReadOnly Storage cluster is read-only now and needs immediate data deletion or cluster expansion. CephClusterWarningState Storage cluster is in a warning state for more than 10 mins. CephDataRecoveryTakingTooLong Data recovery has been active for too long. 
CephMdsCacheUsageHigh Ceph metadata service (MDS) cache usage for the MDS daemon has exceeded 95% of the mds_cache_memory_limit . CephMdsCpuUsageHigh Ceph MDS CPU usage for the MDS daemon has exceeded the threshold for adequate performance. CephMdsMissingReplicas Minimum required replicas for storage metadata service not available. Might affect the working of the storage cluster. CephMgrIsAbsent Ceph Manager has disappeared from Prometheus target discovery. CephMgrIsMissingReplicas Ceph manager is missing replicas. This impacts health status reporting and will cause some of the information reported by the ceph status command to be missing or stale. In addition, the Ceph manager is responsible for a manager framework aimed at expanding the existing capabilities of Ceph. CephMonHighNumberOfLeaderChanges The Ceph monitor leader is being changed an unusual number of times. CephMonQuorumAtRisk Storage cluster quorum is low. CephMonQuorumLost The number of monitor pods in the storage cluster is not enough. CephMonVersionMismatch There are different versions of Ceph Mon components running. CephNodeDown A storage node went down. Check the node immediately. The alert should contain the node name. CephOSDCriticallyFull Utilization of back-end Object Storage Device (OSD) has crossed 80%. Free up some space immediately or expand the storage cluster or contact support. CephOSDDiskNotResponding A disk device is not responding on one of the hosts. CephOSDDiskUnavailable A disk device is not accessible on one of the hosts. CephOSDFlapping Ceph storage OSD flapping. CephOSDNearFull One of the OSD storage devices is nearing full. CephOSDSlowOps OSD requests are taking too long to process. CephOSDVersionMismatch There are different versions of Ceph OSD components running. CephPGRepairTakingTooLong Self-healing operations are taking too long. CephPoolQuotaBytesCriticallyExhausted Storage pool quota usage has crossed 90%. CephPoolQuotaBytesNearExhaustion Storage pool quota usage has crossed 70%. OSDCPULoadHigh CPU usage in the OSD container on a specific pod has exceeded 80%, potentially affecting the performance of the OSD. PersistentVolumeUsageCritical Persistent Volume Claim usage has exceeded 85% of its capacity. PersistentVolumeUsageNearFull Persistent Volume Claim usage has exceeded 75% of its capacity. 7.3.1. CephClusterCriticallyFull Meaning Storage cluster utilization has crossed 80% and will become read-only at 85%. Your Ceph cluster will become read-only once utilization crosses 85%. Free up some space or expand the storage cluster immediately. It is common to see alerts related to Object Storage Device (OSD) full or near full prior to this alert. Impact High Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information to free up some space. 7.3.2. CephClusterErrorState Meaning This alert reflects that the storage cluster is in ERROR state for an unacceptable amount of time and this impacts the storage availability. Check for other alerts that would have triggered prior to this one and troubleshoot those alerts first.
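The repeated "pod status: pending / not ready / not running" diagnosis steps in this and the following runbook sections are usually carried out with a short sequence of oc commands; the exact commands are not reproduced in this extract, so the following is a hedged sketch (pod names are placeholders, and the relevant labels depend on which component is affected):

$ oc project openshift-storage
$ oc get pods                                   # identify the problem pod
$ MYPOD=<pod_name>
$ oc get pod/${MYPOD} -o wide                   # check the node assignment
$ oc describe pod/${MYPOD}                      # resource limits, pending PVCs, probe failures, events
$ oc logs pod/${MYPOD} --all-containers         # application or image issues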
Impact Critical Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.3. CephClusterNearFull Meaning Storage cluster utilization has crossed 75% and will become read-only at 85%. Free up some space or expand the storage cluster. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.4. CephClusterReadOnly Meaning Storage cluster utilization has crossed 85% and will become read-only now. Free up some space or expand the storage cluster immediately. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.5. CephClusterWarningState Meaning This alert reflects that the storage cluster has been in a warning state for an unacceptable amount of time. While the storage operations will continue to function in this state, it is recommended to fix the errors so that the cluster does not get into an error state. Check for other alerts that might have triggered prior to this one and troubleshoot those alerts first. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.6. CephDataRecoveryTakingTooLong Meaning Data recovery is slow. Check whether all the Object Storage Devices (OSDs) are up and running. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. 
Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.7. CephMdsCacheUsageHigh Meaning When the storage metadata service (MDS) cannot keep its cache usage under the target threshold specified by mds_health_cache_threshold , or 150% of the cache limit set by mds_cache_memory_limit , the MDS sends a health alert to the monitors indicating the cache is too large. As a result, the MDS related operations become slow. Impact High Diagnosis The MDS tries to stay under a reservation of the mds_cache_memory_limit by trimming unused metadata in its cache and recalling cached items in the client caches. It is possible for the MDS to exceed this limit due to slow recall from clients as a result of multiple clients accessing the files. Mitigation Make sure you have enough memory provisioned for MDS cache. Memory resources for the MDS pods need to be updated in the ocs-storageCluster in order to increase the mds_cache_memory_limit . Run the following command to set the memory of MDS pods, for example, 16GB: OpenShift Data Foundation automatically sets mds_cache_memory_limit to half of the MDS pod memory limit. If the memory is set to 8GB using the command, then the operator sets the MDS cache memory limit to 4GB. 7.3.8. CephMdsCpuUsageHigh Meaning The storage metadata service (MDS) serves filesystem metadata. The MDS is crucial for any file creation, rename, deletion, and update operations. MDS by default is allocated two or three CPUs. This does not cause issues as long as there are not too many metadata operations. When the metadata operation load increases enough to trigger this alert, it means the default CPU allocation is unable to cope with the load. You need to increase the CPU allocation or run multiple active MDS servers. Impact High Diagnosis Click Workloads -> Pods . Select the corresponding MDS pod and click on the Metrics tab. There you will see the allocated and used CPU. By default, the alert is fired if the used CPU is 67% of allocated CPU for 6 hours. If this is the case, follow the steps in the mitigation section. Mitigation You need to either do a vertical or a horizontal scaling of CPU. For more information, see the Description and Runbook section of the alert. Use the following command to set the number of allocated CPU for MDS, for example, 8: In order to run multiple active MDS servers, use the following command: Make sure you have enough CPU provisioned for MDS depending on the load. Important Always increase the activeMetadataServers by 1 . The scaling of activeMetadataServers works only if you have more than one PV. If there is only one PV that is causing CPU load, look at increasing the CPU resource as described above. 7.3.9. CephMdsMissingReplicas Meaning Minimum required replicas for the storage metadata service (MDS) are not available. MDS is responsible for filing metadata. Degradation of the MDS service can affect how the storage cluster works (related to the CephFS storage class) and should be fixed as soon as possible.
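The exact patch commands for the MDS memory, CPU, and activeMetadataServers adjustments described in sections 7.3.7 and 7.3.8 above are not included in this extract. A hedged sketch of the storagecluster patch approach (the storagecluster name ocs-storagecluster and the field paths are assumptions; verify them against your cluster before applying):

# Raise the MDS pod memory and CPU requests/limits (mds_cache_memory_limit is derived from the memory limit)
$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
  --patch '{"spec": {"resources": {"mds": {"limits": {"cpu": "8", "memory": "16Gi"}, "requests": {"cpu": "8", "memory": "16Gi"}}}}}'
# Run an additional active MDS server
$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
  --patch '{"spec": {"managedResources": {"cephFilesystems": {"activeMetadataServers": 2}}}}'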
Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.10. CephMgrIsAbsent Meaning Not having a Ceph manager running the monitoring of the cluster. Persistent Volume Claim (PVC) creation and deletion requests should be resolved as soon as possible. Impact High Diagnosis Verify that the rook-ceph-mgr pod is failing, and restart if necessary. If the Ceph mgr pod restart fails, follow the general pod troubleshooting to resolve the issue. Verify that the Ceph mgr pod is failing: Describe the Ceph mgr pod for more details: <pod_name> Specify the rook-ceph-mgr pod name from the step. Analyze the errors related to resource issues. Delete the pod, and wait for the pod to restart: Follow these steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.11. CephMgrIsMissingReplicas Meaning To resolve this alert, you need to determine the cause of the disappearance of the Ceph manager and restart if necessary. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.12. CephMonHighNumberOfLeaderChanges Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. 
A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. Impact Medium Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Print the logs of the affected monitor pod to gather more information about the issue: <rook-ceph-mon-X-yyyy> Specify the name of the affected monitor pod. Alternatively, use the Openshift Web console to open the logs of the affected monitor pod. More information about possible causes is reflected in the log. Perform the general pod troubleshooting steps: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.13. CephMonQuorumAtRisk Meaning Multiple MONs work together to provide redundancy. Each of the MONs keeps a copy of the metadata. The cluster is deployed with 3 MONs, and requires 2 or more MONs to be up and running for quorum and for the storage operations to run. If quorum is lost, access to data is at risk. Impact High Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Perform the following for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.14. CephMonQuorumLost Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. 
Impact High Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Alternatively, perform general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.15. CephMonVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnHealthly , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrading in progress. If you determine that the `ocs-operator`is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.16. CephNodeDown Meaning A node running Ceph pods is down. While storage operations will continue to function as Ceph is designed to deal with a node failure, it is recommended to resolve the issue to minimize the risk of another node going down and affecting storage functions. Impact Medium Diagnosis List all the pods that are running and failing: Important Ensure that you meet the OpenShift Data Foundation resource requirements so that the Object Storage Device (OSD) pods are scheduled on the new node. This may take a few minutes as the Ceph cluster recovers data for the failing but now recovering OSD. To watch this recovery in action, ensure that the OSD pods are correctly placed on the new worker node. Check if the OSD pods that were previously failing are now running: If the previously failing OSD pods have not been scheduled, use the describe command and check the events for reasons the pods were not rescheduled. Describe the events for the failing OSD pod: Find the one or more failing OSD pods: In the events section look for the failure reasons, such as the resources are not being met. In addition, you can use the rook-ceph-toolbox to watch the recovery. This step is optional, but is helpful for large Ceph clusters. 
To access the toolbox, run the following command: From the rsh command prompt, run the following, and watch for "recovery" under the io section: Determine if there are failed nodes. Get the list of worker nodes, and check for the node status: Describe the node which is of the NotReady status to get more information about the failure: Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.17. CephOSDCriticallyFull Meaning One of the Object Storage Devices (OSDs) is critically full. Expand the cluster immediately. Impact High Diagnosis Deleting data to free up storage space You can delete data, and the cluster will resolve the alert through self healing processes. Important This is only applicable to OpenShift Data Foundation clusters that are near or full but not in read-only mode. Read-only mode prevents any changes that include deleting data, that is, deletion of Persistent Volume Claim (PVC), Persistent Volume (PV) or both. Expanding the storage capacity Current storage size is less than 1 TB You must first assess the ability to expand. For every 1 TB of storage added, the cluster needs to have 3 nodes each with a minimum available 2 vCPUs and 8 GiB memory. You can increase the storage capacity to 4 TB via the add-on and the cluster will resolve the alert through self healing processes. If the minimum vCPU and memory resource requirements are not met, you need to add 3 additional worker nodes to the cluster. Mitigation If your current storage size is equal to 4 TB, contact Red Hat support. Optional: Run the following command to gather the debugging information for the Ceph cluster: 7.3.18. CephOSDDiskNotResponding Meaning A disk device is not responding. Check whether all the Object Storage Devices (OSDs) are up and running. Impact Medium Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.19. CephOSDDiskUnavailable Meaning A disk device is not accessible on one of the hosts and its corresponding Object Storage Device (OSD) is marked out by the Ceph cluster. This alert is raised when a Ceph node fails to recover within 10 minutes. Impact High Diagnosis Determine the failed node Get the list of worker nodes, and check for the node status: Describe the node which is of NotReady status to get more information on the failure: 7.3.20. CephOSDFlapping Meaning A storage daemon has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause. Impact High Diagnosis Follow the steps in the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. 
Alternatively, follow the steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.21. CephOSDNearFull Meaning Utilization of back-end storage device Object Storage Device (OSD) has crossed 75% on a host. Impact High Mitigation Free up some space in the cluster, expand the storage cluster, or contact Red Hat support. For more information on scaling storage, see the Scaling storage guide . 7.3.22. CephOSDSlowOps Meaning An Object Storage Device (OSD) with slow requests is every OSD that is not able to service the I/O operations per second (IOPS) in the queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. Impact Medium Diagnosis More information about the slow requests can be obtained using the Openshift console. Access the OSD pod terminal, and run the following commands: Note The number of the OSD is seen in the pod name. For example, in rook-ceph-osd-0-5d86d4d8d4-zlqkx , <0> is the OSD. Mitigation The main causes of the OSDs having slow requests are: Problems with the underlying hardware or infrastructure, such as, disk drives, hosts, racks, or network switches. Use the Openshift monitoring console to find the alerts or errors about cluster resources. This can give you an idea about the root cause of the slow operations in the OSD. Problems with the network. These problems are usually connected with flapping OSDs. See the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide If it is a network issue, escalate to the OpenShift Data Foundation team System load. Use the Openshift console to review the metrics of the OSD pod and the node which is running the OSD. Adding or assigning more resources can be a possible solution. 7.3.23. CephOSDVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnHealthly , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrading in progress. If you determine that the `ocs-operator`is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. 7.3.24. 
CephPGRepairTakingTooLong Meaning Self-healing operations are taking too long. Impact High Diagnosis Check for inconsistent Placement Groups (PGs), and repair them. For more information, see the Red Hat Knowledgebase solution Handle Inconsistent Placement Groups in Ceph . 7.3.25. CephPoolQuotaBytesCriticallyExhausted Meaning One or more pools has reached, or is very close to reaching, its quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.26. CephPoolQuotaBytesNearExhaustion Meaning One or more pools is approaching a configured fullness threshold. One threshold that can trigger this warning condition is the mon_pool_quota_warn_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.27. OSDCPULoadHigh Meaning OSD is a critical component in Ceph storage, responsible for managing data placement and recovery. High CPU usage in the OSD container suggests increased processing demands, potentially leading to degraded storage performance. Impact High Diagnosis Navigate to the Kubernetes dashboard or equivalent. Access the Workloads section and select the relevant pod associated with the OSD alert. Click the Metrics tab to view CPU metrics for the OSD container. Verify that the CPU usage exceeds 80% over a significant period (as specified in the alert configuration). Mitigation If the OSD CPU usage is consistently high, consider taking the following steps: Evaluate the overall storage cluster performance and identify the OSDs contributing to high CPU usage. Increase the number of OSDs in the cluster by adding more new storage devices in the existing nodes or adding new nodes with new storage devices. Review the Scaling storage guide for instructions to help distribute the load and improve overall system performance. 7.3.28. PersistentVolumeUsageCritical Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to in a timely manner. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage -> PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) -> Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.3.29. PersistentVolumeUsageNearFull Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to in a timely manner. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage -> PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) -> Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space.
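As an alternative to the console steps above (not part of the original procedure), a PVC can also be expanded from the CLI, provided its storage class allows volume expansion; a hedged sketch:

$ oc patch pvc <pvc-name> -n <project-name> --type merge \
  -p '{"spec": {"resources": {"requests": {"storage": "<new-size>"}}}}'
$ oc get pvc <pvc-name> -n <project-name> -w    # wait for the new capacity to be reflected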
Click the Events tab and do one of the following: Look for events that might hint at the current state of the bucket. Click the YAML tab and look for related errors around the status and mode sections of the YAML. If the OBC is in the Pending state, the error might appear in the product logs. However, in this case, it is recommended to verify that all the variables provided are accurate. 7.5. Finding the error code of an unhealthy namespace store resource Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Namespace Store tab. Look for the namespace store resources that are not in Bound state and click on one of them. Click the Events tab and do one of the following: Look for events that might hint at the current state of the resource. Click the YAML tab and look for related errors around the status and mode sections of the YAML. 7.6. Recovering pods When the first node (say NODE1 ) goes to the NotReady state because of some issue, the hosted pods that are using PVC with ReadWriteOnce (RWO) access mode try to move to the second node (say NODE2 ) but get stuck due to a multi-attach error. In such a case, you can recover MON, OSD, and application pods by using the following steps. Procedure Power off NODE1 (from AWS or vSphere side) and ensure that NODE1 is completely down. Force delete the pods on NODE1 by using the following command: 7.7. Recovering from EBS volume detach When an OSD or MON elastic block storage (EBS) volume where the OSD disk resides is detached from the worker Amazon EC2 instance, the volume gets reattached automatically within one or two minutes. However, the OSD pod gets into a CrashLoopBackOff state. To recover and bring back the pod to the Running state, you must restart the EC2 instance. 7.8. Enabling and disabling debug logs for rook-ceph-operator Enable the debug logs for the rook-ceph-operator to obtain information about failures that help in troubleshooting issues. Procedure Enabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: DEBUG parameter in the rook-ceph-operator-config yaml file to enable the debug logs for rook-ceph-operator. Now, the rook-ceph-operator logs contain the debug information. Disabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: INFO parameter in the rook-ceph-operator-config yaml file to disable the debug logs for rook-ceph-operator. 7.9. Resolving low Ceph monitor count alert The CephMonLowNumber alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate a low Ceph monitor count when your internal mode deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains in the deployment. You can increase the Ceph monitor count to improve the availability of the cluster. Procedure In the CephMonLowNumber alert of the notification panel or Alert Center of the OpenShift Web Console, click Configure . In the Configure Ceph Monitor pop up, click Update count. The pop up shows the recommended monitor count, depending on the number of failure zones. In the Configure CephMon pop up, update the monitor count value based on the recommended value and click Save changes . 7.10. Troubleshooting unhealthy blocklisted nodes 7.10.1. ODFRBDClientBlocked Meaning This alert indicates that a RADOS Block Device (RBD) client might be blocked by Ceph on a specific node within your Kubernetes cluster.
The blocklisting occurs when the ocs_rbd_client_blocklisted metric reports a value of 1 for the node. Additionally, there are pods in a CreateContainerError state on the same node. The blocklisting can potentially result in the filesystem for the Persistent Volume Claims (PVCs) using RBD becoming read-only. It is crucial to investigate this alert to prevent any disruption to your storage cluster. Impact High Diagnosis The blocklisting of an RBD client can occur due to several factors, such as network or cluster slowness. In certain cases, the exclusive lock contention among three contending clients (workload, mirror daemon, and manager/scheduler) can lead to the blocklist. Mitigation Taint the blocklisted node: In Kubernetes, consider tainting the node that is blocklisted to trigger the eviction of pods to another node. This approach relies on the assumption that the unmounting/unmapping process progresses gracefully. Once the pods have been successfully evicted, the blocklisted node can be untainted, allowing the blocklist to be cleared. The pods can then be moved back to the untainted node. Reboot the blocklisted node: If tainting the node and evicting the pods do not resolve the blocklisting issue, a reboot of the blocklisted node can be attempted. This step may help alleviate any underlying issues causing the blocklist and restore normal functionality. Important Investigating and resolving the blocklist issue promptly is essential to avoid any further impact on the storage cluster. Chapter 8. Checking for Local Storage Operator deployments Red Hat OpenShift Data Foundation clusters with Local Storage Operator are deployed using local storage devices. To find out if your existing cluster with OpenShift Data Foundation was deployed using local storage devices, use the following procedure: Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure By checking the storage class associated with your OpenShift Data Foundation cluster's persistent volume claims (PVCs), you can tell if your cluster was deployed using local storage devices. Check the storage class associated with OpenShift Data Foundation cluster's PVCs with the following command: Check the output. For clusters with Local Storage Operators, the PVCs associated with ocs-deviceset use the storage class localblock . The output looks similar to the following: Additional Resources Deploying OpenShift Data Foundation using local storage devices on VMware Deploying OpenShift Data Foundation using local storage devices on Red Hat Virtualization Deploying OpenShift Data Foundation using local storage devices on bare metal Deploying OpenShift Data Foundation using local storage devices on IBM Power Chapter 9. Removing failed or unwanted Ceph Object Storage devices The failed or unwanted Ceph OSDs (Object Storage Devices) affects the performance of the storage infrastructure. Hence, to improve the reliability and resilience of the storage cluster, you must remove the failed or unwanted Ceph OSDs. If you have any failed or unwanted Ceph OSDs to remove: Verify the Ceph health status. For more information see: Verifying Ceph cluster is healthy . Based on the provisioning of the OSDs, remove failed or unwanted Ceph OSDs. See: Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation . Removing failed or unwanted Ceph OSDs provisioned using local storage devices . If you are using local disks, you can reuse these disks after removing the old OSDs. 9.1. 
Verifying Ceph cluster is healthy Storage health is visible on the Block and File and Object dashboards. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. 9.2. Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation Follow the steps in the procedure to remove the failed or unwanted Ceph Object Storage Devices (OSDs) in dynamically provisioned Red Hat OpenShift Data Foundation. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information, see Verifying Ceph cluster is healthy . Ensure that no alerts are firing and that no rebuilding process is in progress. Procedure Scale down the OSD deployment. Get the osd-prepare pod for the Ceph OSD to be removed. Delete the osd-prepare pod. Remove the failed OSD from the cluster. where FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get the error cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete the OSD deployment. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 9.3. Removing failed or unwanted Ceph OSDs provisioned using local storage devices You can remove failed or unwanted Ceph Object Storage Devices (OSDs) that were provisioned using local storage devices by following the steps in the procedure. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information, see Verifying Ceph cluster is healthy . Ensure that no alerts are firing and that no rebuilding process is in progress. Procedure Forcibly mark the OSD down by scaling the replicas on the OSD deployment to 0. You can skip this step if the OSD is already down due to failure. Remove the failed OSD from the cluster. where FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get the error cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete persistent volume claim (PVC) resources associated with the failed OSD. Get the PVC associated with the failed OSD. Get the persistent volume (PV) associated with the PVC. Get the failed device name. Get the prepare-pod associated with the failed OSD. Delete the osd-prepare pod before removing the associated PVC. Delete the PVC associated with the failed OSD.
Remove failed device entry from the LocalVolume custom resource (CR). Log in to the node with the failed device. Record the /dev/disk/by-id/<id> for the failed device name. Optional: In case the Local Storage Operator is used for provisioning the OSD, log in to the machine with {osd-id} and remove the device symlink. Get the OSD symlink for the failed device name. Remove the symlink. Delete the PV associated with the OSD. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 9.4. Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs If you get the error cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, run the Object Storage Device (OSD) removal job with the FORCE_OSD_REMOVAL option to move the OSD to a destroyed state. Note You must use the FORCE_OSD_REMOVAL option only if all the PGs are in active state. If not, the PGs must either complete the backfilling or be investigated further to ensure that they are active. Chapter 10. Troubleshooting and deleting remaining resources during Uninstall Occasionally, some of the custom resources managed by an operator may remain in "Terminating" status waiting on the finalizer to complete, although you have performed all the required cleanup tasks. In such an event, you need to force the removal of such resources. If you do not do so, the resources remain in the Terminating state even after you have performed all the uninstall steps. Check if the openshift-storage namespace is stuck in the Terminating state upon deletion. Output: Check for the NamespaceFinalizersRemaining and NamespaceContentRemaining messages in the STATUS section of the command output and perform the next step for each of the listed resources. Example output : Delete all the remaining resources listed in the above step. For each of the resources to be deleted, do the following: Get the object kind of the resource that needs to be removed. See the message in the above output. Example : message: Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io Here cephobjectstoreuser.ceph.rook.io is the object kind. Get the Object name corresponding to the object kind. Example : Example output: Patch the resources. Example: Output: Verify that the openshift-storage project is deleted. Output: If the issue persists, reach out to Red Hat Support . Chapter 11. Troubleshooting CephFS PVC creation in external mode If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS Persistent Volume Claim (PVC) creation in external mode. Check for a CephFS PVC stuck in Pending status. Example output : Check the output of the oc describe command to see the events for the respective PVC. The expected error message is cephfs_metadata/csi.volumes.default/csi.volume.pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx: (1) Operation not permitted) Example output: Check the settings for the <cephfs metadata pool name> (here cephfs_metadata ) and <cephfs data pool name> (here cephfs_data ). To run the command, you need jq preinstalled on the Red Hat Ceph Storage client node. Set the application type for the CephFS pool. Run the following commands on the Red Hat Ceph Storage client node : Verify if the settings are applied. Check the CephFS PVC status again.
The PVC should now be in Bound state. Example output : Chapter 12. Restoring the monitor pods in OpenShift Data Foundation Restore the monitor pods if all three of them go down and OpenShift Data Foundation is not able to recover the monitor pods automatically. Note This is a disaster recovery procedure and must be performed under the guidance of the Red Hat support team. Contact the Red Hat support team at Red Hat support . Procedure Scale down the rook-ceph-operator and ocs-operator deployments. Create a backup of all deployments in the openshift-storage namespace. Patch the Object Storage Device (OSD) deployments to remove the livenessProbe parameter, and run it with the command parameter as sleep . Copy tar to the OSDs. Note While copying the tar binary to the OSD, it is important to ensure that the tar binary matches the container image OS of the pod. Copying the binary from a different OS, such as macOS or Ubuntu, might lead to compatibility issues. Retrieve the monstore cluster map from all the OSDs. Create the recover_mon.sh script. Run the recover_mon.sh script. Patch the MON deployments, and run it with the command parameter as sleep . Edit the MON deployments. Patch the MON deployments to increase the initialDelaySeconds . Copy tar to the MON pods. Note While copying the tar binary to the MON, it is important to ensure that the tar binary matches the container image OS of the pod. Copying the binary from a different OS, such as macOS or Ubuntu, might lead to compatibility issues. Copy the previously retrieved monstore to the mon-a pod. Navigate into the MON pod and change the ownership of the retrieved monstore . Copy the keyring template file before rebuilding the mon db . Populate the keyring of all other Ceph daemons (OSD, MGR, MDS, and RGW) from their respective secrets. When getting the daemons' keyrings, use the following command: Get the OSD keys with the following script: Copy the mon keyring locally, then edit it by adding all the daemon keys captured in the earlier step, and copy it back to one of the MON pods (mon-a): As an example, the keyring file should look like the following: Note If the caps entries are not present in the OSD keys output, make sure to add caps to all the OSD outputs as mentioned in the keyring file example. Navigate into the mon-a pod, and verify that the monstore has a monmap . Navigate into the mon-a pod. Verify that the monstore has a monmap . Optional: If the monmap is missing, then create a new monmap . <mon-a-id> Is the ID of the mon-a pod. <mon-a-ip> Is the IP address of the mon-a pod. <mon-b-id> Is the ID of the mon-b pod. <mon-b-ip> Is the IP address of the mon-b pod. <mon-c-id> Is the ID of the mon-c pod. <mon-c-ip> Is the IP address of the mon-c pod. <fsid> Is the file system ID. Verify the monmap . Import the monmap . Important Use the previously created keyring file. Create a backup of the old store.db file. Copy the rebuilt store.db file to the monstore directory. After rebuilding the monstore directory, copy the store.db file from the local machine to the rest of the MON pods. <id> Is the ID of the MON pod Navigate into the rest of the MON pods and change the ownership of the copied monstore . <id> Is the ID of the MON pod Revert the patched changes.
For MON deployments: <mon-deployment.yaml> Is the MON deployment yaml file For OSD deployments: <osd-deployment.yaml> Is the OSD deployment yaml file For MGR deployments: <mgr-deployment.yaml> Is the MGR deployment yaml file Important Ensure that the MON, MGR, and OSD pods are up and running. Scale up the rook-ceph-operator and ocs-operator deployments. Verification steps Check the Ceph status to confirm that CephFS is running. Example output: Check the Multicloud Object Gateway (MCG) status. It should be active, and the backingstore and bucketclass should be in Ready state. Important If the MCG is not in the active state, and the backingstore and bucketclass are not in the Ready state, you need to restart all the MCG-related pods. For more information, see Section 12.1, "Restoring the Multicloud Object Gateway" . 12.1. Restoring the Multicloud Object Gateway If the Multicloud Object Gateway (MCG) is not in the active state, and the backingstore and bucketclass are not in the Ready state, you need to restart all the MCG-related pods, and check the MCG status to confirm that the MCG is back up and running. Procedure Restart all the pods related to the MCG. <noobaa-operator> Is the name of the MCG operator <noobaa-core> Is the name of the MCG core pod <noobaa-endpoint> Is the name of the MCG endpoint <noobaa-db> Is the name of the MCG db pod If the RADOS Object Gateway (RGW) is configured, restart the pod. <rgw-pod> Is the name of the RGW pod Note In OpenShift Container Platform 4.11, after the recovery, RBD PVC fails to get mounted on the application pods. Hence, you need to restart the node that is hosting the application pods. To get the node name that is hosting the application pod, run the following command: Chapter 13. Restoring ceph-monitor quorum in OpenShift Data Foundation In some circumstances, the ceph-mons might lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon must be healthy. The following steps remove the unhealthy mons from quorum and enable you to form a quorum again with a single mon , and then bring the quorum back to the original size. For example, if you have three mons and lose quorum, you need to remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon . Procedure Stop the rook-ceph-operator so that the mons are not failed over when you are modifying the monmap . Inject a new monmap . Warning You must inject the monmap very carefully. If run incorrectly, your cluster could be permanently destroyed. The Ceph monmap keeps track of the mon quorum. The monmap is updated to only contain the healthy mon. In this example, the healthy mon is rook-ceph-mon-b , while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c . Take a backup of the current rook-ceph-mon-b Deployment: Open the YAML file and copy the command and arguments from the mon container (see the containers list in the following example). This is needed for the monmap changes. Clean up the copied command and args fields to form a pastable command as follows: Note Make sure to remove the single quotes around the --log-stderr-prefix flag and the parentheses around the variables being passed ( ROOK_CEPH_MON_HOST , ROOK_CEPH_MON_INITIAL_MEMBERS , and ROOK_POD_IP ). Patch the rook-ceph-mon-b Deployment to stop this mon from working without deleting the mon pod.
Perform the following steps on the mon-b pod: Connect to the pod of a healthy mon and run the following commands: Set the variable. Extract the monmap to a file, by pasting the ceph mon command from the good mon deployment and adding the --extract-monmap=USD{monmap_path} flag. Review the contents of the monmap . Remove the bad mons from the monmap . In this example we remove mon0 and mon2 : Inject the modified monmap into the good mon , by pasting the ceph mon command and adding the --inject-monmap=USD{monmap_path} flag as follows: Exit the shell to continue. Edit the Rook configmaps . Edit the configmap that the operator uses to track the mons . Verify that in the data element you see three mons such as the following (or more depending on your moncount ): Delete the bad mons from the list to end up with a single good mon . For example: Save the file and exit. Now, you need to adapt a Secret which is used for the mons and other components. Set a value for the variable good_mon_id . For example: You can use the oc patch command to patch the rook-ceph-config secret and update the two key/value pairs mon_host and mon_initial_members . Note If you are using hostNetwork: true , you need to replace the mon_host var with the node IP the mon is pinned to ( nodeSelector ). This is because there is no rook-ceph-mon-* service created in that "mode". Restart the mon . You need to restart the good mon pod with the original ceph-mon command to pick up the changes. Use the oc replace command on the backup of the mon deployment YAML file: Note Option --force deletes the deployment and creates a new one. Verify the status of the cluster. The status should show one mon in quorum. If the status looks good, your cluster should be healthy again. Delete the two mon deployments that are no longer expected to be in quorum. For example: In this example the deployments to be deleted are rook-ceph-mon-a and rook-ceph-mon-c . Restart the operator. Start the rook operator again to resume monitoring the health of the cluster. Note It is safe to ignore the errors that a number of resources already exist. The operator automatically adds more mons to increase the quorum size again depending on the mon count. Chapter 14. Enabling the Red Hat OpenShift Data Foundation console plugin The Data Foundation console plugin is enabled by default. In case, this option was unchecked during OpenShift Data Foundation Operator installation, use the following instructions to enable the console plugin post-deployment either from the graphical user interface (GUI) or command-line interface. Prerequisites You have administrative access to the OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Procedure From user interface In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator. Enable the console plugin option. In the Details tab, click the pencil icon under the Console plugin . Select Enable , and click Save . From command-line interface Execute the following command to enable the console plugin option: Verification steps After the console plugin option is enabled, a pop-up with a message, Web console update is available appears on the GUI. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. Chapter 15. 
Changing resources for the OpenShift Data Foundation components When you install OpenShift Data Foundation, it comes with pre-defined resources that the OpenShift Data Foundation pods can consume. In some situations with higher I/O load, it might be required to increase these limits. To change the CPU and memory resources on the rook-ceph pods, see Section 15.1, "Changing the CPU and memory resources on the rook-ceph pods" . To tune the resources for the Multicloud Object Gateway (MCG), see Section 15.2, "Tuning the resources for the MCG" . 15.1. Changing the CPU and memory resources on the rook-ceph pods When you install OpenShift Data Foundation, it comes with pre-defined CPU and memory resources for the rook-ceph pods. You can manually increase these values according to the requirements. You can change the CPU and memory resources on the following pods: mgr mds rgw The following example illustrates how to change the CPU and memory resources on the rook-ceph pods. In this example, the existing MDS pod values of cpu and memory are increased from 1 and 4Gi to 2 and 8Gi respectively. Edit the storage cluster: <storagecluster_name> Specify the name of the storage cluster. For example: Add the following lines to the storage cluster Custom Resource (CR): Save the changes and exit the editor. Alternatively, run the oc patch command to change the CPU and memory value of the mds pod: <storagecluster_name> Specify the name of the storage cluster. For example: 15.2. Tuning the resources for the MCG The default configuration for the Multicloud Object Gateway (MCG) is optimized for low resource consumption and not performance. For more information on how to tune the resources for the MCG, see the Red Hat Knowledgebase solution Performance tuning guide for Multicloud Object Gateway (NooBaa) . Chapter 16. Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation When you deploy OpenShift Data Foundation, public IPs are created even when OpenShift is installed as a private cluster. However, you can disable the Multicloud Object Gateway (MCG) load balancer usage by using the disableLoadBalancerService variable in the storagecluster CRD. This restricts MCG from creating any public resources for private clusters and helps to disable the NooBaa service EXTERNAL-IP . Procedure Run the following command and add the disableLoadBalancerService variable in the storagecluster YAML to set the service to ClusterIP: Note To undo the changes and set the service to LoadBalancer, set the disableLoadBalancerService variable to false or remove that line completely. Chapter 17. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking In OpenShift Container Platform, when ovs-multitenant plugin is used for software-defined networking (SDN), pods from different projects cannot send packets to or receive packets from pods and services of a different project. By default, pods can not communicate between namespaces or projects because a project's pod networking is not global. To access odf-console, the OpenShift console pod in the openshift-console namespace needs to connect with the OpenShift Data Foundation odf-console in the openshift-storage namespace. This is possible only when you manually enable global pod networking. 
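Before changing the pod networking, you can confirm that the cluster is actually running the OpenShift SDN plugin in Multitenant mode. The following commands are a minimal sketch and are not part of the official procedure; the exact field layout of the network operator configuration can vary between OpenShift Container Platform versions, and the mode field may be empty when the default mode is in use.
# Check which network plugin the cluster is using (for example, OpenShiftSDN or OVNKubernetes)
oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
# If OpenShift SDN is in use, check whether the Multitenant mode is configured
oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.openshiftSDNConfig.mode}{"\n"}'
If the output shows the Multitenant mode, continue with the Issue and Resolution described next.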
Issue When the ovs-multitenant plugin is used in the OpenShift Container Platform, the odf-console plugin fails with the following message: Resolution Make the pod networking for the OpenShift Data Foundation project global: Chapter 18. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation cdi.kubevirt.io/clone-strategy=copy for any of the encrypted RBD storage classes that were previously created before updating to the OpenShift Data Foundation version 4.14 (see the example oc annotate command sketched after Chapter 19). This enables the Containerized Data Importer (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as when provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated. Chapter 19. Troubleshooting issues in provider mode 19.1. Force deletion of storage in provider clusters When a client cluster is deleted without performing the offboarding process to remove all the resources from the corresponding provider cluster, you need to perform force deletion of the corresponding storage consumer from the provider cluster. This helps to release the storage space that was claimed by the client. Caution It is recommended to use this method only in unavoidable situations such as accidental deletion of storage client clusters. Prerequisites Access to the OpenShift Data Foundation storage cluster in provider mode. Procedure Click Storage -> Storage Clients from the OpenShift console. Click the delete icon at the far right of the listed storage client cluster. The delete icon is enabled only 5 minutes after the last heartbeat of the cluster. Click Confirm .
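If you prefer to review the stale storage client from the command line before you click Confirm, the following is a minimal sketch; it assumes the provider cluster uses the default openshift-storage namespace and that the StorageConsumer custom resource (ocs.openshift.io) is available in your version, and <storage-consumer-name> is a placeholder for the consumer that corresponds to the deleted client cluster.
# List the storage consumers known to the provider cluster
oc get storageconsumers.ocs.openshift.io -n openshift-storage
# Inspect a specific storage consumer, including its status and last heartbeat, before force deleting it
oc describe storageconsumers.ocs.openshift.io <storage-consumer-name> -n openshift-storage
For the encrypted RBD storage classes described in Chapter 18, the following sketch shows one way to add the required annotation; <encrypted-rbd-storageclass> is a placeholder for the name of an encrypted RBD storage class that was created before the update to version 4.14.
# Add the clone-strategy annotation so that CDI uses host-assisted cloning for this storage class
oc annotate storageclass <encrypted-rbd-storageclass> cdi.kubevirt.io/clone-strategy=copy
# Verify that the annotation is present
oc get storageclass <encrypted-rbd-storageclass> -o yaml | grep clone-strategy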
[ "oc image mirror registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 <local-registry> /odf4/odf-must-gather-rhel9:v4.15 [--registry-config= <path-to-the-registry-config> ] [--insecure=true]", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>", "oc adm must-gather --image=<local-registry>/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ --node-name=_<node-name>_", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since=<duration>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since-time=<rfc3339-timestamp>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 -- /usr/bin/gather <-arg>", "odf get recovery-profile high_recovery_ops", "odf get health Info: Checking if at least three mon pods are running on different nodes rook-ceph-mon-a-7fb76597dc-98pxz Running openshift-storage ip-10-0-69-145.us-west-1.compute.internal rook-ceph-mon-b-885bdc59c-4vvcm Running openshift-storage ip-10-0-64-239.us-west-1.compute.internal rook-ceph-mon-c-5f59bb5dbc-8vvlg Running openshift-storage ip-10-0-30-197.us-west-1.compute.internal Info: Checking mon quorum and ceph health details Info: HEALTH_OK [...]", "odf get dr-health Info: fetching the cephblockpools with mirroring enabled Info: found \"ocs-storagecluster-cephblockpool\" cephblockpool with mirroring enabled Info: running ceph status from peer cluster Info: cluster: id: 9a2e7e55-40e1-4a79-9bfa-c3e4750c6b0f health: HEALTH_OK [...]", "odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled. odf get mon-endpoints Displays the mon endpoints odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled.", "odf operator rook set ROOK_LOG_LEVEL DEBUG configmap/rook-ceph-operator-config patched", "odf operator rook restart deployment.apps/rook-ceph-operator restarted", "odf restore mon-quorum c", "odf restore deleted cephclusters Info: Detecting which resources to restore for crd \"cephclusters\" Info: Restoring CR my-cluster Warning: The resource my-cluster was found deleted. Do you want to restore it? 
yes | no [...]", "odf set ceph log-level <ceph-subsystem1> <ceph-subsystem2> <log-level>", "odf set ceph log-level osd crush 20", "odf set ceph log-level mds crush 20", "odf set ceph log-level mon crush 20", "oc logs <pod-name> -n <namespace>", "oc logs rook-ceph-operator-<ID> -n openshift-storage", "oc logs csi-cephfsplugin-<ID> -n openshift-storage -c csi-cephfsplugin", "oc logs csi-rbdplugin-<ID> -n openshift-storage -c csi-rbdplugin", "oc logs csi-cephfsplugin-<ID> -n openshift-storage --all-containers", "oc logs csi-rbdplugin-<ID> -n openshift-storage --all-containers", "oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage -c csi-cephfsplugin", "oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage -c csi-rbdplugin", "oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage --all-containers", "oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage --all-containers", "oc cluster-info dump -n openshift-storage --output-directory=<directory-name>", "oc cluster-info dump -n openshift-local-storage --output-directory=<directory-name>", "oc logs <ocs-operator> -n openshift-storage", "oc get pods -n openshift-storage | grep -i \"ocs-operator\" | awk '{print USD1}'", "oc get events --sort-by=metadata.creationTimestamp -n openshift-storage", "oc get csv -n openshift-storage", "NAME DISPLAY VERSION REPLACES PHASE mcg-operator.v4.15.0 NooBaa Operator 4.15.0 Succeeded ocs-operator.v4.15.0 OpenShift Container Storage 4.15.0 Succeeded odf-csi-addons-operator.v4.15.0 CSI Addons 4.15.0 Succeeded odf-operator.v4.15.0 OpenShift Data Foundation 4.15.0 Succeeded", "oc get subs -n openshift-storage", "NAME PACKAGE SOURCE CHANNEL mcg-operator-stable-4.15-redhat-operators-openshift-marketplace mcg-operator redhat-operators stable-4.15 ocs-operator-stable-4.15-redhat-operators-openshift-marketplace ocs-operator redhat-operators stable-4.15 odf-csi-addons-operator odf-csi-addons-operator redhat-operators stable-4.15 odf-operator odf-operator redhat-operators stable-4.15", "oc get installplan -n openshift-storage", "oc get pods -o wide | grep <component-name>", "oc get pods -o wide | grep rook-ceph-operator", "rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 dell-r440-12.gsslab.pnq2.redhat.com <none> <none> <none> <none>", "oc debug node/<node name>", "chroot /host", "crictl images | grep <component>", "crictl images | grep rook-ceph", "oc annotate namespace openshift-storage openshift.io/node-selector=", "delete pod -l app=csi-cephfsplugin -n openshift-storage delete pod -l app=csi-rbdplugin -n openshift-storage", "du -a <path-in-the-mon-node> |sort -n -r |head -n10", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-osd", "Examine the output for a {ceph-component} that is in the pending state, not 
running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"memory\": \"16Gi\"},\"requests\": {\"memory\": \"16Gi\"}}}}}'", "patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"8\"}, \"requests\": {\"cpu\": \"8\"}}}}}'", "patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"managedResources\": {\"cephFilesystems\":{\"activeMetadataServers\": 2}}}}'", "oc project openshift-storage", "get pod | grep rook-ceph-mds", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc get pods | grep mgr", "oc describe pods/ <pod_name>", "oc get pods | grep mgr", "oc project openshift-storage", "get pod | grep rook-ceph-mgr", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-mgr", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc logs <rook-ceph-mon-X-yyyy> -n openshift-storage", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-mon", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions", "[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]", "oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep 
ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "-n openshift-storage get pods", "-n openshift-storage get pods", "-n openshift-storage get pods | grep osd", "-n openshift-storage describe pods/<osd_podname_ from_the_ previous step>", "TOOLS_POD=USD(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) rsh -n openshift-storage USDTOOLS_POD", "ceph status", "get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'", "describe node <node_name>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'", "describe node <node_name>", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "ceph daemon osd.<id> ops", "ceph daemon osd.<id> dump_historic_ops", "oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions", "[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]", "oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage", "ceph osd pool set-quota <pool> max_bytes <bytes>", "ceph osd pool set-quota <pool> max_objects <objects>", "ceph osd pool set-quota <pool> max_bytes <bytes>", "ceph osd pool set-quota <pool> max_objects <objects>", "oc delete pod <pod-name> --grace-period=0 --force", "oc edit configmap rook-ceph-operator-config", "... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: DEBUG", "oc edit configmap rook-ceph-operator-config", "... 
data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: INFO", "oc get pvc -n openshift-storage", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-d96c747b-2ab5-47e2-b07e-1079623748d8 50Gi RWO ocs-storagecluster-ceph-rbd 114s ocs-deviceset-0-0-lzfrd Bound local-pv-7e70c77c 1769Gi RWO localblock 2m10s ocs-deviceset-1-0-7rggl Bound local-pv-b19b3d48 1769Gi RWO localblock 2m10s ocs-deviceset-2-0-znhk8 Bound local-pv-e9f22cdc 1769Gi RWO localblock 2m10s", "oc scale deployment rook-ceph-osd-<osd-id> --replicas=0", "oc get deployment rook-ceph-osd-<osd-id> -oyaml | grep ceph.rook.io/pvc", "oc delete -n openshift-storage pod rook-ceph-osd-prepare-<pvc-from-above-command>-<pod-suffix>", "failed_osd_id=<osd-id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -", "oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>", "oc delete deployment rook-ceph-osd-<osd-id>", "oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>", "oc scale deployment rook-ceph-osd-<osd-id> --replicas=0", "failed_osd_id=<osd_id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -", "oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>", "oc get -n openshift-storage -o yaml deployment rook-ceph-osd-<osd-id> | grep ceph.rook.io/pvc", "oc get -n openshift-storage pvc <pvc-name>", "oc get pv <pv-name-from-above-command> -oyaml | grep path", "oc describe -n openshift-storage pvc ocs-deviceset-0-0-nvs68 | grep Mounted", "oc delete -n openshift-storage pod <osd-prepare-pod-from-above-command>", "oc delete -n openshift-storage pvc <pvc-name-from-step-a>", "oc debug node/<node_with_failed_osd>", "ls -alh /mnt/local-storage/localblock/", "oc debug node/<node_with_failed_osd>", "ls -alh /mnt/local-storage/localblock", "rm /mnt/local-storage/localblock/<failed-device-name>", "oc delete pv <pv-name>", "#oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>", "oc process -n openshift-storage ocs-osd-removal -p FORCE_OSD_REMOVAL=true -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -", "oc get project -n <namespace>", "NAME DISPLAY NAME STATUS openshift-storage Terminating", "oc get project openshift-storage -o yaml", "status: conditions: - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All resources successfully discovered reason: ResourcesDiscovered status: \"False\" type: NamespaceDeletionDiscoveryFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All legacy kube types successfully parsed reason: ParsedGroupVersions status: \"False\" type: NamespaceDeletionGroupVersionParsingFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All content successfully deleted, may be waiting on finalization reason: ContentDeleted status: \"False\" type: NamespaceDeletionContentFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some resources are remaining: cephobjectstoreusers.ceph.rook.io has 1 resource instances' reason: SomeResourcesRemain status: \"True\" type: NamespaceContentRemaining - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io in 1 resource instances' reason: SomeFinalizersRemain status: \"True\" type: NamespaceFinalizersRemaining", "oc get <Object-kind> -n <project-name>", "oc get cephobjectstoreusers.ceph.rook.io -n openshift-storage", "NAME AGE 
noobaa-ceph-objectstore-user 26h", "oc patch -n <project-name> <object-kind>/<object-name> --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "oc patch -n openshift-storage cephobjectstoreusers.ceph.rook.io/noobaa-ceph-objectstore-user --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "cephobjectstoreuser.ceph.rook.io/noobaa-ceph-objectstore-user patched", "oc get project openshift-storage", "Error from server (NotFound): namespaces \"openshift-storage\" not found", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Pending ocs-external-storagecluster-cephfs 28h [...]", "oc describe pvc ngx-fs-pxknkcix20-pod -n nginx-file", "Name: ngx-fs-pxknkcix20-pod Namespace: nginx-file StorageClass: ocs-external-storagecluster-cephfs Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: ngx-fs-oyoe047v2bn2ka42jfgg-pod-hqhzf Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 107m (x245 over 22h) openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-5f8b66cc96-hvcqp_6b7044af-c904-4795-9ce5-bf0cf63cc4a4 (combined from similar events): failed to provision volume with StorageClass \"ocs-external-storagecluster-cephfs\": rpc error: code = Internal desc = error (an error (exit status 1) occurred while running rados args: [-m 192.168.13.212:6789,192.168.13.211:6789,192.168.13.213:6789 --id csi-cephfs-provisioner --keyfile= stripped -c /etc/ceph/ceph.conf -p cephfs_metadata getomapval csi.volumes.default csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 /tmp/omap-get-186436239 --namespace=csi]) occurred, command output streams is ( error getting omap value cephfs_metadata/csi.volumes.default/csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47: (1) Operation not permitted)", "ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": {} } \"cephfs_metadata\" { \"cephfs\": {} }", "ceph osd pool application set <cephfs metadata pool name> cephfs metadata cephfs", "ceph osd pool application set <cephfs data pool name> cephfs data cephfs", "ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": { \"data\": \"cephfs\" } } \"cephfs_metadata\" { \"cephfs\": { \"metadata\": \"cephfs\" } }", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Bound pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 1Mi RWO ocs-external-storagecluster-cephfs 29h [...]", "oc scale deployment rook-ceph-operator --replicas=0 -n openshift-storage", "oc scale deployment ocs-operator --replicas=0 -n openshift-storage", "mkdir backup", "cd backup", "oc project openshift-storage", "for d in USD(oc get deployment|awk -F' ' '{print USD1}'|grep -v NAME); do echo USDd;oc get deployment USDd -o yaml > oc_get_deployment.USD{d}.yaml; done", "for i in USD(oc get deployment -l app=rook-ceph-osd -oname);do oc patch USD{i} -n openshift-storage --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' ; oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"osd\", \"command\": [\"sleep\", 
\"infinity\"], \"args\": []}]}}}}' ; done", "for i in `oc get pods -l app=rook-ceph-osd -o name | sed -e \"s/pod\\///g\"` ; do cat /usr/bin/tar | oc exec -i USD{i} -- bash -c 'cat - >/usr/bin/tar' ; oc exec -i USD{i} -- bash -c 'chmod +x /usr/bin/tar' ;done", "#!/bin/bash ms=/tmp/monstore rm -rf USDms mkdir USDms for osd_pod in USD(oc get po -l app=rook-ceph-osd -oname -n openshift-storage); do echo \"Starting with pod: USDosd_pod\" podname=USD(echo USDosd_pod|sed 's/pod\\///g') oc exec USDosd_pod -- rm -rf USDms oc exec USDosd_pod -- mkdir USDms oc cp USDms USDpodname:USDms rm -rf USDms mkdir USDms echo \"pod in loop: USDosd_pod ; done deleting local dirs\" oc exec USDosd_pod -- ceph-objectstore-tool --type bluestore --data-path /var/lib/ceph/osd/ceph-USD(oc get USDosd_pod -ojsonpath='{ .metadata.labels.ceph_daemon_id }') --op update-mon-db --no-mon-config --mon-store-path USDms echo \"Done with COT on pod: USDosd_pod\" oc cp USDpodname:USDms USDms echo \"Finished pulling COT data from pod: USDosd_pod\" done", "chmod +x recover_mon.sh", "./recover_mon.sh", "for i in USD(oc get deployment -l app=rook-ceph-mon -oname);do oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'; done", "for i in a b c ; do oc get deployment rook-ceph-mon-USD{i} -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 10000/g\" | oc replace -f - ; done", "for i in `oc get pods -l app=rook-ceph-mon -o name | sed -e \"s/pod\\///g\"` ; do cat /usr/bin/tar | oc exec -i USD{i} -- bash -c 'cat - >/usr/bin/tar' ; oc exec -i USD{i} -- bash -c 'chmod +x /usr/bin/tar' ;done", "oc cp /tmp/monstore/ USD(oc get po -l app=rook-ceph-mon,mon=a -oname |sed 's/pod\\///g'):/tmp/", "oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)", "chown -R ceph:ceph /tmp/monstore", "oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)", "cp /etc/ceph/keyring-store/keyring /tmp/keyring", "cat /tmp/keyring [mon.] key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"", "oc get secret rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-keyring -ojson | jq .data.keyring | xargs echo | base64 -d [mds.ocs-storagecluster-cephfilesystem-a] key = AQB3r8VgAtr6OhAAVhhXpNKqRTuEVdRoxG4uRA== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\"", "for i in `oc get secret | grep keyring| awk '{print USD1}'` ; do oc get secret USD{i} -ojson | jq .data.keyring | xargs echo | base64 -d ; done", "for i in `oc get pods -l app=rook-ceph-osd -o name | sed -e \"s/pod\\///g\"` ; do oc exec -i USD{i} -- bash -c 'cat /var/lib/ceph/osd/ceph-*/keyring ' ;done", "cp USD(oc get po -l app=rook-ceph-mon,mon=a -oname|sed -e \"s/pod\\///g\"):/etc/ceph/keyring-store/..data/keyring /tmp/keyring-mon-a", "vi /tmp/keyring-mon-a", "[mon.] 
key = AQCbQLRn0j9mKhAAJKWmMZ483QIpMwzx/yGSLw== caps mon = \"allow *\" [mds.ocs-storagecluster-cephfilesystem-a] key = AQBFQbRnYuB9LxAA8i1fCSAKQQsPuywZ0Jlc5Q== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\" [mds.ocs-storagecluster-cephfilesystem-b] key = AQBHQbRnwHAOEBAAv+rBpYP5W8BmC7gLfLyk1w== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\" [osd.0] key = AQAvQbRnjF0eEhAA3H0l9zvKGZZM9Up6fJajhQ== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.1] key = AQA0QbRnq4cSGxAA7JpuK1+sq8gALNmMYFUMzw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.2] key = AQA3QbRn6JvcOBAAFKruZQhlQJKUOi9oxcN6fw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [client.admin] key = AQCbQLRnSzOuLBAAK1cSgr2eIyrZV8mV28UfvQ== caps mds = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\" caps mgr = \"allow *\" [client.rgw.ocs.storagecluster.cephobjectstore.a] key = AQBTQbRny7NJLRAAPeTvK9kVg71/glbYLANGyw== caps mon = \"allow rw\" caps osd = \"allow rwx\" [mgr.a] key = AQD9QLRn8+xzDxAARqWQatoT9ruK76EpDS6iCw== caps mon = \"allow profile mgr\" caps mds = \"allow *\" caps osd = \"allow *\" [mgr.b] key = AQD9QLRnltZOIhAAexshUqdOr3G79HWYXUDGFg== caps mon = \"allow profile mgr\" caps mds = \"allow *\" caps osd = \"allow *\" [client.crash] key = AQD7QLRn6DDzCBAAEzhXRzGQUBUNTzC3nHntFQ== caps mon = \"allow profile crash\" caps mgr = \"allow rw\" [client.ceph-exporter] key = AQD7QLRntHzkGxAApQTkMVzcTiZn7jZbwK99SQ== caps mon = \"allow profile ceph-exporter\" caps mgr = \"allow r\" caps osd = \"allow r\" caps mds = \"allow r\"", "cp /tmp/keyring-mon-a USD(oc get po -l app=rook-ceph-mon,mon=a -oname|sed -e \"s/pod\\///g\"):/tmp/keyring", "oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)", "ceph-monstore-tool /tmp/monstore get monmap -- --out /tmp/monmap", "monmaptool /tmp/monmap --print", "monmaptool --create --add <mon-a-id> <mon-a-ip> --add <mon-b-id> <mon-b-ip> --add <mon-c-id> <mon-c-ip> --enable-all-features --clobber /root/monmap --fsid <fsid>", "monmaptool /root/monmap --print", "ceph-monstore-tool /tmp/monstore rebuild -- --keyring /tmp/keyring --monmap /root/monmap", "chown -R ceph:ceph /tmp/monstore", "mv /var/lib/ceph/mon/ceph-a/store.db /var/lib/ceph/mon/ceph-a/store.db.corrupted", "mv /var/lib/ceph/mon/ceph-b/store.db /var/lib/ceph/mon/ceph-b/store.db.corrupted", "mv /var/lib/ceph/mon/ceph-c/store.db /var/lib/ceph/mon/ceph-c/store.db.corrupted", "mv /tmp/monstore/store.db /var/lib/ceph/mon/ceph-a/store.db", "chown -R ceph:ceph /var/lib/ceph/mon/ceph-a/store.db", "oc cp USD(oc get po -l app=rook-ceph-mon,mon=a -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph-a/store.db /tmp/store.db", "oc cp /tmp/store.db USD(oc get po -l app=rook-ceph-mon,mon=<id> -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph- <id>", "oc rsh USD(oc get po -l app=rook-ceph-mon,mon= <id> -oname)", "chown -R ceph:ceph /var/lib/ceph/mon/ceph- <id> /store.db", "oc replace --force -f <mon-deployment.yaml>", "oc replace --force -f <osd-deployment.yaml>", "oc replace --force -f <mgr-deployment.yaml>", "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1", "oc -n openshift-storage scale deployment ocs-operator --replicas=1", "ceph -s", "cluster: id: f111402f-84d1-4e06-9fdb-c27607676e55 health: HEALTH_ERR 1 filesystem is offline 1 filesystem is online with fewer MDS than max_mds 3 daemons have recently crashed services: mon: 3 daemons, 
quorum b,c,a (age 15m) mgr: a(active, since 14m) mds: ocs-storagecluster-cephfilesystem:0 osd: 3 osds: 3 up (since 15m), 3 in (since 2h) data: pools: 3 pools, 96 pgs objects: 500 objects, 1.1 GiB usage: 5.5 GiB used, 295 GiB / 300 GiB avail pgs: 96 active+clean", "noobaa status -n openshift-storage", "oc delete pods <noobaa-operator> -n openshift-storage", "oc delete pods <noobaa-core> -n openshift-storage", "oc delete pods <noobaa-endpoint> -n openshift-storage", "oc delete pods <noobaa-db> -n openshift-storage", "oc delete pods <rgw-pod> -n openshift-storage", "oc get pods <application-pod> -n <namespace> -o yaml | grep nodeName nodeName: node_name", "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=0", "oc -n openshift-storage get deployment rook-ceph-mon-b -o yaml > rook-ceph-mon-b-deployment.yaml", "[...] containers: - args: - --fsid=41a537f2-f282-428e-989f-a9e07be32e47 - --keyring=/etc/ceph/keyring-store/keyring - --log-to-stderr=true - --err-to-stderr=true - --mon-cluster-log-to-stderr=true - '--log-stderr-prefix=debug ' - --default-log-to-file=false - --default-mon-cluster-log-to-file=false - --mon-host=USD(ROOK_CEPH_MON_HOST) - --mon-initial-members=USD(ROOK_CEPH_MON_INITIAL_MEMBERS) - --id=b - --setuser=ceph - --setgroup=ceph - --foreground - --public-addr=10.100.13.242 - --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db - --public-bind-addr=USD(ROOK_POD_IP) command: - ceph-mon [...]", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP", "oc -n openshift-storage patch deployment rook-ceph-mon-b --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'", "oc -n openshift-storage exec -it <mon-pod> bash", "monmap_path=/tmp/monmap", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --extract-monmap=USD{monmap_path}", "monmaptool --print /tmp/monmap", "monmaptool USD{monmap_path} --rm <bad_mon>", "monmaptool USD{monmap_path} --rm a monmaptool USD{monmap_path} --rm c", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 
--setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --inject-monmap=USD{monmap_path}", "oc -n openshift-storage edit configmap rook-ceph-mon-endpoints", "data: a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789", "data: b=10.100.13.242:6789", "good_mon_id=b", "mon_host=USD(oc -n openshift-storage get svc rook-ceph-mon-b -o jsonpath='{.spec.clusterIP}') oc -n openshift-storage patch secret rook-ceph-config -p '{\"stringData\": {\"mon_host\": \"[v2:'\"USD{mon_host}\"':3300,v1:'\"USD{mon_host}\"':6789]\", \"mon_initial_members\": \"'\"USD{good_mon_id}\"'\"}}'", "oc replace --force -f rook-ceph-mon-b-deployment.yaml", "oc delete deploy <rook-ceph-mon-1> oc delete deploy <rook-ceph-mon-2>", "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1", "oc patch console.operator cluster -n openshift-storage --type json -p '[{\"op\": \"add\", \"path\": \"/spec/plugins\", \"value\": [\"odf-console\"]}]'", "oc edit storagecluster -n openshift-storage <storagecluster_name>", "oc edit storagecluster -n openshift-storage ocs-storagecluster", "spec: resources: mds: limits: cpu: 2 memory: 8Gi requests: cpu: 2 memory: 8Gi", "oc patch -n openshift-storage storagecluster <storagecluster_name> --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}}'", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch ' {\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}} '", "oc edit storagecluster -n openshift-storage <storagecluster_name> [...] spec: arbiter: {} encryption: kms: {} externalStorage: {} managedResources: cephBlockPools: {} cephCluster: {} cephConfig: {} cephDashboard: {} cephFilesystems: {} cephNonResilientPools: {} cephObjectStoreUsers: {} cephObjectStores: {} cephRBDMirror: {} cephToolbox: {} mirroring: {} multiCloudGateway: disableLoadBalancerService: true <--------------- Add this endpoints: [...]", "GET request for \"odf-console\" plugin failed: Get \"https://odf-console-service.openshift-storage.svc.cluster.local:9001/locales/en/plugin__odf-console.json\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)", "oc adm pod-network make-projects-global openshift-storage" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/troubleshooting_openshift_data_foundation/index
Chapter 3. Build [build.openshift.io/v1]
Chapter 3. Build [build.openshift.io/v1] Description Build encapsulates the inputs needed to produce a new deployable image, as well as the status of the execution and a reference to the Pod which executed the build. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object BuildSpec has the information to represent a build and also additional information about a build status object BuildStatus contains the status of a build 3.1.1. .spec Description BuildSpec has the information to represent a build and also additional information about a build Type object Required strategy Property Type Description completionDeadlineSeconds integer completionDeadlineSeconds is an optional duration in seconds, counted from the time when a build pod gets scheduled in the system, that the build may be active on a node before the system actively tries to terminate the build; value must be positive integer mountTrustedCA boolean mountTrustedCA bind mounts the cluster's trusted certificate authorities, as defined in the cluster's proxy configuration, into the build. This lets processes within a build trust components signed by custom PKI certificate authorities, such as private artifact repositories and HTTPS proxies. When this field is set to true, the contents of /etc/pki/ca-trust within the build are managed by the build container, and any changes to this directory or its subdirectories (for example - within a Dockerfile RUN instruction) are not persisted in the build's output image. nodeSelector object (string) nodeSelector is a selector which must be true for the build pod to fit on a node If nil, it can be overridden by default build nodeselector values for the cluster. If set to an empty map or a map with any values, default build nodeselector values are ignored. output object BuildOutput is input to a build strategy and describes the container image that the strategy should produce. postCommit object A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. As an example, all forms below are equivalent and will execute rake test --verbose . 1. 
Shell script: "postCommit": { "script": "rake test --verbose", } The above is a convenient form which is equivalent to: "postCommit": { "command": ["/bin/sh", "-ic"], "args": ["rake test --verbose"] } 2. A command as the image entrypoint: "postCommit": { "commit": ["rake", "test", "--verbose"] } Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint . 3. Pass arguments to the default entrypoint: "postCommit": { "args": ["rake", "test", "--verbose"] } This form is only useful if the image entrypoint can handle arguments. 4. Shell script with arguments: "postCommit": { "script": "rake test USD1", "args": ["--verbose"] } This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be "/bin/sh" and USD1, USD2, etc, are the positional arguments from Args. 5. Command with arguments: "postCommit": { "command": ["rake", "test"], "args": ["--verbose"] } This form is equivalent to appending the arguments to the Command slice. It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. resources ResourceRequirements resources computes resource requirements to execute the build. revision object SourceRevision is the revision or commit information from the source for the build serviceAccount string serviceAccount is the name of the ServiceAccount to use to run the pod created by this build. The pod will be allowed to use secrets referenced by the ServiceAccount source object BuildSource is the SCM used for the build. strategy object BuildStrategy contains the details of how to perform a build. triggeredBy array triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. triggeredBy[] object BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. 3.1.2. .spec.output Description BuildOutput is input to a build strategy and describes the container image that the strategy should produce. Type object Property Type Description imageLabels array imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. imageLabels[] object ImageLabel represents a label applied to the resulting image. pushSecret LocalObjectReference PushSecret is the name of a Secret that would be used for setting up the authentication for executing the Docker push to authentication enabled Docker Registry (or Docker Hub). to ObjectReference to defines an optional location to push the output of this build to. Kind must be one of 'ImageStreamTag' or 'DockerImage'. This value will be used to look up a container image repository to push to. In the case of an ImageStreamTag, the ImageStreamTag will be looked for in the namespace of the build unless Namespace is specified. 3.1.3. .spec.output.imageLabels Description imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. Type array 3.1.4. .spec.output.imageLabels[] Description ImageLabel represents a label applied to the resulting image. 
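For orientation only, the following is a minimal sketch of a Build manifest that ties the spec, strategy, output, and imageLabels fields described in this section together. The namespace, Git repository, builder image, output tag, and label values are invented placeholders rather than values defined by this API reference, and in practice a Build object of this shape is usually generated for you from a BuildConfig rather than written by hand.

apiVersion: build.openshift.io/v1
kind: Build
metadata:
  name: example-build            # hypothetical name
  namespace: my-project          # hypothetical namespace
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/app.git    # hypothetical repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest                       # hypothetical builder image
  output:
    to:
      kind: ImageStreamTag
      name: app:latest                            # hypothetical output tag
    imageLabels:
    - name: io.openshift.example                  # hypothetical label name
      value: example-value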
Type object Required name Property Type Description name string name defines the name of the label. It must have non-zero length. value string value defines the literal value of the label. 3.1.5. .spec.postCommit Description A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. As an example, all forms below are equivalent and will execute rake test --verbose . Shell script: A command as the image entrypoint: Pass arguments to the default entrypoint: Shell script with arguments: Command with arguments: It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. Type object Property Type Description args array (string) args is a list of arguments that are provided to either Command, Script or the container image's default entrypoint. The arguments are placed immediately after the command to be run. command array (string) command is the command to run. It may not be specified with Script. This might be needed if the image doesn't have /bin/sh , or if you do not want to use a shell. In all other cases, using Script might be more convenient. script string script is a shell script to be run with /bin/sh -ic . It may not be specified with Command. Use Script when a shell script is appropriate to execute the post build hook, for example for running unit tests with rake test . If you need control over the image entrypoint, or if the image does not have /bin/sh , use Command and/or Args. The -i flag is needed to support CentOS and RHEL images that use Software Collections (SCL), in order to have the appropriate collections enabled in the shell. E.g., in the Ruby image, this is necessary to make ruby , bundle and other binaries available in the PATH. 3.1.6. .spec.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.7. .spec.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.8. .spec.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.9. 
.spec.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.10. .spec.source Description BuildSource is the SCM used for the build. Type object Property Type Description binary object BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. configMaps array configMaps represents a list of configMaps and their destinations that will be used for the build. configMaps[] object ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. contextDir string contextDir specifies the sub-directory where the source code for the application exists. This allows to have buildable sources in directory other than root of repository. dockerfile string dockerfile is the raw contents of a Dockerfile which should be built. When this option is specified, the FROM may be modified based on your strategy base image and additional ENV stanzas from your strategy environment will be added after the FROM, but before the rest of your Dockerfile stanzas. The Dockerfile source type may be used with other options like git - in those cases the Git repo will have any innate Dockerfile replaced in the context dir. git object GitBuildSource defines the parameters of a Git SCM images array images describes a set of images to be used to provide source for the build images[] object ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). secrets array secrets represents a list of secrets and their destinations that will be used only for the build. secrets[] object SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. sourceSecret LocalObjectReference sourceSecret is the name of a Secret that would be used for setting up the authentication for cloning private repository. The secret contains valid credentials for remote repository, where the data's key represent the authentication method to be used and value is the base64 encoded credentials. Supported auth methods are: ssh-privatekey. type string type of build input to accept 3.1.11. .spec.source.binary Description BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. Type object Property Type Description asFile string asFile indicates that the provided binary input should be considered a single file within the build input. 
For example, specifying "webapp.war" would place the provided binary as /webapp.war for the builder. If left empty, the Docker and Source build strategies assume this file is a zip, tar, or tar.gz file and extract it as the source. The custom strategy receives this binary as standard input. This filename may not contain slashes or be '..' or '.'. 3.1.12. .spec.source.configMaps Description configMaps represents a list of configMaps and their destinations that will be used for the build. Type array 3.1.13. .spec.source.configMaps[] Description ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. Type object Required configMap Property Type Description configMap LocalObjectReference configMap is a reference to an existing configmap that you want to use in your build. destinationDir string destinationDir is the directory where the files from the configmap should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. 3.1.14. .spec.source.git Description GitBuildSource defines the parameters of a Git SCM Type object Required uri Property Type Description httpProxy string httpProxy is a proxy used to reach the git repository over http httpsProxy string httpsProxy is a proxy used to reach the git repository over https noProxy string noProxy is the list of domains for which the proxy should not be used ref string ref is the branch/tag/ref to build. uri string uri points to the source that will be built. The structure of the source will depend on the type of build to run 3.1.15. .spec.source.images Description images describes a set of images to be used to provide source for the build Type array 3.1.16. .spec.source.images[] Description ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). Type object Required from Property Type Description as array (string) A list of image names that this source will be used in place of during a multi-stage container image build. For instance, a Dockerfile that uses "COPY --from=nginx:latest" will first check for an image source that has "nginx:latest" in this field before attempting to pull directly. If the Dockerfile does not reference an image source it is ignored. This field and paths may both be set, in which case the contents will be used twice. from ObjectReference from is a reference to an ImageStreamTag, ImageStreamImage, or DockerImage to copy source from. paths array paths is a list of source and destination paths to copy from the image. 
This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. paths[] object ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. pullSecret LocalObjectReference pullSecret is a reference to a secret to be used to pull the image from a registry If the image is pulled from the OpenShift registry, this field does not need to be set. 3.1.17. .spec.source.images[].paths Description paths is a list of source and destination paths to copy from the image. This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. Type array 3.1.18. .spec.source.images[].paths[] Description ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. Type object Required sourcePath destinationDir Property Type Description destinationDir string destinationDir is the relative directory within the build directory where files copied from the image are placed. sourcePath string sourcePath is the absolute path of the file or directory inside the image to copy to the build directory. If the source path ends in /. then the content of the directory will be copied, but the directory itself will not be created at the destination. 3.1.19. .spec.source.secrets Description secrets represents a list of secrets and their destinations that will be used only for the build. Type array 3.1.20. .spec.source.secrets[] Description SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. Type object Required secret Property Type Description destinationDir string destinationDir is the directory where the files from the secret should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. Later, when the script finishes, all files injected will be truncated to zero length. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. secret LocalObjectReference secret is a reference to an existing secret that you want to use in your build. 3.1.21. .spec.strategy Description BuildStrategy contains the details of how to perform a build. Type object Property Type Description customStrategy object CustomBuildStrategy defines input parameters specific to Custom build. dockerStrategy object DockerBuildStrategy defines input parameters specific to container image build. jenkinsPipelineStrategy object JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines sourceStrategy object SourceBuildStrategy defines input parameters specific to an Source build. type string type is the kind of build strategy. 3.1.22. .spec.strategy.customStrategy Description CustomBuildStrategy defines input parameters specific to Custom build. Type object Required from Property Type Description buildAPIVersion string buildAPIVersion is the requested API version for the Build object serialized and passed to the custom builder env array (EnvVar) env contains additional environment variables you want to pass into a builder container. 
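As a hedged illustration of how a strategy stanza is declared (Custom being one of the strategy types listed under .spec.strategy), the snippet below sketches a custom strategy; the builder image and environment variable are hypothetical placeholders, not values taken from this reference.

strategy:
  type: Custom
  customStrategy:
    from:
      kind: DockerImage
      name: registry.example.com/custom-builder:latest   # hypothetical builder image
    env:
    - name: EXAMPLE_FLAG                                  # hypothetical variable
      value: "true"
    exposeDockerSocket: false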
exposeDockerSocket boolean exposeDockerSocket will allow running Docker commands (and build container images) from inside the container. forcePull boolean forcePull describes if the controller should configure the build pod to always pull the images for the builder or only pull if it is not present locally from ObjectReference from is a reference to a DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries secrets array secrets is a list of additional secrets that will be included in the build pod secrets[] object SecretSpec specifies a secret to be included in a build pod and its corresponding mount point 3.1.23. .spec.strategy.customStrategy.secrets Description secrets is a list of additional secrets that will be included in the build pod Type array 3.1.24. .spec.strategy.customStrategy.secrets[] Description SecretSpec specifies a secret to be included in a build pod and its corresponding mount point Type object Required secretSource mountPath Property Type Description mountPath string mountPath is the path at which to mount the secret secretSource LocalObjectReference secretSource is a reference to the secret 3.1.25. .spec.strategy.dockerStrategy Description DockerBuildStrategy defines input parameters specific to container image build. Type object Property Type Description buildArgs array (EnvVar) buildArgs contains build arguments that will be resolved in the Dockerfile. See https://docs.docker.com/engine/reference/builder/#/arg for more details. NOTE: Only the 'name' and 'value' fields are supported. Any settings on the 'valueFrom' field are ignored. dockerfilePath string dockerfilePath is the path of the Dockerfile that will be used to build the container image, relative to the root of the context (contextDir). Defaults to Dockerfile if unset. env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from the registry prior to building. from ObjectReference from is a reference to a DockerImage, ImageStreamTag, or ImageStreamImage which overrides the FROM image in the Dockerfile for the build. If the Dockerfile uses multi-stage builds, this will replace the image in the last FROM directive of the file. imageOptimizationPolicy string imageOptimizationPolicy describes what optimizations the system can use when building images to reduce the final size or time spent building the image. The default policy is 'None' which means the final build image will be equivalent to an image created by the container image build API. The experimental policy 'SkipLayers' will avoid committing new layers in between each image step, and will fail if the Dockerfile cannot provide compatibility with the 'None' policy. An additional experimental policy 'SkipLayersAndWarn' is the same as 'SkipLayers' but simply warns if compatibility cannot be preserved.
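To make the dockerStrategy fields above concrete, here is a hedged sketch of a Docker strategy stanza; the Dockerfile path, build argument, and image names are illustrative assumptions only.

strategy:
  type: Docker
  dockerStrategy:
    dockerfilePath: Dockerfile.build            # hypothetical path relative to contextDir
    buildArgs:
    - name: HTTP_PROXY                          # hypothetical build argument (name/value only)
      value: http://proxy.example.com:3128
    forcePull: true
    from:
      kind: ImageStreamTag
      name: ubi9:latest                         # hypothetical override of the last FROM image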
noCache boolean noCache if set to true indicates that the container image build must be executed with the --no-cache=true flag pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 3.1.26. .spec.strategy.dockerStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.27. .spec.strategy.dockerStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 3.1.28. .spec.strategy.dockerStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 3.1.29. .spec.strategy.dockerStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 3.1.30. .spec.strategy.dockerStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. 
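The build volume fields described here can be combined as in the following hedged sketch; the volume name, secret name, and mount path are assumptions made up for illustration.

dockerStrategy:
  volumes:
  - name: example-certs                  # hypothetical volume name
    source:
      type: Secret
      secret:
        secretName: internal-ca          # hypothetical secret providing the content
    mounts:
    - destinationPath: /opt/example/certs  # mounted read only into the buildah runtime environment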
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 3.1.31. .spec.strategy.jenkinsPipelineStrategy Description JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines Type object Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a build pipeline. jenkinsfile string Jenkinsfile defines the optional raw contents of a Jenkinsfile which defines a Jenkins pipeline build. jenkinsfilePath string JenkinsfilePath is the optional path of the Jenkinsfile that will be used to configure the pipeline relative to the root of the context (contextDir). If both JenkinsfilePath & Jenkinsfile are both not specified, this defaults to Jenkinsfile in the root of the specified contextDir. 3.1.32. .spec.strategy.sourceStrategy Description SourceBuildStrategy defines input parameters specific to an Source build. Type object Required from Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled incremental boolean incremental flag forces the Source build to do incremental builds if true. pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries scripts string scripts is the location of Source scripts volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 3.1.33. .spec.strategy.sourceStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.34. .spec.strategy.sourceStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 3.1.35. .spec.strategy.sourceStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 3.1.36. .spec.strategy.sourceStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 3.1.37. .spec.strategy.sourceStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 3.1.38. .spec.triggeredBy Description triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. Type array 3.1.39. .spec.triggeredBy[] Description BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. Type object Property Type Description bitbucketWebHook object BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. genericWebHook object GenericWebHookCause holds information about a generic WebHook that triggered a build. githubWebHook object GitHubWebHookCause has information about a GitHub webhook that triggered a build. gitlabWebHook object GitLabWebHookCause has information about a GitLab webhook that triggered a build. imageChangeBuild object ImageChangeCause contains information about the image that triggered a build message string message is used to store a human readable message for why the build was triggered. E.g.: "Manually triggered by user", "Configuration change",etc. 3.1.40. .spec.triggeredBy[].bitbucketWebHook Description BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 3.1.41. 
.spec.triggeredBy[].bitbucketWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.42. .spec.triggeredBy[].bitbucketWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.43. .spec.triggeredBy[].bitbucketWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.44. .spec.triggeredBy[].bitbucketWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.45. .spec.triggeredBy[].genericWebHook Description GenericWebHookCause holds information about a generic WebHook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 3.1.46. .spec.triggeredBy[].genericWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.47. .spec.triggeredBy[].genericWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.48. .spec.triggeredBy[].genericWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.49. .spec.triggeredBy[].genericWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.50. .spec.triggeredBy[].githubWebHook Description GitHubWebHookCause has information about a GitHub webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 3.1.51. 
.spec.triggeredBy[].githubWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.52. .spec.triggeredBy[].githubWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.53. .spec.triggeredBy[].githubWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.54. .spec.triggeredBy[].githubWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.55. .spec.triggeredBy[].gitlabWebHook Description GitLabWebHookCause has information about a GitLab webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 3.1.56. .spec.triggeredBy[].gitlabWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 3.1.57. .spec.triggeredBy[].gitlabWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 3.1.58. .spec.triggeredBy[].gitlabWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.59. .spec.triggeredBy[].gitlabWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 3.1.60. .spec.triggeredBy[].imageChangeBuild Description ImageChangeCause contains information about the image that triggered a build Type object Property Type Description fromRef ObjectReference fromRef contains detailed information about an image that triggered a build. imageID string imageID is the ID of the image that triggered a new build. 3.1.61. 
.status Description BuildStatus contains the status of a build Type object Required phase Property Type Description cancelled boolean cancelled describes if a cancel event was triggered for the build. completionTimestamp Time completionTimestamp is a timestamp representing the server time when this Build was finished, whether that build failed or succeeded. It reflects the time at which the Pod running the Build terminated. It is represented in RFC3339 form and is in UTC. conditions array Conditions represents the latest available observations of a build's current state. conditions[] object BuildCondition describes the state of a build at a certain point. config ObjectReference config is an ObjectReference to the BuildConfig this Build is based on. duration integer duration contains time.Duration object describing build time. logSnippet string logSnippet is the last few lines of the build log. This value is only set for builds that failed. message string message is a human-readable message indicating details about why the build has this status. output object BuildStatusOutput contains the status of the built image. outputDockerImageReference string outputDockerImageReference contains a reference to the container image that will be built by this build. Its value is computed from Build.Spec.Output.To, and should include the registry address, so that it can be used to push and pull the image. phase string phase is the point in the build lifecycle. Possible values are "New", "Pending", "Running", "Complete", "Failed", "Error", and "Cancelled". reason string reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI. stages array stages contains details about each stage that occurs during the build including start time, duration (in milliseconds), and the steps that occured within each stage. stages[] object StageInfo contains details about a build stage. startTimestamp Time startTimestamp is a timestamp representing the server time when this Build started running in a Pod. It is represented in RFC3339 form and is in UTC. 3.1.62. .status.conditions Description Conditions represents the latest available observations of a build's current state. Type array 3.1.63. .status.conditions[] Description BuildCondition describes the state of a build at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of build condition. 3.1.64. .status.output Description BuildStatusOutput contains the status of the built image. Type object Property Type Description to object BuildStatusOutputTo describes the status of the built image with regards to image registry to which it was supposed to be pushed. 3.1.65. .status.output.to Description BuildStatusOutputTo describes the status of the built image with regards to image registry to which it was supposed to be pushed. Type object Property Type Description imageDigest string imageDigest is the digest of the built container image. The digest uniquely identifies the image in the registry to which it was pushed. Please note that this field may not always be set even if the push completes successfully - e.g. 
when the registry returns no digest or returns it in a format that the builder doesn't understand. 3.1.66. .status.stages Description stages contains details about each stage that occurs during the build including start time, duration (in milliseconds), and the steps that occurred within each stage. Type array 3.1.67. .status.stages[] Description StageInfo contains details about a build stage. Type object Property Type Description durationMilliseconds integer durationMilliseconds identifies how long the stage took to complete in milliseconds. Note: the duration of a stage can exceed the sum of the duration of the steps within the stage as not all actions are accounted for in explicit build steps. name string name is a unique identifier for each build stage that occurs. startTime Time startTime is a timestamp representing the server time when this Stage started. It is represented in RFC3339 form and is in UTC. steps array steps contains details about each step that occurs during a build stage including start time and duration in milliseconds. steps[] object StepInfo contains details about a build step. 3.1.68. .status.stages[].steps Description steps contains details about each step that occurs during a build stage including start time and duration in milliseconds. Type array 3.1.69. .status.stages[].steps[] Description StepInfo contains details about a build step. Type object Property Type Description durationMilliseconds integer durationMilliseconds identifies how long the step took to complete in milliseconds. name string name is a unique identifier for each build step. startTime Time startTime is a timestamp representing the server time when this Step started. It is represented in RFC3339 form and is in UTC. 3.2. API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/builds GET : list or watch objects of kind Build /apis/build.openshift.io/v1/watch/builds GET : watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/builds DELETE : delete collection of Build GET : list or watch objects of kind Build POST : create a Build /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds GET : watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name} DELETE : delete a Build GET : read the specified Build PATCH : partially update the specified Build PUT : replace the specified Build /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds/{name} GET : watch changes to an object of kind Build. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/details PUT : replace details of the specified Build /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks POST : connect POST requests to webhooks of BuildConfig /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks/{path} POST : connect POST requests to webhooks of BuildConfig 3.2.1. /apis/build.openshift.io/v1/builds Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion.
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Build Table 3.2. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty 3.2.2. /apis/build.openshift.io/v1/watch/builds Table 3.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. Table 3.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/build.openshift.io/v1/namespaces/{namespace}/builds Table 3.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Build Table 3.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. Table 3.8. Body parameters Parameter Type Description body DeleteOptions schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Build Table 3.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty HTTP method POST Description create a Build Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body Build schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty 3.2.4. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds Table 3.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Build. deprecated: use the 'watch' parameter with a list operation instead. Table 3.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name} Table 3.18. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects Table 3.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Build Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.21. Body parameters Parameter Type Description body DeleteOptions schema Table 3.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Build Table 3.23. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Build Table 3.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.25. Body parameters Parameter Type Description body Patch schema Table 3.26. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Build Table 3.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.28. Body parameters Parameter Type Description body Build schema Table 3.29. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 3.2.6. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/builds/{name} Table 3.30. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects Table 3.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Build. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.7. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/details Table 3.33. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects Table 3.34. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method PUT Description replace details of the specified Build Table 3.35. Body parameters Parameter Type Description body Build schema Table 3.36. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 3.2.8. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks Table 3.37. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects Table 3.38. Global query parameters Parameter Type Description path string Path is the URL path to use for the current proxy request to pod. HTTP method POST Description connect POST requests to webhooks of BuildConfig Table 3.39. HTTP responses HTTP code Reponse body 200 - OK string 401 - Unauthorized Empty 3.2.9. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/webhooks/{path} Table 3.40. Global path parameters Parameter Type Description name string name of the Build namespace string object name and auth scope, such as for teams and projects path string path to the resource Table 3.41. Global query parameters Parameter Type Description path string Path is the URL path to use for the current proxy request to pod. HTTP method POST Description connect POST requests to webhooks of BuildConfig Table 3.42. HTTP responses HTTP code Reponse body 200 - OK string 401 - Unauthorized Empty
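The list, watch, and raw REST operations documented above can be exercised from the command line. The following is a minimal sketch rather than part of this reference: the my-project namespace and the app=myapp label are illustrative assumptions, and the client flag shown drives the limit/continue pagination described in the tables.
oc get builds -n my-project                                  # list Build objects in one namespace
oc get builds -n my-project -l app=myapp --chunk-size=50     # labelSelector plus client-driven limit/continue paging
oc get builds -n my-project --watch                          # stream add, update, and remove notifications
oc get --raw "/apis/build.openshift.io/v1/namespaces/my-project/builds?limit=50"   # raw call to the list endpoint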
[ "\"postCommit\": { \"script\": \"rake test --verbose\" }", "The above is a convenient form which is equivalent to:", "\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }", "\"postCommit\": { \"command\": [\"rake\", \"test\", \"--verbose\"] }", "Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.", "\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }", "This form is only useful if the image entrypoint can handle arguments.", "\"postCommit\": { \"script\": \"rake test $1\", \"args\": [\"--verbose\"] }", "This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, $0 will be \"/bin/sh\" and $1, $2, etc, are the positional arguments from Args.", "\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }", "This form is equivalent to appending the arguments to the Command slice." ]
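For reference, the hook forms listed above do not have to be written as raw JSON; a hedged command-line sketch follows, where the BuildConfig name ruby-sample-build is an assumption.
oc set build-hook bc/ruby-sample-build --post-commit --script="rake test --verbose"     # script form
oc set build-hook bc/ruby-sample-build --post-commit --command -- rake test --verbose   # command (exec) form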
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/workloads_apis/build-build-openshift-io-v1
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_openstack_platform_at_scale/proc_providing-feedback-on-red-hat-documentation
Chapter 4. Application streams
Chapter 4. Application streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest .
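As a brief, hedged illustration of working with module streams (assuming a registered RHEL 8 host and using the postgresql module mentioned above as the example):
yum module list postgresql          # show the available streams and profiles, including the default 10 stream
yum module enable postgresql:12     # enable a non-default stream
yum module install postgresql:12    # install the default profile of that stream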
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/application-streams_considerations-in-adopting-rhel-8
Chapter 13. Accessing the RADOS Object Gateway S3 endpoint
Chapter 13. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In earlier versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create the RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore .
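A hedged example of looking up that route and pointing an S3 client at it follows; the openshift-storage namespace is the usual OpenShift Data Foundation installation namespace and, like the use of the AWS CLI, is an assumption here.
oc get route rook-ceph-rgw-ocs-storagecluster-cephobjectstore -n openshift-storage -o jsonpath='{.spec.host}'
aws s3 ls --endpoint-url https://<route-host>    # substitute the host printed above; S3 credentials come from an RGW user or ObjectBucketClaim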
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/Accessing-the-RADOS-Object-Gateway-S3-endpoint_rhodf
Chapter 1. OpenShift Service Mesh release notes
Chapter 1. OpenShift Service Mesh release notes 1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Important Red Hat OpenShift Service Mesh 3.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2. Red Hat OpenShift Service Mesh 3.0 Technology Preview This release of Red Hat OpenShift Service Mesh is Technology Preview. 1.2.1. Supported component versions for Technology Preview 2 Component Version OpenShift Container Platform 4.14 and later Istio 1.24.1 Envoy Proxy 1.32 Kiali Operator 2.1 Kiali Server 2.1 Important You need to remove the Kiali custom resources (CR) from Technology Preview 1 before you update to Kiali Operator provided by Red Hat 2.1. Kiali Operator provided by Red Hat 2.1 is available in the candidates channel. The candidate channel offers unsupported early access to releases as soon as they are built. Releases present only in candidate channels might not contain the full feature set of eventual GA releases, or features might be removed before GA. Additionally, these releases have not been subject to full Red Hat Quality Assurance and might not offer update paths to later GA releases. Given these caveats, the candidate channel is suitable only for testing purposes where deleting and re-creating a cluster is acceptable. 1.2.2. Unavailable features in Technology Preview 2 The following features are not supported in the Technology Preview 2 release: Ambient mode in Istio Virtual Machine support in Istio 1.2.3. Unavailable clusters in Technology Preview 2 The following clusters are not supported in the Technology Preview 2 release: Production clusters 1.2.4. Supported component versions for Technology Preview 1 Component Version OpenShift Container Platform 4.14+ Istio 1.23.0 Envoy Proxy 1.31 Kiali Operator 1.89 Kiali Server 1.89 1.2.5. Unavailable features in Technology Preview 1 The following features are not supported in the Technology Preview 1 release: OpenShift Service Mesh Console (OSSMC) plugin Ambient mode in Istio Virtual Machine support in Istio 1.2.6. Unavailable clusters in Technology Preview 1 The following clusters are not supported in the Technology Preview 1 release: Production clusters
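As an illustrative (not official) way to check for and remove the Technology Preview 1 Kiali custom resources noted above before updating, assuming the default Kiali CR kind and cluster-admin access:
oc get kiali --all-namespaces            # list any existing Kiali custom resources
oc delete kiali --all --all-namespaces   # remove them before updating to Kiali Operator provided by Red Hat 2.1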
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/release_notes/ossm-release-notes-assembly
Chapter 9. OpenStack Cloud Controller Manager reference guide
Chapter 9. OpenStack Cloud Controller Manager reference guide 9.1. The OpenStack Cloud Controller Manager In OpenShift Container Platform 4.12, clusters that run on Red Hat OpenStack Platform (RHOSP) are switched from the legacy OpenStack cloud provider to the external OpenStack Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the Cloud Controller Manager . To preserve user-defined configurations for the legacy cloud provider, existing configurations are mapped to new ones as part of the migration process, which searches for a configuration called cloud-provider-config in the openshift-config namespace. Note The config map name cloud-provider-config is not statically configured. It is derived from the spec.cloudConfig.name value in the infrastructure/cluster CRD. Found configurations are synchronized to the cloud-conf config map in the openshift-cloud-controller-manager namespace. As part of this synchronization, the OpenStack CCM Operator alters the new config map such that its properties are compatible with the external cloud provider. The file is changed in the following ways: The [Global] secret-name , [Global] secret-namespace , and [Global] kubeconfig-path options are removed. They do not apply to the external cloud provider. The [Global] use-clouds , [Global] clouds-file , and [Global] cloud options are added. The entire [BlockStorage] section is removed. External cloud providers no longer perform storage operations. Block storage configuration is managed by the Cinder CSI driver. Additionally, the CCM Operator enforces a number of default options. Values for these options are always overridden as follows: [Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack ... [LoadBalancer] use-octavia = true enabled = true 1 1 If the network is configured to use Kuryr, this value is false . The clouds-file value, /etc/openstack/secret/clouds.yaml , is mapped to the openstack-cloud-credentials config in the openshift-cloud-controller-manager namespace. You can modify the RHOSP cloud in this file as you do any other clouds.yaml file. 9.2. The OpenStack Cloud Controller Manager (CCM) config map An OpenStack CCM config map defines how your cluster interacts with your RHOSP cloud. By default, this configuration is stored under the cloud.conf key in the cloud-conf config map in the openshift-cloud-controller-manager namespace. Important The cloud-conf config map is generated from the cloud-provider-config config map in the openshift-config namespace. To change the settings that are described by the cloud-conf config map, modify the cloud-provider-config config map. As part of this synchronization, the CCM Operator overrides some options. For more information, see "The OpenStack Cloud Controller Manager". For example: An example cloud-conf config map apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] use-octavia = True kind: ConfigMap metadata: creationTimestamp: "2022-12-20T17:01:08Z" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: "2519" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677 1 Set global options by using a clouds.yaml file rather than modifying the config map. The following options are present in the config map. Except when indicated otherwise, they are mandatory for clusters that run on RHOSP. 9.2.1.
Load balancer options CCM supports several load balancer options for deployments that use Octavia. Note Neutron-LBaaS support is deprecated. Option Description enabled Whether or not to enable the LoadBalancer type of services integration. The default value is true . floating-network-id Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation. floating-subnet-id Optional. The external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-id . floating-subnet Optional. A name pattern (glob or regular expression if starting with ~ ) for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet . If multiple subnets match the pattern, the first one with available IP addresses is used. floating-subnet-tags Optional. Tags for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-tags . If multiple subnets match these tags, the first one with available IP addresses is used. If the RHOSP network is configured with sharing disabled, for example, with the --no-share flag used during creation, this option is unsupported. Set the network to share to use this option. lb-method The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN , LEAST_CONNECTIONS , or SOURCE_IP . The default value is ROUND_ROBIN . For the OVN provider, only the SOURCE_IP_PORT algorithm is supported. For the Amphora provider, if using the LEAST_CONNECTIONS or SOURCE_IP methods, configure the create-monitor option as true in the cloud-provider-config config map on the openshift-config namespace and ETP:Local on the load-balancer type service to allow balancing algorithm enforcement in the client to service endpoint connections. lb-provider Optional. Used to specify the provider of the load balancer, for example, amphora or octavia . Only the Amphora and Octavia providers are supported. lb-version Optional. The load balancer API version. Only "v2" is supported. subnet-id The ID of the Networking service subnet on which load balancer VIPs are created. network-id The ID of the Networking service network on which load balancer VIPs are created. Unnecessary if subnet-id is set. create-monitor Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local . The default value is false . This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider. monitor-delay The interval in seconds by which probes are sent to members of the load balancer. The default value is 5 . monitor-max-retries The number of successful checks that are required to change the operating status of a load balancer member to ONLINE . The valid range is 1 to 10 , and the default value is 1 . monitor-timeout The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3 . 
internal-lb Whether or not to create an internal load balancer without floating IP addresses. The default value is false . LoadBalancerClass "ClassName" This is a config section that comprises a set of options: floating-network-id floating-subnet-id floating-subnet floating-subnet-tags network-id subnet-id The behavior of these options is the same as that of the identically named options in the load balancer section of the CCM config file. You can set the ClassName value by specifying the service annotation loadbalancer.openstack.org/class . max-shared-lb The maximum number of services that can share a load balancer. The default value is 2 . 9.2.2. Options that the Operator overrides The CCM Operator overrides the following options, which you might recognize from configuring RHOSP. Do not configure them yourself. They are included in this document for informational purposes only. Option Description auth-url The RHOSP Identity service URL. For example, http://128.110.154.166/identity . os-endpoint-type The type of endpoint to use from the service catalog. username The Identity service user name. password The Identity service user password. domain-id The Identity service user domain ID. domain-name The Identity service user domain name. tenant-id The Identity service project ID. Leave this option unset if you are using Identity service application credentials. In version 3 of the Identity API, which changed the identifier tenant to project , the value of tenant-id is automatically mapped to the project construct in the API. tenant-name The Identity service project name. tenant-domain-id The Identity service project domain ID. tenant-domain-name The Identity service project domain name. user-domain-id The Identity service user domain ID. user-domain-name The Identity service user domain name. use-clouds Whether or not to fetch authorization credentials from a clouds.yaml file. Options set in this section are prioritized over values read from the clouds.yaml file. CCM searches for the file in the following places: The value of the clouds-file option. A file path stored in the environment variable OS_CLIENT_CONFIG_FILE . The directory pkg/openstack . The directory ~/.config/openstack . The directory /etc/openstack . clouds-file The file path of a clouds.yaml file. It is used if the use-clouds option is set to true . cloud The named cloud in the clouds.yaml file that you want to use. It is used if the use-clouds option is set to true .
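A short sketch of where such changes are made in practice: per the note above, edits belong in the cloud-provider-config config map (the CCM Operator syncs the result into cloud-conf), and per-service behavior can be overridden with the annotations listed in the table. The service name and network UUID below are assumptions.
oc edit configmap cloud-provider-config -n openshift-config
oc annotate service my-lb-service loadbalancer.openstack.org/floating-network-id=<network-uuid>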
[ "[Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack [LoadBalancer] use-octavia = true enabled = true 1", "apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] use-octavia = True kind: ConfigMap metadata: creationTimestamp: \"2022-12-20T17:01:08Z\" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: \"2519\" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677" ]
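For orientation, a clouds.yaml file containing the openstack cloud named above typically has the following shape; every value here is a placeholder assumption, not taken from this reference.
clouds:
  openstack:
    auth:
      auth_url: https://keystone.example.com:13000
      username: <user>
      password: <password>
      project_name: <project>
      user_domain_name: Default
      project_domain_name: Default
    region_name: regionOne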
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_openstack/installing-openstack-cloud-config-reference
Chapter 8. Important links
Chapter 8. Important links Red Hat AMQ Broker 7.7 Release Notes Red Hat AMQ Broker 7.6 Release Notes Red Hat AMQ Broker 7.1 to 7.5 Release Notes (aggregated) Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2022-03-15 13:56:47 UTC
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_red_hat_amq_broker_7.8/links
7.12. RHEA-2014:1518 - new packages: libee
7.12. RHEA-2014:1518 - new packages: libee New libee packages are now available for Red Hat Enterprise Linux 6. The libee packages contain an event expression library inspired by the Common Event Expression (CEE), a standard proposed by the MITRE organization that is used to describe network events in a number of normalized formats. Its goal is to unify many different representations that exist in the industry. The core idea of libee is to provide a small API layer above the CEE standard. This enhancement update adds the libee packages to Red Hat Enterprise Linux 6. (BZ# 966972 ) All users who require libee are advised to install these new packages.
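For completeness, a minimal, hedged install sketch on Red Hat Enterprise Linux 6 follows; the -devel subpackage name is an assumption and is only needed when building software against the library.
yum install libee            # runtime library
yum install libee-devel      # headers, only if you compile against libee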
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1518
10.2.4. Modules and Apache HTTP Server 2.0
10.2.4. Modules and Apache HTTP Server 2.0 In Apache HTTP Server 2.0, the module system has been changed to allow modules to be chained together or combined in new and interesting ways. Common Gateway Interface ( CGI ) scripts, for example, can generate server-parsed HTML documents which can then be processed by mod_include . This opens up a tremendous number of possibilities with regards to how modules can be combined to achieve a specific goal. The way this works is that each request is served by exactly one handler module followed by zero or more filter modules. Under Apache HTTP Server 1.3, for example, a Perl script would be handled in its entirety by the Perl module ( mod_perl ). Under Apache HTTP Server 2.0, the request is initially handled by the core module - which serves static files - and is then filtered by mod_perl . Exactly how to use this, and all other new features of Apache HTTP Server 2.0, is beyond the scope of this document; however, the change has ramifications if the PATH_INFO directive is used for a document which is handled by a module that is now implemented as a filter, as each contains trailing path information after the true file name. The core module, which initially handles the request, does not by default understand PATH_INFO and returns 404 Not Found errors for requests that contain such information. As an alternative, use the AcceptPathInfo directive to coerce the core module into accepting requests with PATH_INFO . The following is an example of this directive: For more on this topic, refer to the following documentation on the Apache Software Foundation's website: http://httpd.apache.org/docs-2.0/mod/core.html#acceptpathinfo http://httpd.apache.org/docs-2.0/handler.html http://httpd.apache.org/docs-2.0/filter.html 10.2.4.1. The suexec Module In Apache HTTP Server 2.0, the mod_suexec module uses the SuexecUserGroup directive, rather than the User and Group directives, which is used for configuring virtual hosts. The User and Group directives can still be used in general, but are deprecated for configuring virtual hosts. For example, the following is a sample Apache HTTP Server 1.3 directive: To migrate this setting to Apache HTTP Server 2.0, use the following structure:
[ "AcceptPathInfo on", "<VirtualHost vhost.example.com:80> User someone Group somegroup </VirtualHost>", "<VirtualHost vhost.example.com:80> SuexecUserGroup someone somegroup </VirtualHost>" ]
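To make the handler-plus-filter chain concrete, the following is a hypothetical httpd.conf fragment (the directory path is an assumption): the core handler serves each file, mod_include then post-processes it as a filter, and AcceptPathInfo permits trailing path information as discussed above.
<Directory "/var/www/html/ssi">
    Options +Includes
    AcceptPathInfo On
    SetOutputFilter INCLUDES
</Directory>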
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-httpd-v2-mig-mod
14.8. Backing up and Restoring Certificate System
14.8. Backing up and Restoring Certificate System Certificate System does not include backup and restore tools. However, the Certificate System components can still be archived and restored manually, which can be necessary for deployments where information cannot be accessed if certificate or key information is lost. Three major parts of Certificate System need to be backed up routinely in case of data loss or hardware failure: Internal database. Subsystems use an LDAP database to store their data. The Directory Server provides its own backup scripts and procedures. Security databases. The security databases store the certificate and key material. If these are stored on an HSM, then consult the HSM vendor documentation for information on how to back up the data. If the information is stored in the default directories in the instance alias directory, then it is backed up with the instance directory. To back it up separately, use a utility such as tar or zip . Instance directory. The instance directory contains all configuration files, security databases, and other instance files. This can be backed up using a utility such as tar or zip . 14.8.1. Backing up and Restoring the LDAP Internal Database The Red Hat Directory Server documentation contains more detailed information on backing up and restoring the databases. 14.8.1.1. Backing up the LDAP Internal Database Two pairs of subcommands of the dsctl command are available to back up the Directory Server instance. Each back-up subcommand has a counterpart to restore the files it generated: The db2ldif subcommand creates an LDIF file you can restore using the ldif2db subcommand. The db2bak subcommand creates a backup file you can restore using the bak2db subcommand. 14.8.1.1.1. Backing up using db2ldif Running the db2ldif subcommand backs up a single subsystem database. Note As the db2ldif subcommand runs with the dirsrv user, it doesn't have permissions to write under the /root/ directory, so you need to provide a path where it can write. Back up each Directory Server database used by PKI subsystems. You can use the pki-server ca-db-config-show command to check the database name for a given subsystem. For example, to back up the main database, userRoot : Stop the instance: Export the database into an LDIF file: Start the instance: To restore the LDIF file using the ldif2db subcommand, see Section 14.8.1.2.1, "Restoring using ldif2db" . 14.8.1.1.2. Backing up using db2bak Running the db2bak subcommand backs up all Certificate System subsystem databases for that Directory Server (and any other databases maintained by that Directory Server instance). For example: Stop the instance: Back up the database: Start the instance: Note As the db2bak subcommand runs with the dirsrv user, the target directory must be writeable by dirsrv . Running the subcommand without any argument creates the backup in the /var/lib/dirsrv/slapd- <instance_name> /bak folder where db2bak has the proper write permissions. To restore the backup file using bak2db , see Section 14.8.1.2.2, "Restoring using bak2db" . 14.8.1.2. Restoring the LDAP Internal Database Depending on how you backed up the Directory Server instance, use ldif2db or bak2db with the corresponding file(s) to restore the database. Note Make sure you stop the instance before restoring databases. 14.8.1.2.1. Restoring using ldif2db If you created an LDIF file with db2ldif , stop the Directory Server instance and import the files using the ldif2db subcommand.
You can specify a single database to restore from the backup. For example, for the main database, userRoot : Stop the Directory Server instance: Import the data from the LDIF file: Start the Directory Server instance: 14.8.1.2.2. Restoring using bak2db If you created a backup file with db2bak , stop the Directory Server and restore the databases using the bak2db subcommand. For example: Stop the Directory Server instance: Restore the databases: Start the Directory Server instance: 14.8.2. Backing up and Restoring the Instance Directory The instance directory has all of the configuration information for the subsystem instance, so backing up the instance directory preserves the configuration information not contained in the internal database. Note Stop the subsystem instance before backing up the instance or the security databases. Stop the subsystem instance. Save the directory to an archive file: For example: Restart the subsystem instance. You can use the Certificate System backup files, both the alias database backups and the full instance directory backups, to replace the current directories if the data is corrupted or the hardware is damaged. To restore the data, extract the archive file using the unzip or tar utilities, and copy the archive contents over the existing files. To restore the instance directory: Extract the archive: For example: Stop the subsystem instance if it is not already stopped. Copy the archived files to restore the instance directory: For example: Make sure the ownership and group permissions of the restored files are set to the pkiuser : Restart the subsystem instance.
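The individual steps above can be combined into one routine. The following shell script is only an illustrative sketch and is not part of the product tooling: the instance names (pki-tomcat-ds for the Directory Server instance, pki-tomcat for the Certificate System instance) and the backup path are assumptions that you must adapt to your deployment.

#!/bin/bash
# Illustrative backup routine; all names and paths are examples.
DS_INSTANCE=pki-tomcat-ds        # Directory Server instance used by the PKI subsystems
PKI_INSTANCE=pki-tomcat          # Certificate System instance
BACKUP_DIR=/export/archives/pki/$(date +%F)
mkdir -p "$BACKUP_DIR"
# 1. Back up the internal LDAP database with the instance stopped
#    (db2bak writes to /var/lib/dirsrv/slapd-$DS_INSTANCE/bak by default).
dsctl "$DS_INSTANCE" stop
dsctl "$DS_INSTANCE" db2bak
dsctl "$DS_INSTANCE" start
# 2. Archive the subsystem instance directory, which also contains the security databases.
pki-server stop "$PKI_INSTANCE"
tar -chvf "$BACKUP_DIR/$PKI_INSTANCE.tar" -C /var/lib/pki "$PKI_INSTANCE"
pki-server start "$PKI_INSTANCE"

Running such a script during a maintenance window keeps the brief outage caused by stopping the instances predictable.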
[ "dsctl instance_name stop", "dsctl instance_name db2ldif userroot /tmp/example.ldif OK group dirsrv exists OK user dirsrv exists ldiffile: /tmp/example.ldif [18/Jul/2018:10:46:03.353656777 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [18/Jul/2018:10:46:03.383101305 +0200] - INFO - ldbm_back_ldbm2ldif - export userroot: Processed 160 entries (100%). [18/Jul/2018:10:46:03.391553963 +0200] - INFO - dblayer_pre_close - All database threads now stopped db2ldif successful", "dsctl instance_name start", "dsctl instance_name stop", "dsctl instance_name db2bak OK group dirsrv exists OK user dirsrv exists [18/Jul/2018:14:02:37.358958713 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 db2bak successful", "dsctl instance_name start", "dsctl instance_name stop", "dsctl instance_name ldif2db userroot /tmp/example.ldif OK group dirsrv exists OK user dirsrv exists [17/Jul/2018:13:42:42.015554231 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [17/Jul/2018:13:42:44.302630629 +0200] - INFO - import_main_offline - import userroot: Import complete. Processed 160 entries in 2 seconds. (80.00 entries/sec) ldif2db successful", "dsctl instance_name start", "dsctl instance_name stop", "dsctl instance_name bak2db /var/lib/dirsrv/slapd-instance_name/bak/instance_name-time_stamp/ OK group dirsrv exists OK user dirsrv exists [20/Jul/2018:15:52:24.932598675 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 bak2db successful", "dsctl instance_name start", "pki-server stop instance_name", "cd /var/lib/pki/ tar -chvf /export/archives/pki/ instance_name .tar instance_name /", "cd /var/lib/pki/ tar -chvf /tmp/test.tar pki-tomcat/ca/ pki-tomcat/ca/ pki-tomcat/ca/registry/ pki-tomcat/ca/registry/ca/ ........", "pki-server start instance_name", "cd /export/archives/pki/ tar -xvf instance_name .tar", "cd /tmp/ tar -xvf test.tar pki-tomcat/ca/ pki-tomcat/ca/registry/ pki-tomcat/ca/registry/ca/ pki-tomcat/ca/registry/ca/default.cfg ......", "pki-server stop instance_name", "cp -r /export/archives/pki/ instance_name /var/lib/pki/ instance_name", "cp -r /tmp/pki-tomcat/ca/ /var/lib/pki/pki-tomcat/ca/", "chown -R pkiuser:pkiuser /var/lib/pki/pki-tomcat/ca/", "pki-server start instance_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/backing_up_and_restoring_crts
function::user_char_warn
function::user_char_warn Name function::user_char_warn - Retrieves a char value stored in user space. Synopsis Arguments addr The user space address to retrieve the char from. General Syntax user_char_warn:long(addr:long) Description Returns the char value from a given user space address. Returns zero when the user space address cannot be accessed, and warns (but does not abort) about the failure.
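As a hypothetical usage illustration (not part of this reference page), the function can be called from any probe that has a user space pointer available. The probe point and the buf_uaddr variable below come from the syscall tapset and can differ between SystemTap versions, so treat them as assumptions.

# Print the first byte of every buffer passed to write(2); zero is printed if the
# address cannot be read, and a warning is emitted instead of aborting the script.
stap -e 'probe syscall.write { printf("pid %d wrote first byte %d\n", pid(), user_char_warn(buf_uaddr)) }'

Because the function only warns on failure, the script keeps running even when the buffer pages are not accessible.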
[ "function user_char_warn:long(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-char-warn
Chapter 4. Deprecated components
Chapter 4. Deprecated components The components listed in this section have been deprecated. 4.1. Business Optimizer Business Optimizer (OptaPlanner) 8.13.x, included as part of Red Hat Decision Manager, is in maintenance support. For the latest supported versions of OptaPlanner (8.29 and later), upgrade to Red Hat build of OptaPlanner, the newest addition to Red Hat Application Foundations. For more information, see Red Hat build of OptaPlanner is now available in Red Hat Application Foundations . 4.2. OptaPlanner 7 Both OptaPlanner 7 and OptaPlanner 8 are included with Red Hat Decision Manager 7.13, but OptaPlanner 7 is deprecated and might be removed in a future release. For information about migrating your OptaPlanner 7 projects to OptaPlanner 8, see Upgrading your Red Hat build of OptaPlanner projects to OptaPlanner 8 . 4.3. OptaPlanner tooling components in Business Central The following OptaPlanner tooling in Business Central is part of OptaPlanner 7. These components are deprecated and might be removed in a future release. Data modeler annotations Guided rule editor actions for OptaPlanner score modification Solver assets 4.4. Unified product deliverable and deprecation of Red Hat Decision Manager distribution files In the Red Hat Process Automation Manager 7.13 release, the distribution files for Red Hat Decision Manager will be replaced with Red Hat Process Automation Manager files. Note that there will not be any change to the Red Hat Decision Manager subscription, and the support entitlements and fees will remain the same. Red Hat Decision Manager is a subset of Red Hat Process Automation Manager, and Red Hat Decision Manager subscribers will continue to receive full support for the decision management and optimization capabilities. The business process management (BPM) capabilities are exclusive to Red Hat Process Automation Manager and will be available for use by Red Hat Decision Manager subscribers but with development support services only. Red Hat Decision Manager subscribers can upgrade to a full Red Hat Process Automation Manager subscription at any time to receive full support for BPM features. Red Hat Decision Manager container images are now deprecated with unified deliverables. Red Hat Decision Manager subscribers can upgrade or install the latest Red Hat Process Automation Manager images from version 7.13 onward instead. 4.5. Red Hat OpenShift Container Platform 3 Support for Red Hat OpenShift Container Platform 3 is removed in this release. 4.6. Red Hat Enterprise Linux 7 Support for Red Hat Enterprise Linux 7 is deprecated in Red Hat Decision Manager and will be removed in a future release. 4.7. Support for JDK 8 Support for JDK 8 is deprecated in Red Hat Decision Manager and might be removed in a future release. For a complete list of supported JDK configurations, see Red Hat Decision Manager 7 Supported Configurations . 4.8. Legacy kie-pmml dependency The legacy kie-pmml dependency was deprecated with Red Hat Decision Manager 7.10.0 and will be replaced in a future Red Hat Decision Manager release. For more information, see Designing a decision service using PMML models . 4.9. Support for OSGi framework integration Support for integration with the OSGi framework is deprecated in Red Hat Decision Manager. It does not receive any new enhancements or features and will be removed in a future release. The decision and process engine integration with the OSGi framework is currently incompatible with Fuse version 7.8.
If you intend to use the OSGi framework, continue to use Red Hat Decision Manager version 7.9 with Fuse version 7.7 until Fuse version 7.9 is available and certified. 4.10. Support for the RuleUnit API The Red Hat Decision Manager RuleUnit API is deprecated due to incompatibility with the Kogito RuleUnit API. 4.11. Legacy Test Scenarios tool The legacy Test Scenarios tool was deprecated in Red Hat Decision Manager version 7.3.0. It will be removed in a future Red Hat Decision Manager release. Use the new Test Scenarios designer instead. 4.12. Support for HACEP Highly available (HA) event-driven decisioning, including Complex Event Processing (CEP), is deprecated due to end of support for AMQ Streams 1.x.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/rn-deprecated-issues-ref
Chapter 2. Improving the performance of views
Chapter 2. Improving the performance of views The performance of view-based hierarchies depends on the construction of the hierarchy itself and the number of entries in the directory tree (DIT). In general, there may be a marginal change in performance (within a few percentage points of equivalent searches on a standard DIT) if you use virtual DIT views. If you do not invoke a view in a search, then there is no performance impact. Test the virtual DIT views against expected search patterns and loads before deployment. Red Hat recommends indexing the attributes used in view filters if you intend to use the views as general-purpose navigation tools in the organization. Further, you can configure a virtual list view (VLV) index to be used in evaluation of sub-filters in views. There is no need to tune any other part of the directory specifically for views. 2.1. Creating indexes to improve the performance of views using the command line Views are derived from search results based on a given filter. Part of the filter are the attributes given explicitly in the nsViewFilter ; the rest of the filter is based on the entry hierarchy, looking for the entryid and parentid operational attributes of the actual entries included in the view. (|(parentid= search_base_id )(entryid= search_base_id ) If any of the searched attributes - entryid , parentid , or the attributes in the nsViewFilter - are not indexed, then the search becomes partially unindexed and Directory Server searches the entire directory tree for matching entries. To improve views performance, create the indexes as follows: Create equality index ( eq ) for entryid . The parentid attribute is indexed in the system index by default. If a filter in nsViewFilter tests presence ( attribute=* ), then create presence index ( pres ) for the attribute being tested. You should use this index type only with attributes that appear in a minority of directory entries. If a filter in nsViewFilter tests equality ( attribute=value ), create equality index ( eq ) for the attribute being tested. If a filter in nsViewFilter tests a substring ( attribute=value* ), create substring index ( sub ) for the attribute being tested. If a filter in nsViewFilter tests approximation ( attribute~=value ), create approximate index ( approximate ) for the attribute being tested. For example, when you use the following view filter: nsViewFilter: (&(objectClass=inetOrgPerson)(roomNumber=*66)) you should index objectClass with the equality index , which is done by default, and roomNumber with the substring index . Prerequisites You are aware of the attributes that you use in a view filter. Procedure Optional: List the back ends to determine the database to index: # dsconf -D " cn=Directory Manager " instance_name backend suffix list dc=example,dc=com (userroot) Note the selected database name (in parentheses). Create index configuration with the dsconf utility for the selected back-end database. Specify the attribute name, index type, and, optionally, matching rules to set collation order (OID), especially in case of an internationalized instance. # dsconf -D " cn=Directory Manager " instance_name backend index add --attr roomNumber --index-type sub userroot Repeat this step for each attribute used in the view filter.
Reindex the database to apply the new indexes: # dsconf -D " cn=Directory Manager " instance_name backend index reindex userroot Verification Perform a search that is based on the standard directory tree with the same filter that you use in the view: # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -x -b dc=example,dc=com (&(objectClass=inetOrgPerson)(roomNumber=*66)) # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -x -b dc=example,dc=com " (&(objectClass=inetOrgPerson)(roomNumber=*66)) " View the access log in /var/log/dirsrv/slapd- instance_name /access . The RESULT of your search should not contain note=U or Partially Unindexed Filter in the details. Additional resources Managing indexes 2.2. Creating indexes to improve the performance of views using the web console Views are derived from search results based on a given filter. Part of the filter are the attributes given explicitly in the nsViewFilter ; the rest of the filter is based on the entry hierarchy, looking for the entryid and parentid operational attributes of the actual entries included in the view. (|(parentid= search_base_id )(entryid= search_base_id ) If any of the searched attributes - entryid , parentid , or the attributes in the nsViewFilter - are not indexed, then the search becomes partially unindexed and Directory Server searches the entire directory tree for matching entries. To improve views performance, create the indexes as follows: Create equality index ( eq ) for entryid . The parentid attribute is indexed in the system index by default. If a filter in nsViewFilter tests presence ( attribute=* ), then create presence index ( pres ) for the attribute being tested. You should use this index type only with attributes that appear in a minority of directory entries. If a filter in nsViewFilter tests equality ( attribute=value ), create equality index ( eq ) for the attribute being tested. If a filter in nsViewFilter tests a substring ( attribute=value* ), create substring index ( sub ) for the attribute being tested. If a filter in nsViewFilter tests approximation ( attribute~=value ), create approximate index ( approximate ) for the attribute being tested. For example, when you use the following view filter: nsViewFilter: (&(objectClass=inetOrgPerson)(roomNumber=*66)) you should index objectClass with the equality index , which is done by default, and roomNumber with the substring index . Prerequisites You are logged in to the instance in the web console. You are aware of the attributes that you use in a view filter. Procedure Under Database , select a suffix from the configuration tree for which you want to create an index. Navigate to Indexes and Database Indexes . Click the Add Index button. Type the name of the attribute and select the attribute. Select the Index Types that should be created for this attribute. Optionally, add Matching Rules to specify collation order (OID), especially in case of an internationalized instance. Select Index attribute after creation to rebuild the index afterwards. Click Create Index . Repeat the steps for each attribute to be indexed. Verification Filter Indexes by typing the name of the added attribute. The newly indexed attribute should appear in the results. Additional resources Managing indexes
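As a consolidated illustration of the command-line procedure above, the following sequence would create both indexes suggested for the example filter — an equality index for entryid and a substring index for roomNumber — and then rebuild the indexes. The instance name and the userroot back end are the placeholder values used in the earlier examples; substitute your own.

# Example only; replace instance_name and userroot with your own instance and back end.
dsconf -D "cn=Directory Manager" instance_name backend index add --attr entryid --index-type eq userroot
dsconf -D "cn=Directory Manager" instance_name backend index add --attr roomNumber --index-type sub userroot
dsconf -D "cn=Directory Manager" instance_name backend index reindex userroot

After the reindex completes, repeat the verification search and confirm that the access log no longer reports a partially unindexed filter.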
[ "(|(parentid= search_base_id )(entryid= search_base_id )", "nsViewFilter: (&(objectClass=inetOrgPerson)(roomNumber=*66))", "dsconf -D \" cn=Directory Manager \" instance_name backend suffix list dc=example,dc=com (userroot)", "dsconf -D \" cn=Directory Manager \" instance_name backend index add --attr roomNumber --index-type sub userroot", "dsconf -D \" cn=Directory Manager \" instance_name backend index reindex userroot", "ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -b dc=example,dc=com (&(objectClass=inetOrgPerson)(roomNumber=*66)) ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -b dc=example,dc=com \" (&(objectClass=inetOrgPerson)(roomNumber=*66)) \"", "(|(parentid= search_base_id )(entryid= search_base_id )", "nsViewFilter: (&(objectClass=inetOrgPerson)(roomNumber=*66))" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/tuning_the_performance_of_red_hat_directory_server/assembly_improving-the-performance-of-views_tuning-the-performance-of-rhds
Chapter 143. HBase Component
Chapter 143. HBase Component Available as of Camel version 2.10 This component provides an idempotent repository, producers, and consumers for Apache HBase . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hbase</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 143.1. Apache HBase Overview HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data. You can use HBase when you need random, realtime read/write access to your Big Data. More information at Apache HBase . 143.2. Camel and HBase When using a datastore inside a Camel route, there is always the challenge of specifying how the Camel message will be stored in the datastore. In document-based stores this is easier, as the message body can be directly mapped to a document. In relational databases an ORM solution can be used to map properties to columns etc. In column-based stores things are more challenging as there is no standard way to perform that kind of mapping. HBase adds two additional challenges: HBase groups columns into families, so just mapping a property to a column using a name convention is just not enough. HBase doesn't have the notion of type, which means that it stores everything as byte[] and doesn't know if the byte[] represents a String, a Number, a serialized Java object or just binary data. To overcome these challenges, camel-hbase makes use of the message headers to specify the mapping of the message to HBase columns. It also provides the ability to use some camel-hbase provided classes that model HBase data and can be easily converted to and from xml/json etc. Finally, it provides the ability for users to implement and use their own mapping strategy. Regardless of the mapping strategy, camel-hbase will convert a message into an org.apache.camel.component.hbase.model.HBaseData object and use that object for its internal operations. 143.3. Configuring the component The HBase component can be provided a custom HBaseConfiguration object as a property or it can create an HBase configuration object on its own based on the HBase related resources that are found on the classpath. <bean id="hbase" class="org.apache.camel.component.hbase.HBaseComponent"> <property name="configuration" ref="config"/> </bean> If no configuration object is provided to the component, the component will create one. The created configuration will search the class path for an hbase-site.xml file, from which it will draw the configuration. You can find more information about how to configure HBase clients at: HBase client configuration and dependencies 143.4. HBase Producer As mentioned above, Camel provides producer endpoints for HBase. This allows you to store, delete, retrieve or query data from HBase using your Camel routes. hbase://table[?options] where table is the table name. The supported operations are: Put Get Delete Scan 143.4.1. Supported URI options The HBase component supports 3 options, which are listed below. Name Description Default Type configuration (advanced) To use the shared configuration Configuration poolMaxSize (common) Maximum number of references to keep for each table in the HTable pool. The default value is 10. 10 int resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting.
Only properties which are of String type can use property placeholders. true boolean The HBase endpoint is configured using URI syntax: with the following path and query parameters: 143.4.2. Path Parameters (1 parameters): Name Description Default Type tableName Required The name of the table String 143.4.3. Query Parameters (16 parameters): Name Description Default Type cellMappingStrategyFactory (common) To use a custom CellMappingStrategyFactory that is responsible for mapping cells. CellMappingStrategy Factory filters (common) A list of filters to use. List mappingStrategyClassName (common) The class name of a custom mapping strategy implementation. String mappingStrategyName (common) The strategy to use for mapping Camel messages to HBase columns. Supported values: header, or body. String rowMapping (common) To map the key/values from the Map to a HBaseRow. The following keys is supported: rowId - The id of the row. This has limited use as the row usually changes per Exchange. rowType - The type to covert row id to. Supported operations: CamelHBaseScan. family - The column family. Supports a number suffix for referring to more than one columns. qualifier - The column qualifier. Supports a number suffix for referring to more than one columns. value - The value. Supports a number suffix for referring to more than one columns valueType - The value type. Supports a number suffix for referring to more than one columns. Supported operations: CamelHBaseGet, and CamelHBaseScan. Map rowModel (common) An instance of org.apache.camel.component.hbase.model.HBaseRow which describes how each row should be modeled HBaseRow userGroupInformation (common) Defines privileges to communicate with HBase such as using kerberos. UserGroupInformation bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean maxMessagesPerPoll (consumer) Gets the maximum number of messages as a limit to poll at each polling. Is default unlimited, but use 0 or negative number to disable it as unlimited. int operation (consumer) The HBase operation to perform String remove (consumer) If the option is true, Camel HBase Consumer will remove the rows which it processes. true boolean removeHandler (consumer) To use a custom HBaseRemoveHandler that is executed when a row is to be removed. HBaseRemoveHandler exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern maxResults (producer) The maximum number of rows to scan. 100 int synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 143.5. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.hbase.configuration To use the shared configuration. 
The option is a org.apache.hadoop.conf.Configuration type. String camel.component.hbase.enabled Enable hbase component true Boolean camel.component.hbase.pool-max-size Maximum number of references to keep for each table in the HTable pool. The default value is 10. 10 Integer camel.component.hbase.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 143.5.1. Put Operations. HBase is a column based store, which allows you to store data into a specific column of a specific row. Columns are grouped into families, so in order to specify a column you need to specify the column family and the qualifier of that column. To store data into a specific column you need to specify both the column and the row. The simplest scenario for storing data in HBase from a Camel route would be to store part of the message body in a specified HBase column. <route> <from uri="direct:in"/> <!-- Set the HBase Row --> <setHeader headerName="CamelHBaseRowId"> <el>USD{in.body.id}</el> </setHeader> <!-- Set the HBase Value --> <setHeader headerName="CamelHBaseValue"> <el>USD{in.body.value}</el> </setHeader> <to uri="hbase:mytable?operation=CamelHBasePut&amp;family=myfamily&amp;qualifier=myqualifier"/> </route> The route above assumes that the message body contains an object that has an id and value property and will store the content of value in the HBase column myfamily:myqualifier in the row specified by id. If we needed to specify more than one column/value pair, we could just specify additional column mappings. Notice that you must use numbers from the 2nd header onwards, for example RowId2, RowId3, RowId4, and so on. Only the 1st header does not have the number 1. <route> <from uri="direct:in"/> <!-- Set the HBase Row 1st column --> <setHeader headerName="CamelHBaseRowId"> <el>USD{in.body.id}</el> </setHeader> <!-- Set the HBase Row 2nd column --> <setHeader headerName="CamelHBaseRowId2"> <el>USD{in.body.id}</el> </setHeader> <!-- Set the HBase Value for 1st column --> <setHeader headerName="CamelHBaseValue"> <el>USD{in.body.value}</el> </setHeader> <!-- Set the HBase Value for 2nd column --> <setHeader headerName="CamelHBaseValue2"> <el>USD{in.body.othervalue}</el> </setHeader> <to uri="hbase:mytable?operation=CamelHBasePut&amp;family=myfamily&amp;qualifier=myqualifier&amp;family2=myfamily&amp;qualifier2=myqualifier2"/> </route> It is important to remember that you can use uri options, message headers or a combination of both. It is recommended to specify constants as part of the uri and dynamic values as headers. If something is defined both as header and as part of the uri, the header will be used. 143.5.2. Get Operations. A Get Operation is an operation that is used to retrieve one or more values from a specified HBase row. To specify the values that you want to retrieve, you can specify them as part of the uri or as message headers. <route> <from uri="direct:in"/> <!-- Set the HBase Row of the Get --> <setHeader headerName="CamelHBaseRowId"> <el>USD{in.body.id}</el> </setHeader> <to uri="hbase:mytable?operation=CamelHBaseGet&amp;family=myfamily&amp;qualifier=myqualifier&amp;valueType=java.lang.Long"/> <to uri="log:out"/> </route> In the example above, the result of the get operation will be stored as a header with the name CamelHBaseValue. 143.5.3. Delete Operations. You can also use camel-hbase to perform HBase delete operations.
The delete operation will remove an entire row. All that needs to be specified is one or more rows as part of the message headers. <route> <from uri="direct:in"/> <!-- Set the HBase Row of the Get --> <setHeader headerName="CamelHBaseRowId"> <el>USD{in.body.id}</el> </setHeader> <to uri="hbase:mytable?operation=CamelHBaseDelete"/> </route> 143.5.4. Scan Operations. A scan operation is the equivalent of a query in HBase. You can use the scan operation to retrieve multiple rows. To specify which columns should be part of the result and how the values will be converted to objects, you can use either uri options or headers. <route> <from uri="direct:in"/> <to uri="hbase:mytable?operation=CamelHBaseScan&amp;family=myfamily&amp;qualifier=myqualifier&amp;valueType=java.lang.Long&amp;rowType=java.lang.String"/> <to uri="log:out"/> </route> In this case, it is probable that you also need to specify a list of filters to limit the results. You can specify a list of filters as part of the uri and Camel will return only the rows that satisfy ALL the filters. To have a filter that will be aware of the information that is part of the message, Camel defines the ModelAwareFilter. This will allow your filter to take into consideration the model that is defined by the message and the mapping strategy. When using a ModelAwareFilter, camel-hbase will apply the selected mapping strategy to the in message, will create an object that models the mapping and will pass that object to the Filter. For example, to perform a scan using the message headers as criteria, you can make use of the ModelAwareColumnMatchingFilter as shown below. <route> <from uri="direct:scan"/> <!-- Set the Criteria --> <setHeader headerName="CamelHBaseFamily"> <constant>name</constant> </setHeader> <setHeader headerName="CamelHBaseQualifier"> <constant>first</constant> </setHeader> <setHeader headerName="CamelHBaseValue"> <el>in.body.firstName</el> </setHeader> <setHeader headerName="CamelHBaseFamily2"> <constant>name</constant> </setHeader> <setHeader headerName="CamelHBaseQualifier2"> <constant>last</constant> </setHeader> <setHeader headerName="CamelHBaseValue2"> <el>in.body.lastName</el> </setHeader> <!-- Set additional fields that you want to be return by skipping value --> <setHeader headerName="CamelHBaseFamily3"> <constant>address</constant> </setHeader> <setHeader headerName="CamelHBaseQualifier3"> <constant>country</constant> </setHeader> <to uri="hbase:mytable?operation=CamelHBaseScan&amp;filters=#myFilterList"/> </route> <bean id="myFilters" class="java.util.ArrayList"> <constructor-arg> <list> <bean class="org.apache.camel.component.hbase.filters.ModelAwareColumnMatchingFilter"/> </list> </constructor-arg> </bean> The route above assumes that a POJO with properties firstName and lastName is passed as the message body; it takes those properties and adds them as part of the message headers. The default mapping strategy will create a model object that will map the headers to HBase columns and will pass that model to the ModelAwareColumnMatchingFilter. The filter will filter out any rows that do not contain columns that match the model. It is like query by example. 143.6. HBase Consumer The Camel HBase Consumer will perform repeated scans on the specified HBase table and will return the scan results as part of the message. You can either specify header mapping (default) or body mapping. The latter will just add the org.apache.camel.component.hbase.model.HBaseData as part of the message body.
hbase://table[?options] You can specify the columns that you want to be returned and their types as part of the uri options: hbase:mutable?family=name&qualifer=first&valueType=java.lang.String&family=address&qualifer=number&valueType2=java.lang.Integer&rowType=java.lang.Long The example above will create a model object that consists of the specified fields, and the scan results will populate the model object with values. Finally, the mapping strategy will be used to map this model to the Camel message. 143.7. HBase Idempotent repository The camel-hbase component also provides an idempotent repository which can be used when you want to make sure that each message is processed only once. The HBase idempotent repository is configured with a table, a column family and a column qualifier and will create a row in that table for each message. HBaseConfiguration configuration = HBaseConfiguration.create(); HBaseIdempotentRepository repository = new HBaseIdempotentRepository(configuration, tableName, family, qualifier); from("direct:in") .idempotentConsumer(header("messageId"), repository) .to("log:out"); 143.8. HBase Mapping It was mentioned above that the default mapping strategies are header and body mapping. Below you can find some detailed examples of how each mapping strategy works. 143.8.1. HBase Header mapping Examples The header mapping is the default mapping. To put the value "myvalue" into HBase row "myrow" and column "myfamily:myqualifier" the message should contain the following headers: Header Value CamelHBaseRowId myrow CamelHBaseFamily myfamily CamelHBaseQualifier myqualifier CamelHBaseValue myvalue To put more values for different columns and / or different rows you can specify additional headers suffixed with the index of the headers, for example: Header Value CamelHBaseRowId myrow CamelHBaseFamily myfamily CamelHBaseQualifier myqualifier CamelHBaseValue myvalue CamelHBaseRowId2 myrow2 CamelHBaseFamily2 myfamily CamelHBaseQualifier2 myqualifier CamelHBaseValue2 myvalue2 In the case of retrieval operations such as get or scan you can also specify for each column the type that you want the data to be converted to. For example: Header Value CamelHBaseFamily myfamily CamelHBaseQualifier myqualifier CamelHBaseValueType Long Please note that in order to avoid boilerplate headers that are considered constant for all messages, you can also specify them as part of the endpoint uri, as you will see below. 143.8.2. Body mapping Examples In order to use the body mapping strategy you will have to specify the option mappingStrategyName as part of the uri, for example: hbase:mytable?mappingStrategyName=body To use the body mapping strategy the body needs to contain an instance of org.apache.camel.component.hbase.model.HBaseData. You can construct it as shown below: HBaseData data = new HBaseData(); HBaseRow row = new HBaseRow(); row.setId("myRowId"); HBaseCell cell = new HBaseCell(); cell.setFamily("myfamily"); cell.setQualifier("myqualifier"); cell.setValue("myValue"); row.getCells().add(cell); data.addRows().add(row); The object above can be used for example in a put operation and will result in creating or updating the row with id myRowId and adding the value myValue to the column myfamily:myqualifier. The body mapping strategy might not seem very appealing at first. The advantage it has over the header mapping strategy is that the HBaseData object can be easily converted to or from xml/json. 143.9. See also Polling Consumer Apache HBase
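As a supplementary sketch, the put route shown earlier in XML can also be written with the Java DSL. The class below is illustrative only: it uses the simple language instead of the EL expressions from the XML examples, and the table, family, qualifier, and body property names are the same assumed example values (mytable, myfamily, myqualifier, a body with id and value properties), not fixed names required by the component.

import org.apache.camel.builder.RouteBuilder;

public class HBasePutRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:in")
            // Map the body properties to the HBase row id and cell value headers.
            .setHeader("CamelHBaseRowId", simple("${body.id}"))
            .setHeader("CamelHBaseValue", simple("${body.value}"))
            // Store the value in the myfamily:myqualifier column of the given row.
            .to("hbase:mytable?operation=CamelHBasePut&family=myfamily&qualifier=myqualifier");
    }
}

The same pattern applies to the get, delete, and scan operations by changing the operation parameter and the headers accordingly.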
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hbase</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "<bean id=\"hbase\" class=\"org.apache.camel.component.hbase.HBaseComponent\"> <property name=\"configuration\" ref=\"config\"/> </bean>", "hbase://table[?options]", "hbase:tableName", "<route> <from uri=\"direct:in\"/> <!-- Set the HBase Row --> <setHeader headerName=\"CamelHBaseRowId\"> <el>USD{in.body.id}</el> </setHeader> <!-- Set the HBase Value --> <setHeader headerName=\"CamelHBaseValue\"> <el>USD{in.body.value}</el> </setHeader> <to uri=\"hbase:mytable?operation=CamelHBasePut&amp;family=myfamily&amp;qualifier=myqualifier\"/> </route>", "<route> <from uri=\"direct:in\"/> <!-- Set the HBase Row 1st column --> <setHeader headerName=\"CamelHBaseRowId\"> <el>USD{in.body.id}</el> </setHeader> <!-- Set the HBase Row 2nd column --> <setHeader headerName=\"CamelHBaseRowId2\"> <el>USD{in.body.id}</el> </setHeader> <!-- Set the HBase Value for 1st column --> <setHeader headerName=\"CamelHBaseValue\"> <el>USD{in.body.value}</el> </setHeader> <!-- Set the HBase Value for 2nd column --> <setHeader headerName=\"CamelHBaseValue2\"> <el>USD{in.body.othervalue}</el> </setHeader> <to uri=\"hbase:mytable?operation=CamelHBasePut&amp;family=myfamily&amp;qualifier=myqualifier&amp;family2=myfamily&amp;qualifier2=myqualifier2\"/> </route>", "<route> <from uri=\"direct:in\"/> <!-- Set the HBase Row of the Get --> <setHeader headerName=\"CamelHBaseRowId\"> <el>USD{in.body.id}</el> </setHeader> <to uri=\"hbase:mytable?operation=CamelHBaseGet&amp;family=myfamily&amp;qualifier=myqualifier&amp;valueType=java.lang.Long\"/> <to uri=\"log:out\"/> </route>", "<route> <from uri=\"direct:in\"/> <!-- Set the HBase Row of the Get --> <setHeader headerName=\"CamelHBaseRowId\"> <el>USD{in.body.id}</el> </setHeader> <to uri=\"hbase:mytable?operation=CamelHBaseDelete\"/> </route>", "<route> <from uri=\"direct:in\"/> <to uri=\"hbase:mytable?operation=CamelHBaseScan&amp;family=myfamily&amp;qualifier=myqualifier&amp;valueType=java.lang.Long&amp;rowType=java.lang.String\"/> <to uri=\"log:out\"/> </route>", "<route> <from uri=\"direct:scan\"/> <!-- Set the Criteria --> <setHeader headerName=\"CamelHBaseFamily\"> <constant>name</constant> </setHeader> <setHeader headerName=\"CamelHBaseQualifier\"> <constant>first</constant> </setHeader> <setHeader headerName=\"CamelHBaseValue\"> <el>in.body.firstName</el> </setHeader> <setHeader headerName=\"CamelHBaseFamily2\"> <constant>name</constant> </setHeader> <setHeader headerName=\"CamelHBaseQualifier2\"> <constant>last</constant> </setHeader> <setHeader headerName=\"CamelHBaseValue2\"> <el>in.body.lastName</el> </setHeader> <!-- Set additional fields that you want to be return by skipping value --> <setHeader headerName=\"CamelHBaseFamily3\"> <constant>address</constant> </setHeader> <setHeader headerName=\"CamelHBaseQualifier3\"> <constant>country</constant> </setHeader> <to uri=\"hbase:mytable?operation=CamelHBaseScan&amp;filters=#myFilterList\"/> </route> <bean id=\"myFilters\" class=\"java.util.ArrayList\"> <constructor-arg> <list> <bean class=\"org.apache.camel.component.hbase.filters.ModelAwareColumnMatchingFilter\"/> </list> </constructor-arg> </bean>", "hbase://table[?options]", "hbase:mutable?family=name&qualifer=first&valueType=java.lang.String&family=address&qualifer=number&valueType2=java.lang.Integer&rowType=java.lang.Long", "HBaseConfiguration configuration = HBaseConfiguration.create(); 
HBaseIdempotentRepository repository = new HBaseIdempotentRepository(configuration, tableName, family, qualifier); from(\"direct:in\") .idempotentConsumer(header(\"messageId\"), repository) .to(\"log:out\");", "hbase:mytable?mappingStrategyName=body", "HBaseData data = new HBaseData(); HBaseRow row = new HBaseRow(); row.setId(\"myRowId\"); HBaseCell cell = new HBaseCell(); cell.setFamily(\"myfamily\"); cell.setQualifier(\"myqualifier\"); cell.setValue(\"myValue\"); row.getCells().add(cell); data.addRows().add(row);" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hbase-component
Chapter 2. 3scale API Management operations and scaling
Chapter 2. 3scale API Management operations and scaling Note This document is not intended for local installations on laptops or similar end user equipment. This section describes operations and scaling tasks of a Red Hat 3scale API Management 2.15 installation. Prerequisites An installed and initially configured 3scale On-premises instance on a supported OpenShift version . To carry out 3scale operations and scaling tasks, perform the steps outlined in the following sections: Redeploying APIcast Scaling up 3scale API Management on-premise Operations troubleshooting 2.1. Redeploying APIcast You can test and promote system changes through the 3scale Admin Portal. Prerequisites A deployed instance of 3scale On-premises. You have chosen your APIcast deployment method. By default, APIcast deployments on OpenShift, both embedded and on other OpenShift clusters, are configured to allow you to publish changes to your staging and production gateways through the 3scale Admin Portal. To redeploy APIcast on OpenShift: Procedure Make system changes. In the Admin Portal, deploy to staging and test. In the Admin Portal, promote to production. By default, APIcast retrieves and publishes the promoted update once every 5 minutes. If you are using APIcast on the Docker containerized environment or a native installation, configure your staging and production gateways, and indicate how often the gateway retrieves published changes. After you have configured your APIcast gateways, you can redeploy APIcast through the 3scale Admin Portal. To redeploy APIcast on the Docker containerized environment or a native installations: Procedure Configure your APIcast gateway and connect it to 3scale On-premises. Make system changes. In the Admin Portal, deploy to staging and test. In the Admin Portal, promote to production. APIcast retrieves and publishes the promoted update at the configured frequency. 2.2. Scaling up 3scale API Management On-premise As your APIcast deployment grows, you may need to increase the amount of storage available. How you scale up storage depends on which type of file system you are using for your persistent storage. If you are using a network file system (NFS), you can scale up your persistent volume (PV) using this command: USD oc edit pv <pv_name> If you are using any other storage method, you must scale up your persistent volume manually using one of the methods listed in the following sections. 2.2.1. Method 1: Backing up and swapping persistent volumes Procedure Back up the data on your existing persistent volume. Create and attach a target persistent volume, scaled for your new size requirements. Create a pre-bound persistent volume claim, specify: The size of your new PVC (PersistentVolumeClaim) and the persistent volume name using the volumeName field. Restore data from your backup onto your newly created PV. Modify your deployment configuration with the name of your new PV: USD oc edit deployment/system-app Verify your new PV is configured and working correctly. Delete your PVC to release its claimed resources. 2.2.2. Method 2: Backing up and redeploying 3scale API Management Procedure Back up the data on your existing persistent volume. Shut down your 3scale pods. Create and attach a target persistent volume, scaled for your new size requirements. Restore data from your backup onto your newly created PV. Create a pre-bound persistent volume claim. Specify: The size of your new PVC The persistent volume name using the volumeName field. Deploy your amp.yml . 
Verify your new PV is configured and working correctly. Delete your PVC to release its claimed resources. 2.2.3. Configuring 3scale API Management on-premise deployments The key deployment configurations to be scaled for 3scale are: APIcast production Backend listener Backend worker 2.2.3.1. Scaling via the OCP Via OpenShift Container Platform (OCP) using an APIManager CR, you can scale the deployment configuration either up or down. To scale a particular deployment configuration, use the following: Scale up an APIcast production deployment configuration with the following APIManager CR: apiVersion: apps.3scale.net/v1alpha1 kind: APIManager metadata: name: example-apimanager spec: apicast: productionSpec: replicas: X Scale up the backend listener, backend worker, and backend cron components of your deployment configuration with the following APIManager CR: apiVersion: apps.3scale.net/v1alpha1 kind: APIManager metadata: name: example-apimanager spec: backend: listenerSpec: replicas: X workerSpec: replicas: Y cronSpec: replicas: Z Set the appropriate environment variable to the desired number of processes per pod. PUMA_WORKERS for backend-listener pods: USD oc set env deployment/backend-listener --overwrite PUMA_WORKERS=<number_of_processes> UNICORN_WORKERS for system-app pods: USD oc set env deployment/system-app --overwrite UNICORN_WORKERS=<number_of_processes> 2.2.3.2. Vertical and horizontal hardware scaling You can increase the performance of your 3scale deployment on OpenShift by adding resources. You can add more compute nodes as pods to your OpenShift cluster, as horizontal scaling or you can allocate more resources to existing compute nodes as vertical scaling. Horizontal scaling You can add more compute nodes as pods to your OpenShift. If the additional compute nodes match the existing nodes in your cluster, you do not have to reconfigure any environment variables. Vertical scaling You can allocate more resources to existing compute nodes. If you allocate more resources, you must add additional processes to your pods to increase performance. Note Avoid the use of computing nodes with different specifications and configurations in your 3scale deployment. 2.2.3.3. Scaling up routers As traffic increases, ensure your Red Hat OCP routers can adequately handle requests. If your routers are limiting the throughput of your requests, you must scale up your router nodes. 2.3. Operations troubleshooting This section explains how to configure 3scale audit logging to display on OpenShift, and how to access 3scale logs and job queues on OpenShift. 2.3.1. Configuring 3scale API Management audit logging on OpenShift This enables all logs to be in one place for querying by Elasticsearch, Fluentd, and Kibana (EFK) logging tools. These tools provide increased visibility on changes made to your 3scale configuration, who made these changes, and when. For example, this includes changes to billing, application plans, application programming interface (API) configuration, and more. Prerequisites A 3scale 2.15 deployment. Procedure Configure audit logging to stdout to forward all application logs to standard OpenShift pod logs. Some considerations: By default, audit logging to stdout is disabled when 3scale is deployed on-premises; you need to configure this feature to have it fully functional. Audit logging to stdout is not available for 3scale hosted. 2.3.2. Enabling audit logging 3scale uses a features.yml configuration file to enable some global features. 
To enable audit logging to stdout , you must mount this file from a ConfigMap to replace the default file. The OpenShift pods that depend on features.yml are system-app and system-sidekiq . Prerequisites You must have administrator access for the 3scale project. Procedure Enter the following command to enable audit logging to stdout : USD oc patch configmap system -p '{"data": {"features.yml": "features: &default\n logging:\n audits_to_stdout: true\n\nproduction:\n <<: *default\n"}}' Export the following environment variable: USD export PATCH_SYSTEM_VOLUMES='{"spec":{"template":{"spec":{"volumes":[{"emptyDir":{"medium":"Memory"},"name":"system-tmp"},{"configMap":{"items":[{"key":"zync.yml","path":"zync.yml"},{"key":"rolling_updates.yml","path":"rolling_updates.yml"},{"key":"service_discovery.yml","path":"service_discovery.yml"},{"key":"features.yml","path":"features.yml"}],"name":"system"},"name":"system-config"}]}}}}' Enter the following command to apply the updated deployment configuration to the relevant OpenShift pods: USD oc patch deployment system-app -p USDPATCH_SYSTEM_VOLUMES USD oc patch deployment system-sidekiq -p USDPATCH_SYSTEM_VOLUMES 2.3.3. Configuring logging for Red Hat OpenShift When you have enabled audit logging to forward 3scale application logs to OpenShift, you can use logging tools to monitor your 3scale applications. For details on configuring logging on Red Hat OpenShift, see the following: Understanding the logging subsystem for Red Hat OpenShift 2.3.4. Accessing your logs Each component's deployment configuration contains logs for access and exceptions. If you encounter issues with your deployment, check these logs for details. Follow these steps to access logs in 3scale: Procedure Find the ID of the pod you want logs for: USD oc get pods Enter oc logs and the ID of your chosen pod: USD oc logs <pod> The system pod has two containers, each with a separate log. To access a container's log, specify the --container parameter with the system-provider and system-developer pods: USD oc logs <pod> --container=system-provider USD oc logs <pod> --container=system-developer 2.3.5. Checking job queues Job queues contain logs of information sent from the system-sidekiq pods. Use these logs to check if your cluster is processing data. You can query the logs using the OpenShift CLI: USD oc get jobs USD oc logs <job> 2.3.6. Preventing monotonic growth To prevent monotonic growth, 3scale schedules by default, automatic purging of the following tables: user_sessions Clean up is triggered once a week and deletes records older than two weeks. audits Clean up is triggered once a day and deletes records older than three months. log_entries Clean up triggered once a day and deletes records older than six months. event_store_events Clean up is triggered once a week and deletes records older than a week. With the exception of the above listed tables, the following table requires manual purging by the database administrator: alerts Table 2.1. SQL purging commands Database type SQL command MySQL DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL 14 DAY; PostgreSQL DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL '14 day'; Oracle DELETE FROM alerts WHERE timestamp <= TRUNC(SYSDATE) - 14; Additional resources OCP documentation Automatically scaling pods Adding Compute Nodes Optimizing Routing
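As a supplementary sketch for the scaling section above: when the deployment is managed by the APIManager custom resource, the replica counts can also be changed on the custom resource itself instead of editing the generated deployments, and the operator reconciles the change. The resource name example-apimanager below is the example name used earlier in this chapter, and the replica count is arbitrary; adjust both for your project.

# Illustrative only; adjust the resource name, namespace, and replica count.
oc patch apimanager example-apimanager --type=merge -p '{"spec":{"backend":{"workerSpec":{"replicas":3}}}}'

The same merge-patch pattern works for the apicast productionSpec and backend listenerSpec replica fields shown in the earlier CR examples.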
[ "oc edit pv <pv_name>", "oc edit deployment/system-app", "apiVersion: apps.3scale.net/v1alpha1 kind: APIManager metadata: name: example-apimanager spec: apicast: productionSpec: replicas: X", "apiVersion: apps.3scale.net/v1alpha1 kind: APIManager metadata: name: example-apimanager spec: backend: listenerSpec: replicas: X workerSpec: replicas: Y cronSpec: replicas: Z", "oc set env deployment/backend-listener --overwrite PUMA_WORKERS=<number_of_processes>", "oc set env deployment/system-app --overwrite UNICORN_WORKERS=<number_of_processes>", "oc patch configmap system -p '{\"data\": {\"features.yml\": \"features: &default\\n logging:\\n audits_to_stdout: true\\n\\nproduction:\\n <<: *default\\n\"}}'", "export PATCH_SYSTEM_VOLUMES='{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"emptyDir\":{\"medium\":\"Memory\"},\"name\":\"system-tmp\"},{\"configMap\":{\"items\":[{\"key\":\"zync.yml\",\"path\":\"zync.yml\"},{\"key\":\"rolling_updates.yml\",\"path\":\"rolling_updates.yml\"},{\"key\":\"service_discovery.yml\",\"path\":\"service_discovery.yml\"},{\"key\":\"features.yml\",\"path\":\"features.yml\"}],\"name\":\"system\"},\"name\":\"system-config\"}]}}}}'", "oc patch deployment system-app -p USDPATCH_SYSTEM_VOLUMES oc patch deployment system-sidekiq -p USDPATCH_SYSTEM_VOLUMES", "oc get pods", "oc logs <pod>", "oc logs <pod> --container=system-provider oc logs <pod> --container=system-developer", "oc get jobs", "oc logs <job>", "DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL 14 DAY;", "DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL '14 day';", "DELETE FROM alerts WHERE timestamp <= TRUNC(SYSDATE) - 14;" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/operating_red_hat_3scale_api_management/threescale-operations-scaling
Chapter 4. Deployment workflow
Chapter 4. Deployment workflow The workflow for deploying Red Hat Hyperconverged Infrastructure for Virtualization is as follows: Check requirements. Verify that your planned deployment meets support requirements: Requirements , and fill in the installation checklist so that you can refer to it during the deployment process. Install operating systems. Install an operating system on each physical machine that will act as a hyperconverged host: Installing hyperconverged hosts . (Optional) Install an operating system on each physical or virtual machine that will act as a Network-Bound Disk Encryption (NBDE) key server: Installing NBDE key servers . Configure authentication between hyperconverged hosts. Configure key-based SSH authentication without a password to enable automated configuration of the hosts: Configure key-based SSH authentication . (Optional) Configure disk encryption. Configure NBDE key servers . Configure hyperconverged hosts as NBDE clients . Configure the hyperconverged cluster: Configure Red Hat Gluster Storage on hyperconverged hosts using the Web Console . Deploy the Hosted Engine virtual machine using the web console . Configure hyperconverged nodes using the RHV Administration Portal .
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-deploy-workflow
Part VIII. System Backup and Recovery
Part VIII. System Backup and Recovery This part describes how to use the Relax-and-Recover (ReaR) disaster recovery and system migration utility.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/part-system_backup_and_recovery
Red Hat OpenStack Platform Hardware Bare Metal Certification Policy Guide
Red Hat OpenStack Platform Hardware Bare Metal Certification Policy Guide Red Hat Hardware Certification 2025 For Use with Red Hat OpenStack Platform 17 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openstack_platform_hardware_bare_metal_certification_policy_guide/index
Chapter 5. Advisories related to this release
Chapter 5. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2023:4157 RHSA-2023:4158 RHSA-2023:4161 RHSA-2023:4162 RHSA-2023:4163 RHSA-2023:4164 RHSA-2023:4165 RHSA-2023:4175 RHSA-2023:4208 RHSA-2023:4233 Revised on 2024-05-09 16:47:39 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.20/rn-openjdk11020-advisory_openjdk
Chapter 1. OpenShift Container Platform storage overview
Chapter 1. OpenShift Container Platform storage overview OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. 1.1. Glossary of common terms for OpenShift Container Platform storage This glossary defines common terms that are used in the storage content. Access modes Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are examples of access modes: ReadWriteOnce (RWO) ReadOnlyMany (ROX) ReadWriteMany (RWX) ReadWriteOncePod (RWOP) Cinder The Block Storage service for Red Hat OpenStack Platform (RHOSP) which manages the administration, security, and scheduling of all volumes. Config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container Storage Interface (CSI) An API specification for the management of container storage across different container orchestration (CO) systems. Dynamic Provisioning The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage. Ephemeral storage Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Fiber channel A networking technology that is used to transfer data among data centers, computer servers, switches and storage. FlexVolume FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plugin path on each node and in some cases the control plane nodes. fsGroup The fsGroup defines a file system group ID of a pod. iSCSI Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol-based storage networking standard for linking data storage facilities. An iSCSI volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. KMS key The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. You can use the KMS key to encrypt, decrypt, and re-encrypt data. Local volumes A local volume represents a mounted local storage device such as a disk, partition or directory. NFS A Network File System (NFS) that allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. OpenShift Data Foundation A provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. Persistent storage Pods and containers can require permanent storage for their operation. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster.
Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volumes (PV) OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volume claims (PVCs) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. Pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Reclaim policy A policy that tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Role-based access control (RBAC) Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. Stateless applications A stateless application is an application program that does not save client data generated in one session for use in the session with that client. Stateful applications A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use a persistent disk storage. You can use the Statefulset object in OpenShift Container Platform to manage the deployment and scaling of a set of Pods, and provides guarantee about the ordering and uniqueness of these Pods. Static provisioning A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption. Storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Storage class A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, arbitrary policies determined by the cluster administrators. VMware vSphere's Virtual Machine Disk (VMDK) volumes Virtual Machine Disk (VMDK) is a file format that describes containers for virtual hard disk drives that is used in virtual machines. 1.2. Storage types OpenShift Container Platform storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.2.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, see Understanding ephemeral storage . 1.2.2. Persistent storage Stateful applications deployed in containers require persistent storage. OpenShift Container Platform uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. 
For more information about persistent storage overview, configuration, and lifecycle, see Understanding persistent storage . 1.3. Container Storage Interface (CSI) CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI) . 1.4. Dynamic Provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning .
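To make the glossary and provisioning concepts above concrete, the following is a minimal sketch of a PersistentVolumeClaim that requests dynamically provisioned block storage. The claim name, namespace, size, and the ocs-storagecluster-ceph-rbd storage class are illustrative assumptions for this example, not requirements of your cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc            # hypothetical claim name
  namespace: my-app            # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce            # RWO: the volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi            # requested capacity
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed storage class; use one that exists in your cluster

When a pod references example-pvc in a volume of type persistentVolumeClaim, the storage class provisioner creates a matching persistent volume on demand, which is the dynamic provisioning flow described above.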
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage/storage-overview
Deploying OpenShift Data Foundation using IBM Z
Deploying OpenShift Data Foundation using IBM Z Red Hat OpenShift Data Foundation 4.17 Instructions on deploying Red Hat OpenShift Data Foundation to use local storage on IBM Z Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on IBM Z. Note While this document refers only to IBM Z, all information in it also applies to IBM(R) LinuxONE. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Z clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process for your environment: Internal Attached Devices mode Deploy using local storage devices External mode Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services, and all applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that your resource requirements are met. See requirements for installing OpenShift Data Foundation using local storage devices . On the external key management system (KMS), when the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS . Ensure that you are using signed certificates on your Vault servers. After you have addressed the above, follow these steps in the order given: Install the Red Hat OpenShift Data Foundation Operator . Install Local Storage Operator . Find the available storage devices . Create the OpenShift Data Foundation cluster service on IBM Z . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. The devices you use must be empty; the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . 1.2.
Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: Chapter 2. Deploy OpenShift Data Foundation using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. Follow this deployment method to use local storage to back persistent volumes for your OpenShift Container Platform applications. Use this section to deploy OpenShift Data Foundation on IBM Z infrastructure where OpenShift Container Platform is already installed. 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. 
Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.3. Finding available storage devices (optional) This step is additional information and can be skipped as the disks are automatically discovered during storage cluster creation. Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating Persistent Volumes (PVs) for IBM Z. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the unique by-id device name for each available raw block device. Example output: In this example, for bmworker01 , the available local device is sdb . Identify the unique ID for each of the devices selected in Step 2. In the above example, the ID for the local device sdb is shown in the example output. Repeat the above step to identify the device ID for all the other nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.4. Enabling DASD devices If you are using DASD devices, you must enable them before creating an OpenShift Data Foundation cluster on IBM Z. Once the DASD devices are available to z/VM guests, complete the following steps from the compute or infrastructure node on which an OpenShift Data Foundation storage node is being installed. Procedure To enable the DASD device, run the following command: 1 For <device_bus_id>, specify the device bus ID of the DASD device. For example, 0.0.b100 . To verify the status of the DASD device, you can use the lsdasd and lsblk commands. To low-level format the device and specify the disk name, run the following command: 1 For <device_name>, specify the disk name. For example, dasdb . Important The use of quick-formatted Extent Space Efficient (ESE) DASDs is not supported on OpenShift Data Foundation. If you are using ESE DASDs, make sure to disable quick-formatting with the --mode=full parameter. To auto-create one partition using the whole disk, run the following command: 1 For <device_name>, enter the disk name that you specified in the previous step. For example, dasdb . Once these steps are completed, the device is available during OpenShift Data Foundation deployment as /dev/dasdb1 . Important During LocalVolumeSet creation, make sure to select only the Part option as device type.
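The DASD enablement procedure above references its commands separately; a consolidated sketch of the sequence is shown here, using the example device bus ID 0.0.b100 and disk name dasdb from the text. Substitute the values for your own device.

# Bring the DASD online
sudo chzdev -e 0.0.b100

# Verify that the device is visible
lsdasd
lsblk

# Low-level format the device; --mode=full disables quick-formatting (required when using ESE DASDs)
sudo dasdfmt /dev/dasdb -b 4096 -p --mode=full

# Auto-create one partition that uses the whole disk
sudo fdasd -a /dev/dasdb

After these steps, the partition is available as /dev/dasdb1 and can be consumed by a LocalVolumeSet with the Part device type.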
Additional resources For details on the commands, see Commands for Linux on IBM Z in IBM documentation. 2.5. Creating OpenShift Data Foundation cluster on IBM Z Use this procedure to create an OpenShift Data Foundation cluster on IBM Z. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have at least three worker nodes with the same storage type and size attached to each node (for example, 200 GB) to use local storage devices on IBM Z or IBM(R) LinuxONE. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select the Create a new StorageClass using the local storage devices for Backing storage type option. Select Full Deployment for the Deployment type option. Click Next . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVME . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the drop-down list. Note For a multi-path device, select the Mpath option from the drop-down exclusively. For a DASD-based cluster, ensure that only the Part option is included in the Device Type and remove the 'Disk' option. Disk Size Set a minimum size of 100 GB for the device and the maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class.
You can check the box to select Taint nodes. Click Next . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Choose one or both of the following Encryption level options: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volumes (block only) using an encryption-enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide CA Certificate , Client Certificate and Client Private Key . Click Save . Select Default (SDN) as Multus is not yet supported on OpenShift Data Foundation on IBM Z. Click Next . In the Data Protection page, if you are configuring the Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next . In the Review and create page: Review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. Chapter 3. Verifying OpenShift Data Foundation deployment for Internal-attached devices mode Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set the filter for Running and Completed pods to verify that the following pods are in the Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Block and File dashboard under the Overview tab, verify that both Storage Cluster and Data Resiliency have a green tick mark. In the Details card , verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 5. Deploying standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator. Creating standalone Multicloud Object Gateway. 5.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and select it. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator by using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 5.3. Creating standalone Multicloud Object Gateway on IBM Z You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that the Local Storage Operator is installed. To identify storage devices on each node, see Finding available storage devices . Procedure Log into the OpenShift Web Console. In the openshift-local-storage namespace, click Operators -> Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on the YAML view to configure the Local Volume. Define a LocalVolume custom resource for filesystem PVs using the following YAML. This definition selects the sda local device from the worker-0 , worker-1 , and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be the same on all the worker nodes. You can also specify more than one devicePath. Click Create . In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage .
Click the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option for Backing storage type . Select the Storage Class that you used while installing LocalVolume. Click Next . Optional: In the Security page, select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier generated above to be used for encryption and decryption. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in the Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects.
Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) noobaa-default-backing-store-noobaa-pod-* (1 pod on any storage node) Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 7. Uninstalling OpenShift Data Foundation 7.1. Uninstalling OpenShift Data Foundation in Internal-attached devices mode Use the steps in this section to uninstall OpenShift Data Foundation. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful The following table provides information on the different values that can be used with these annotations: Table 7.1. uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pause the uninstall process until the administrator/user removes the Persistent Volume Claims (PVCs) and Object Bucket Claims (OBCs) mode forced No Rook and NooBaa proceed with the uninstall even if the PVCs/OBCs provisioned using Rook and NooBaa exist respectively Edit the value of the annotation to change the cleanup policy or the uninstall mode.
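For example, the following commands change the two annotations, setting the cleanup policy to retain and the uninstall mode to forced :

# Keep the physical drives and the DataDirHostPath after uninstall
oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy="retain" --overwrite

# Proceed with the uninstall even if PVCs or OBCs still exist
oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/mode="forced" --overwrite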
Expected output for both commands: Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation. If any custom resources (such as custom storage classes, cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources which consumed them. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces. From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation. <VOLUME-SNAPSHOT-NAME> Is the name of the volume snapshot <NAMESPACE> Is the project namespace Delete PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits till all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you want to delete the Storage Cluster without deleting the PVCs, you can set the uninstall mode annotation to forced and skip this step. Doing so results in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. Removing OpenShift Container Platform registry from OpenShift Data Foundation Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. Removing the cluster logging operator from OpenShift Data Foundation Delete the other PVCs and OBCs provisioned using OpenShift Data Foundation. Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs that are used internally by OpenShift Data Foundation. Note Omit RGW_PROVISIONER for cloud platforms. Delete the OBCs. <obc-name> Is the name of the OBC <project-name> Is the name of the project Delete the PVCs. <pvc-name> Is the name of the PVC <project-name> Is the name of the project Note Ensure that you have removed any custom backing stores, bucket classes, etc., created in the cluster. Delete the Storage System object and wait for the removal of the associated resources. Check the cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy was set to delete (default) and ensure that their status is Completed . Example output: Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default). If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSDs on all the OpenShift Data Foundation nodes. Create a debug pod and chroot to the host on the storage node. <node-name> Is the name of the node Get Device names and make note of the OpenShift Data Foundation devices. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find PID of the process which was stuck. Terminate the process using kill command. 
<PID> Is the process ID Verify that the device name is removed. Delete the namespace and wait till the deletion is complete. You need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Delete local storage operator configurations if you have deployed OpenShift Data Foundation using local storage devices. See Removing local storage operator configurations . Unlabel the storage nodes. Remove the OpenShift Data Foundation taint if the nodes were tainted. Confirm that all the Persistent Volumes (PVs) provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it. <pv-name> Is the name of the PV Remove the CustomResourceDefinitions . To ensure that OpenShift Data Foundation is uninstalled completely, on the OpenShift Container Platform Web Console, click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 7.1.1. Removing local storage operator configurations Use the instructions in this section only if you have deployed OpenShift Data Foundation using local storage devices. Note For OpenShift Data Foundation deployments only using localvolume resources, go directly to step 8. Procedure Identify the LocalVolumeSet and the corresponding StorageClassName being used by OpenShift Data Foundation. Set the variable SC to the StorageClass providing the LocalVolumeSet . List and note the devices to be cleaned up later. To list the device IDs of the disks, follow the procedure in Find the available storage devices . Example output: Delete the LocalVolumeSet . Delete the local storage PVs for the given StorageClassName . Delete the StorageClassName . Delete the symlinks created by the LocalVolumeSet . Delete the LocalVolumeDiscovery . Remove the LocalVolume resources (if any). Use the following steps to remove the LocalVolume resources that were used to provision PVs in the current or previous OpenShift Data Foundation version. Also, ensure that these resources are not being used by other tenants on the cluster. For each of the local volumes, do the following: Identify the LocalVolume and the corresponding StorageClassName being used by OpenShift Data Foundation. Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass . For example: List and note the devices to be cleaned up later. Example output: Delete the local volume resource. Delete the remaining PVs and StorageClasses if they exist. Clean up the artifacts from the storage nodes for that resource. Example output: Wipe the disks for each of the local volumesets or local volumes listed in steps 1 and 8 respectively so that they can be reused. List the storage nodes. Example output: Obtain the node console and execute the chroot /host command when the prompt appears. Store the disk paths in the DISKS variable within quotes. For the list of disk paths, see step 3 and step 8.c for local volumeset and local volume respectively. Example output: Run sgdisk --zap-all on all the disks. Example output: Exit the shell and repeat for the other nodes. Delete the openshift-local-storage namespace and wait till the deletion is complete.
You will need to switch to another project if the openshift-local-storage namespace is the active project. For example: The project is deleted if the following command returns a NotFound error. 7.2. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use the OpenShift Container Platform monitoring stack. For more information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Example output: Edit the monitoring configmap . Remove any config sections that reference the OpenShift Data Foundation storage classes as shown in the following example and save it. Before editing After editing In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs. Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. <pvc-name> Is the name of the PVC 7.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see Image registry . The Persistent Volume Claims (PVCs) that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry must have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. <pvc-name> Is the name of the PVC 7.4. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace. The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs. <pvc-name> Is the name of the PVC
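As a brief sketch of the cluster logging cleanup described in section 7.4, the sequence below removes the ClusterLogging instance and then deletes the remaining PVCs. The placeholder <pvc-name> must be replaced with a PVC name from your openshift-logging namespace, and listing the PVCs first is an assumption added here only for convenience.

# Remove the ClusterLogging instance
oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

# List the PVCs that remain in the logging namespace
oc get pvc -n openshift-logging

# Delete each remaining PVC, replacing <pvc-name> with a name from the listing above
oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m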
[ "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc annotate namespace openshift-storage openshift.io/node-selector=", "oc get nodes -l=cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION bmworker01 Ready worker 6h45m v1.16.2 bmworker02 Ready worker 6h45m v1.16.2 bmworker03 Ready worker 6h45m v1.16.2", "oc debug node/<node name>", "oc debug node/bmworker01 Starting pod/bmworker01-debug To use host binaries, run `chroot /host` Pod IP: 10.0.135.71 If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 500G 0 loop sda 8:0 0 120G 0 disk |-sda1 8:1 0 384M 0 part /boot `-sda4 8:4 0 119.6G 0 part `-coreos-luks-root-nocrypt 253:0 0 119.6G 0 dm /sysroot sdb 8:16 0 500G 0 disk", "sh-4.4#ls -l /dev/disk/by-id/ | grep sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-360050763808104bc2800000000000259 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-SIBM_2145_00e020412f0aXX00 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-0x60050763808104bc2800000000000259 -> ../../sdb", "scsi-0x60050763808104bc2800000000000259", "sudo chzdev -e <device_bus_id> 1", "sudo dasdfmt /dev/<device_name> -b 4096 -p --mode=full 1", "sudo fdasd -a /dev/<device_name> 1", "spec: flexibleScaling: true [...] status: failureDomain: host", "oc annotate namespace openshift-storage openshift.io/node-selector=", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem", "oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy=\"retain\" --overwrite", "oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/mode=\"forced\" --overwrite", "storagecluster.ocs.openshift.io/ocs-storagecluster annotated", "oc get volumesnapshot --all-namespaces", "oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>", "#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done", "oc 
delete obc <obc-name> -n <project-name>", "oc delete pvc <pvc-name> -n <project-name>", "oc delete -n openshift-storage storagesystem --all --wait=true", "oc get pods -n openshift-storage | grep -i cleanup", "NAME READY STATUS RESTARTS AGE cluster-cleanup-job-<xx> 0/1 Completed 0 8m35s cluster-cleanup-job-<yy> 0/1 Completed 0 8m35s cluster-cleanup-job-<zz> 0/1 Completed 0 8m35s", "for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host ls -l /var/lib/rook; done", "oc debug node/ <node-name>", "chroot /host", "dmsetup ls", "ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)", "cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc project default", "oc delete project openshift-storage --wait=true --timeout=5m", "oc get project openshift-storage", "oc label nodes --all cluster.ocs.openshift.io/openshift-storage-", "oc label nodes --all topology.rook.io/rack-", "oc adm taint nodes --all node.ocs.openshift.io/storage-", "oc get pv", "oc delete pv <pv-name>", "oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m", "oc get localvolumesets.local.storage.openshift.io -n openshift-local-storage", "export SC=\"<StorageClassName>\"", "/dev/disk/by-id/scsi-360050763808104bc28000000000000eb /dev/disk/by-id/scsi-360050763808104bc28000000000000ef /dev/disk/by-id/scsi-360050763808104bc28000000000000f3", "oc delete localvolumesets.local.storage.openshift.io <name-of-volumeset> -n openshift-local-storage", "oc get pv | grep USDSC | awk '{print USD1}'| xargs oc delete pv", "oc delete sc USDSC", "[[ ! -z USDSC ]] && for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host rm -rfv /mnt/local-storage/USD{SC}/; done", "oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage", "oc get localvolume.local.storage.openshift.io -n openshift-local-storage", "LV=local-block SC=localblock", "oc get localvolume -n openshift-local-storage USDLV -o jsonpath='{ .spec.storageClassDevices[].devicePaths[] }{\"\\n\"}'", "/dev/sdb /dev/sdc /dev/sdd /dev/sde", "oc delete localvolume -n openshift-local-storage --wait=true USDLV", "oc delete pv -l storage.openshift.com/local-volume-owner-name=USD{LV} --wait --timeout=5m oc delete storageclass USDSC --wait --timeout=5m", "[[ ! 
-z USDSC ]] && for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host rm -rfv /mnt/local-storage/USD{SC}/; done", "Starting pod/node-xxx-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod Starting pod/node-yyy-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod Starting pod/node-zzz-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod", "get nodes -l cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION node-xxx Ready worker 4h45m v1.18.3+6c42de8 node-yyy Ready worker 4h46m v1.18.3+6c42de8 node-zzz Ready worker 4h45m v1.18.3+6c42de8", "oc debug node/node-xxx Starting pod/node-xxx-debug ... To use host binaries, run `chroot /host` Pod IP: w.x.y.z If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host", "sh-4.4# DISKS=\"/dev/disk/by-id/scsi-360050763808104bc28000000000000eb /dev/disk/by-id/scsi-360050763808104bc28000000000000ef /dev/disk/by-id/scsi-360050763808104bc28000000000000f3 \" or sh-4.2# DISKS=\"/dev/sdb /dev/sdc /dev/sdd /dev/sde \".", "sh-4.4# for disk in USDDISKS; do sgdisk --zap-all USDdisk;done", "Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! 
You may now partition the disk using fdisk or other utilities.", "sh-4.4# exit exit sh-4.2# exit exit Removing debug pod", "oc project default oc delete project openshift-local-storage --wait=true --timeout=5m", "oc get project openshift-local-storage", "oc get pod,pvc -n openshift-monitoring", "NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", ". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .", ". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .", "oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m", "oc edit configs.imageregistry.operator.openshift.io", ". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .", ". . . storage: emptyDir: {} . . 
.", "oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m", "oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m", "oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/deploying_openshift_data_foundation_using_ibm_z/index
Chapter 12. Volume cloning
Chapter 12. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 12.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) Clone PVC . Click on the PVC that you want to clone and click Actions Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.
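The console steps above can also be reproduced from the CLI by creating a new PVC whose dataSource points at the existing PVC. The following is a minimal sketch only; the namespace, PVC names, requested size, and storage class are placeholder assumptions and must match your source PVC (same size and same storage class).

# Minimal sketch: clone an existing PVC from the CLI (names, namespace, and size are placeholders).
cat <<'EOF' | oc apply -n my-namespace -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi        # must match the size of source-pvc
EOF
# Wait for the clone to reach the Bound state before mounting it in a pod.
oc get pvc cloned-pvc -n my-namespace -w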
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_and_allocating_storage_resources/volume-cloning_rhodf
13.10. Keyboard Configuration
13.10. Keyboard Configuration To add multiple keyboard layouts to your system, select Keyboard from the Installation Summary screen. Upon saving, the keyboard layouts are immediately available in the installation program and you can switch between them by using the keyboard icon located at all times in the upper right corner of the screen. Initially, only the language you selected in the welcome screen is listed as the keyboard layout in the left pane. You can either replace the initial layout or add more layouts. However, if your language does not use ASCII characters, you might need to add a keyboard layout that does, to be able to properly set a password for an encrypted disk partition or the root user, among other things. Figure 13.7. Keyboard Configuration To add an additional layout, click the + button, select it from the list, and click Add . To delete a layout, select it and click the - button. Use the arrow buttons to arrange the layouts in order of preference. For a visual preview of the keyboard layout, select it and click the keyboard button. To test a layout, use the mouse to click inside the text box on the right. Type some text to confirm that your selection functions correctly. To test additional layouts, you can click the language selector at the top of the screen to switch them. However, it is recommended to set up a keyboard combination for switching layouts. Click the Options button at the right to open the Layout Switching Options dialog and choose a combination from the list by selecting its check box. The combination will then be displayed above the Options button. This combination applies both during the installation and on the installed system, so you must configure a combination here in order to use one after installation. You can also select more than one combination to switch between layouts. Important If you use a layout that cannot accept Latin characters, such as Russian , Red Hat recommends additionally adding the English (United States) layout and configuring a keyboard combination to switch between the two layouts. If you only select a layout without Latin characters, you might be unable to enter a valid root password and user credentials later in the installation process. This can prevent you from completing the installation. Once you have made your selection, click Done to return to the Installation Summary screen. Note To change your keyboard configuration after you have completed the installation, visit the Keyboard section of the Settings dialogue window.
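As a hedged illustration of the post-installation note above, the following shell sketch uses localectl to review and change layouts on the installed system; the layout names and switching option are examples only and are not taken from this procedure.

localectl status                      # show the current keymap and X11 layout
localectl list-keymaps | grep -i ru   # look up the exact name of an additional layout
# Example: US English plus Russian, with Alt+Shift configured as the switching combination
localectl set-x11-keymap us,ru pc105 '' grp:alt_shift_toggle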
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-keyboard-configuration-ppc
Chapter 3. Navigating the Management CLI
Chapter 3. Navigating the Management CLI Many common terminal commands are available in the management CLI, such as ls to list the contents of a node path , cd to change the node path , and pwd to print the full node path . The management CLI also supports keyboard shortcuts . 3.1. Change the Current Path You can change to a different node path by using the cd command and providing the desired path. When the management CLI is first launched, it is at the root level ( / ). 3.2. Print the Current Path You can print the path of the current node by using the pwd command. When the management CLI is first launched, the path is at the root level ( / ). The above example changes the path using the cd command and then outputs the following to the console: 3.3. List Contents You can list the contents of a particular node path by using the ls command. If the path ends on a node name, that resource's attributes will be listed as well. The below example navigates the standard-sockets socket binding group and then lists its contents. The same result can be achieved from anywhere in the resource tree hierarchy by specifying the node path to the ls command. You can also use the --resolve-expressions parameter to resolve the expressions of the returned attributes to their value on the server. In this example, the port-offset attribute shows its resolved value ( 0 ) instead of the expression ( ${jboss.socket.binding.port-offset:0} ). 3.4. View Output When you run the management CLI in interactive mode and the operation results in multiple pages of output, the command processor pauses the screen at the end of the first page. This allows you to page through the output one line or page at a time. The occurrence of multiple pages of output is indicated by a line of text displaying --More( NNN %)-- at the end of the output. The following is an example of a management CLI command that provides more than one page of output. Navigating Output When you encounter the line of text indicating that there is more output, you can proceed using one of the following options. Press Enter or the down arrow to page through the output one line at a time. Press the Spacebar or PgDn to skip to the next page of output. Press PgUp to return to the previous page of output. Press Home to return to the beginning of the output. Press End to skip to the final line of the output. Type q to interrupt the command and exit. Note On Windows the PgUp , PgDn , Home , and End keys are available beginning with Windows Server 2016. There are no issues with other operating systems. Searching Output You can search for text within output. Use a forward slash ( / ) to initiate searching. Type the desired text and press Enter to search. Press n to go to the next match. Press N to go to the previous match. You can also use the up and down arrows to browse through the search history. 3.5. Use Keyboard Navigation Shortcuts When running the management CLI in interactive mode, you can use keyboard shortcuts to quickly edit a management CLI command. Note You can also use the Tab key to autocomplete a portion of a management CLI command or view the available options. The keyboard shortcuts you can use vary depending on which supported platform you are running: Red Hat Enterprise Linux Windows Server Solaris Table 3.1.
Red Hat Enterprise Linux Keyboard Navigation Shortcuts Navigation Keyboard Shortcut Left one word Alt+B or Ctrl+left arrow Right one word Alt+F or Ctrl+right arrow Beginning of the line Ctrl+A or Home End of the line Ctrl+E or End Left one character Ctrl+B or left arrow Right one character Ctrl+F or right arrow Table 3.2. Windows Server Keyboard Navigation Shortcuts Navigation Keyboard Shortcut Left one word Alt+B Right one word Alt+F Beginning of the line Ctrl+A or Home End of the line Ctrl+E or End Left one character Ctrl+B or left arrow Right one character Ctrl+F or right arrow Table 3.3. Solaris Keyboard Navigation Shortcuts Navigation Keyboard Shortcut Left one word Alt+B or Ctrl+left arrow Right one word Alt+F or Ctrl+right arrow Beginning of the line Ctrl+A or Home End of the line Ctrl+E or End Left one character Ctrl+B or left arrow Right one character Ctrl+F or right arrow
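The same navigation commands can also be scripted rather than typed interactively. The sketch below is an illustration, not part of the documented procedure; it assumes a standalone server and uses EAP_HOME as a placeholder for the installation directory.

# Run a short navigation session non-interactively and print the resolved attributes.
$EAP_HOME/bin/jboss-cli.sh --connect \
  --commands="cd /socket-binding-group=standard-sockets,pwd,ls -l --resolve-expressions"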
[ "cd /subsystem=datasources cd data-source=ExampleDS", "cd /subsystem=undertow cd server=default-server pwd", "/subsystem=undertow/server=default-server", "cd /socket-binding-group=standard-sockets ls -l", "ATTRIBUTE VALUE TYPE default-interface public STRING name standard-sockets STRING port-offset USD{jboss.socket.binding.port-offset:0} INT CHILD MIN-OCCURS MAX-OCCURS local-destination-outbound-socket-binding n/a n/a remote-destination-outbound-socket-binding n/a n/a socket-binding n/a n/a", "ls -l /socket-binding-group=standard-sockets", "ATTRIBUTE VALUE TYPE default-interface public STRING name standard-sockets STRING port-offset USD{jboss.socket.binding.port-offset:0} INT CHILD MIN-OCCURS MAX-OCCURS local-destination-outbound-socket-binding n/a n/a remote-destination-outbound-socket-binding n/a n/a socket-binding n/a n/a", "ls -l /socket-binding-group=standard-sockets --resolve-expressions", "ATTRIBUTE VALUE TYPE default-interface public STRING name standard-sockets STRING port-offset 0 INT CHILD MIN-OCCURS MAX-OCCURS local-destination-outbound-socket-binding n/a n/a remote-destination-outbound-socket-binding n/a n/a socket-binding n/a n/a", "/subsystem=undertow:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { \"default-security-domain\" => \"other\", \"default-server\" => \"default-server\", \"default-servlet-container\" => \"default\", \"default-virtual-host\" => \"default-host\", \"instance-id\" => expression \"USD{jboss.node.name}\", \"statistics-enabled\" => false, \"application-security-domain\" => {\"other\" => { \"enable-jacc\" => false, \"http-authentication-factory\" => \"application-http-authentication\", \"override-deployment-config\" => false, \"setting\" => undefined }}, \"buffer-cache\" => {\"default\" => { \"buffer-size\" => 1024, \"buffers-per-region\" => 1024, --More(7%)--" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/management_cli_guide/navigating_cli
Chapter 3. Red Hat OpenStack deployment best practices
Chapter 3. Red Hat OpenStack deployment best practices Review the following best practices when you plan and prepare to deploy OpenStack. You can apply one or more of these practices in your environment. 3.1. Red Hat OpenStack deployment preparation Before you deploy Red Hat OpenStack Platform (RHOSP), review the following list of deployment preparation tasks. You can apply one or more of the deployment preparation tasks in your environment: Set a subnet range for introspection to accommodate the maximum overcloud nodes for which you want to perform introspection at a time When you use director to deploy and configure RHOSP, use CIDR notations for the control plane network to accommodate all overcloud nodes that you add now or in the future. Enable Jumbo Frames for preferred networks When a high-use network uses jumbo frames or a higher MTU, the network can send larger datagrams or TCP payloads and reduce the CPU overhead for higher bandwidth. Enable jumbo frames only for networks that have network switch support for higher MTU. Standard networks that are known to give better performance with higher MTU are the Tenant network, Storage network and the Storage Management network. For more information, see Configuring jumbo frames in Installing and managing Red Hat OpenStack Platform with director . Set the World Wide Name (WWN) as the root disk hint for each node to prevent nodes from using the wrong disk during deployment and booting When nodes contain multiple disks, use the introspection data to set the WWN as the root disk hint for each node. This prevents the node from using the wrong disk during deployment and booting. For more information, see Defining the Root Disk for multi-disk Ceph clusters in the Installing and managing Red Hat OpenStack Platform with director guide. Enable the Bare Metal service (ironic) automated cleaning on nodes that have more than one disk Use the Bare Metal service automated cleaning to erase metadata on nodes that have more than one disk and are likely to have multiple boot loaders. Nodes might become inconsistent with the boot disk due to the presence of multiple bootloaders on disks, which leads to node deployment failure when you attempt to pull the metadata that uses the wrong URL. To enable the Bare Metal service automated cleaning, on the undercloud node, edit the undercloud.conf file and add the following line: Limit the number of nodes for Bare Metal (ironic) introspection If you perform introspection on all nodes at the same time, failures might occur due to network constraints. Perform introspection on up to 50 nodes at a time. Ensure that the dhcp_start and dhcp_end range in the undercloud.conf file is large enough for the number of nodes that you expect to have in the environment. If there are insufficient available IPs, do not issue more than the size of the range. This limits the number of simultaneous introspection operations. To allow the introspection DHCP leases to expire, do not issue more IP addresses for a few minutes after the introspection completes. 3.2. Red Hat OpenStack deployment configuration Review the following list of recommendations for your Red Hat OpenStack Platform (RHOSP) deployment configuration: Validate the heat templates with a small scale deployment Deploy a small environment that consists of at least three Controllers, one Compute node, and three Ceph Storage nodes. You can use this configuration to ensure that all of your heat templates are correct.
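To make the earlier root disk hint recommendation concrete, the following sketch shows one way to read the WWN from introspection data and pin it on a node. The node name and WWN value are placeholders, and jq is assumed to be available on the undercloud.

# Inspect the disks reported by introspection for a node (placeholder node name).
openstack baremetal introspection data save controller-0 | jq '.inventory.disks[] | {name, wwn, size}'
# Pin the root disk for that node to a specific WWN (placeholder value).
openstack baremetal node set controller-0 --property root_device='{"wwn": "0x4000cca77fc4dba1"}'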
Improve instance distribution across Compute During the creation of a large number of instances, the Compute scheduler does not know the resources of a Compute node until the resource allocation of instances is confirmed for the Compute node. To avoid the uneven spawning of Compute nodes, you can perform one of the following actions: Set the value of the NovaSchedulerShuffleBestSameWeighedHosts parameter to true : To ensure that a Compute node is not overloaded with instances, set max_instances_per_host to the maximum number of instances that any Compute node can spawn and ensure that the NumInstancesFilter parameter is enabled. When this instance count is reached by a Compute node, then the scheduler will no longer select it for further instance spawn scheduling. Note The NumInstancesFilter parameter is enabled by default. But if you modify the NovaSchedulerEnabledFilters parameter in the environment files, ensure that you enable the NumInstancesFilter parameter. Replace <maximum_number_of_instances> with the maximum number of instances that any Compute node can spawn. Scale configurations for the Networking service (neutron) The settings in Table 3.1. were tested and validated to improve performance and scale stability on a large-scale openstack environment. The server-side probe intervals control the timeout for probes sent by ovsdb-server to the clients: neutron , ovn-controller , and ovn-metadata-agent . If they do not get a reply from the client before the timeout elapses, they will disconnect from the client, forcing it to reconnect. The most likely scenario for a client to timeout is upon the initial connection to the ovsdb-server , when the client loads a copy of the database into memory. When the timeout is too low, the ovsdb-server disconnects the client while it is downloading the database, causing the client to reconnect and try again and this cycle repeats forever. Therefore, if the maximum timeout interval does not work then set the probe interval value to zero to disable the probe. If the client-side probe intervals are disabled, they use TCP keepalive messages to monitor their connections to the ovsdb-server . Note Always use tripleo heat template (THT) parameters, if available, to configure the required settings. Because manually configured settings will be overwritten by config download runs, when default values are defined in either THT or Puppet. Furthermore, you can only manually configure settings for existing environments, therefore the modified settings will not be applied to any new or replaced nodes. Table 3.1. Recommended scale configurations for the Networking service Setting Description Manual configuration THT parameter OVS server-side inactivity probe on Compute nodes Increase this probe interval from 5 seconds to 30 seconds. OVN Northbound server-side inactivity probe on Controller nodes Increase this probe interval to 180000 ms or set it to 0 to disable it. OVN Southbound server-side inactivity probe on Controller nodes Increase this probe interval to 180000 ms or set it to 0 to disable it. OVN controller remote probe interval on Compute nodes Increase this probe interval to 180000 ms or set it to 0 to disable it. OVNRemoteProbeInterval: 180000 Networking service client-side probe interval on Controller nodes Increase this probe interval to 180000 ms or set it to 0 to disable it. 
OVNOvsdbProbeInterval: 180000 Networking service api_workers on Controller nodes Increase the default number of separate API worker processes from 12 to 16 or more, based on the load on the neutron-server . NeutronWorkers: 16 Networking service agent_down_time on Controller nodes Set agent_down_time to the maximum permissible number for very large clusters. NeutronAgentDownTime: 2147483 OVN metadata report_agent on Compute nodes Disable the report_agent on large installations. OVN metadata_workers on Compute nodes Reduce the metadata_workers to the minimum on all Compute nodes to reduce the connections to the OVN Southbound database. NeutronMetadataWorkers: 1 OVN metadata rpc_workers on Compute nodes Reduce the rpc_workers to the minimum on all Compute nodes. NeutronRpcWorkers: 0 OVN metadata client-side probe interval on Compute nodes Increase this probe interval to 180000 ms or set it to 0 to disable it. OVNOvsdbProbeInterval: 180000 Limit the number of nodes that are provisioned at the same time Fifty is the typical amount of servers that can fit within an average enterprise-level rack unit, therefore, you can deploy an average of one rack of nodes at one time. To minimize the debugging necessary to diagnose issues with the deployment, deploy a maximum of 50 nodes at one time. If you want to deploy a higher number of nodes, Red Hat has successfully tested up to 100 nodes simultaneously. To scale Compute nodes in batches, use the openstack overcloud deploy command with the --limit option. This can result in saved time and lower resource consumption on the undercloud. Disable unused NICs If the overcloud has any unused NICs during the deployment, you must define the unused interfaces in the NIC configuration templates and set the interfaces to use_dhcp: false and defroute: false . If you do not define unused interfaces, there might be routing issues and IP allocation problems during introspection and scaling operations. By default, the NICs set BOOTPROTO=dhcp , which means the unused overcloud NICs consume IP addresses that are needed for the PXE provisioning. This can reduce the pool of available IP addresses for your nodes. Power off unused Bare Metal Provisioning (ironic) nodes Ensure that you power off any unused Bare Metal Provisioning (ironic) nodes in maintenance mode. Bare Metal Provisioning does not track the power state of nodes in maintenance mode and incorrectly reports the power state of nodes from deployments left in maintenance mode in a powered on state as off. This can cause problems with ongoing deployments if the unused node has an operating system with stale configurations, for example, IP addresses from overcloud networks. When you redeploy after a failed deployment, ensure that you power off all unused nodes. 3.3. Tuning the undercloud Review this section when you plan to scale your Red Hat OpenStack Platform (RHOSP) deployment to configure your default undercloud settings. Tune HA Services to support larger message size A large-scale deployment requires a larger message size than the default values configured for the mariadb and rabbitmq HA services. 
Increase these values by using a custom environment file as well as hieradata override file before deploying the undercloud: custom_env_files.yaml hieradata_override.yaml undercloud.conf Increase the open file limit to 4096 Ensure that you increase the open file limit of your undercloud to 4096, by editing the following parameters in the /etc/security/limits.conf file: Separate the provisioning and configuration processes To create only the stack and associated RHOSP resources, you can run the deployment command with the --stack-only option. Red Hat recommends separating the stack and config-download steps when deploying more than 100 nodes. Include any environment files that are required for your overcloud: After you have provisioned the stack, you can enable SSH access for the tripleo-admin user from the undercloud to the overcloud. The config-download process uses the tripleo-admin user to perform the Ansible based configuration: To disable the overcloud stack creation and to only apply the config-download workflow to the software configuration, you can run the deployment command with the --config-download-only option. Include any environment files that are required for your overcloud: To limit the config-download playbook execution to a specific node or set of nodes, you can use the --limit option. For scale-up operations, to only apply software configuration on the new nodes, you can use the --limit option with the --config-download-only option. If you use the --limit option always include <Controller> and <Undercloud> in the list. Tasks that use the external_deploy_steps interface, for example all Ceph configurations, are executed when <Undercloud> is included in the options list. All external_deploy_steps tasks run on the undercloud. For example, if you run a scale-up task to add a Compute node that requires a connection to Ceph and you do not include <Undercloud> in the list, then this task fails because the Ceph configuration and cephx key files are not provided. Do not use the --skip-tags external_deploy_steps option or the task fails. Note Instance migrations do not work between some computes after using --limit option for scale-up operations. This is because the original and newly added computes nodes have mutually exclusive information in their respective /etc/hosts and /etc/ssh/ssh_known_hosts files.
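As a small illustration of the earlier recommendation to power off unused Bare Metal Provisioning (ironic) nodes, the following sketch lists nodes left in maintenance mode and powers one off; the node name is a placeholder.

openstack baremetal node list --maintenance
openstack baremetal node power off spare-node-0
# Confirm the reported power state before starting the next deployment.
openstack baremetal node show spare-node-0 -f value -c power_state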
[ "clean_nodes = true", "parameter_defaults: NovaSchedulerShuffleBestSameWeighedHosts: `True`", "parameter_defaults: ControllerExtraConfig nova::scheduler::filter::max_instances_per_host: <maximum_number_of_instances> NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - NumInstancesFilter", "ovs-vsctl set Manager . inactivity_probe=30000", "exec -u root ovn_controller ovn-nbctl --no-leader-only set Connection . inactivity_probe=180000", "exec -u root ovn_controller ovn-sbctl --no-leader-only set Connection . inactivity_probe=180000", "exec -u root ovn_controller ovs-vsctl --no-leader-only set Open_vSwitch . external_ids:ovn-remote-probe-interval=180000", "crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2/ml2_conf.ini ovn ovsdb_probe_interval 180000", "crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf DEFAULT api_workers 16", "crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf DEFAULT agent_down_time 2147483", "crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron_ovn_metadata_agent.ini agent report_agent false", "crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron_ovn_metadata_agent.ini DEFAULT metadata_workers 1", "crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron_ovn_metadata_agent.ini DEFAULT rpc_workers 0", "crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron_ovn_metadata_agent.ini ovn ovsdb_probe_interval 180000", "parameter_defaults: max_message_size: 536870912 MySQLServerOptions: mysqld: max_allowed_packet: \"512M\"", "rabbitmq_config_variables: max_message_size: 536870912 cluster_partition_handling: 'ignore' queue_master_locator: '<<\"min-masters\">>'", "custom_env_files = /home/stack/custom-undercloud-params.yaml hieradata_override = /home/stack/hieradata.yaml", "* soft nofile 4096 * hard nofile 4096", "openstack overcloud deploy --templates -e <environment-file1.yaml> -e <environment-file2.yaml> --stack-only", "openstack overcloud admin authorize", "openstack overcloud deploy --templates -e <environment-file1.yaml> -e <environment-file2.yaml> --config-download-only", "openstack overcloud deploy --templates -e <environment-file1.yaml> -e <environment-file2.yaml> --config-download-only --config-download-timeout --limit <Undercloud>,<Controller>,<Compute-1>,<Compute-2>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_openstack_platform_at_scale/assembly-openstack-deployment-best-practices_recommendations-large-deployments
Chapter 7. Configuring the Guardrails Orchestrator service
Chapter 7. Configuring the Guardrails Orchestrator service Important The Guardrails Orchestrator service is currently available in Red Hat OpenShift AI 2.18 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The TrustyAI Guardrails Orchestrator service is a tool to invoke detections on text generation inputs and outputs, as well as standalone detections. It is underpinned by the open-source project FMS-Guardrails Orchestrator from IBM. You can deploy the Guardrails Orchestrator service through a Custom Resource Definition (CRD) that is managed by the TrustyAI Operator. The following sections describe how to do the following tasks: Set up the Guardrails Orchestrator service Create a custom resource (CR) Deploy a Guardrails Orchestrator instance Monitor user-inputs to your LLM using this service 7.1. Deploying the Guardrails Orchestrator service You can deploy a Guardrails Orchestrator instance in your namespace to monitor elements, such as user inputs to your Large Language Model (LLM). Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You are familiar with creating a config map for monitoring a user-defined workflow. You perform similar steps in this procedure. You have KServe set to RawDeployment . See Deploying models on single-node OpenShift using KServe Raw Deployment mode . You have the TrustyAI component in your OpenShift AI DataScienceCluster set to Managed . You have an LLM for chat generation deployed in your namespace. You have an LLM for text classification deployed in your namespace. Procedure Define a ConfigMap object in a YAML file to specify the chat_generation and detectors services. For example, create a file named orchestrator_cm.yaml with the following content: Example orchestrator_cm.yaml --- kind: ConfigMap apiVersion: v1 metadata: name: fms-orchestr8-config-nlp data: config.yaml: | chat_generation: 1 service: hostname: <CHAT_GENERATION_HOSTNAME> port: 8080 detectors: 2 <DETECTOR_NAME>: type: text_contents service: hostname: <DETECTOR_HOSTNAME> port: 8000 chunker_id: whole_doc_chunker default_threshold: 0.5 --- <1> A service for chat generation referring to a deployed LLM in your namespace where you are adding guardrails. <2> A list of services responsible for running detection of a certain class of content on text spans. Each of these services refer to a deployed LLM for text classification in your namespace. Deploy the orchestrator_cm.yaml config map: --- USD oc apply -f orchestrator_cm.yaml -n <TEST_NAMESPACE> --- Specify the previously created ConfigMap object created in the GuardrailsOrchestrator custom resource (CR). 
For example, create a file named orchestrator_cr.yaml with the following content: Example orchestrator_cr.yaml CR --- apiVersion: trustyai.opendatahub.io/v1alpha1 kind: GuardrailsOrchestrator metadata: name: gorch-sample spec: orchestratorConfig: "fms-orchestr8-config-nlp" replicas: 1 --- Deploy the orchestrator CR, which creates a service account, deployment, service, and route object in your namespace. --- oc apply -f orchestrator_cr.yaml -n <TEST_NAMESPACE> --- Verification Confirm that the orchestrator and LLM pods are running: --- USD oc get pods -n <TEST_NAMESPACE> --- Example response --- NAME READY STATUS RESTARTS AGE gorch-test-55bf5f84d9-dd4vm 3/3 Running 0 3h53m ibm-container-deployment-bd4d9d898-52r5j 1/1 Running 0 3h53m ibm-hap-predictor-5d54c877d5-rbdms 1/1 Running 0 3h53m llm-container-deployment-bd4d9d898-52r5j 1/1 Running 0 3h53m llm-predictor-5d54c877d5-rbdms 1/1 Running 0 57m --- Query the /health endpoint of the orchestrator route to check the current status of the detector and generator services. If a 200 OK response is returned, the services are functioning normally: --- USD GORCH_ROUTE_HEALTH=USD(oc get routes gorch-test-health -o jsonpath='{.spec.host}') --- --- USD curl -v https://USDGORCH_ROUTE_HEALTH/health --- Example response --- * Trying ::1:8034... * connect to ::1 port 8034 failed: Connection refused * Trying 127.0.0.1:8034... * Connected to localhost (127.0.0.1) port 8034 (#0) > GET /health HTTP/1.1 > Host: localhost:8034 > User-Agent: curl/7.76.1 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < content-type: application/json < content-length: 36 < date: Fri, 31 Jan 2025 14:04:25 GMT < * Connection #0 to host localhost left intact {"fms-guardrails-orchestr8":"0.1.0"} --- 7.2. Guardrails Orchestrator parameters A GuardrailsOrchestrator object represents an orchestration service that invokes detectors on text generation input and output and standalone detections. You can modify the following parameters for the GuardrailsOrchestrator object you created previously: Parameter Description replicas The number of orchestrator pods to activate orchestratorConfig The name of the ConfigMap object that contains generator, detector, and chunker arguments. otelExporter **(optional)** A list of paired name and value arguments for configuring OpenTelemetry traces or metrics, or both: protocol - Sets the protocol for all the OpenTelemetry protocol (OTLP) endpoints. Valid values are grpc or http tracesProtocol - Sets the protocol for traces. Acceptable values are grpc or http metricsProtocol - Sets the protocol for metrics. Acceptable values are grpc or http otlpEndpoint - Sets the OTLP endpoint. Default values are gRPC localhost:4317 and HTTP localhost:4318 metricsEndpoint - Sets the OTLP endpoint for metrics tracesEndpoint - Sets the OTLP endpoint for traces 7.3. Configuring the OpenTelemetry Exporter for metrics and tracing Enable traces and metrics that are provided for the observability of the GuardrailsOrchestrator service with the OpenTelemetry Operator. Prerequisites You have installed the Red Hat OpenShift AI distributed tracing platform from the OperatorHub and created a Jaeger instance using the default settings. You have installed the Red Hat build of OpenTelemetry from the OperatorHub and created an OpenTelemetry instance. 
Procedure Define a GuardrailsOrchestrator custom resource object to specify the otelExporter configurations in a YAML file named orchestrator_otel_cr.yaml : Example of an orchestrator_otel_cr.yaml object that has OpenTelemetry configured: --- apiVersion: trustyai.opendatahub.io/v1alpha1 kind: GuardrailsOrchestrator metadata: name: gorch-test spec: orchestratorConfig: "fms-orchestr8-config-nlp" 1 vllmGatewayConfig: "fms-orchestr8-config-gateway" 2 replicas: 1 otelExporter: protocol: "http" otlpEndpoint: "localhost:4318" otlpExport: "metrics" --- <1> These specifications are the same as Step 7 from "Configuring the regex detector and vLLM gateway". This example CR adds `otelExporter` configurations. Deploy the orchestrator custom resource. --- $ oc apply -f orchestrator_otel_cr.yaml ---
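After applying the CR, you can confirm that the TrustyAI Operator reconciled it and that the orchestrator picked up the OpenTelemetry settings. The following is a sketch only; the resource and deployment names follow the examples above, and the exact status and log contents may vary.

oc get guardrailsorchestrator gorch-test -n <TEST_NAMESPACE> -o yaml   # review spec and status
oc describe guardrailsorchestrator gorch-test -n <TEST_NAMESPACE>
oc logs deployment/gorch-test -n <TEST_NAMESPACE> | grep -i otel       # check that exporter settings were applied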
[ "--- kind: ConfigMap apiVersion: v1 metadata: name: fms-orchestr8-config-nlp data: config.yaml: | chat_generation: 1 service: hostname: <CHAT_GENERATION_HOSTNAME> port: 8080 detectors: 2 <DETECTOR_NAME>: type: text_contents service: hostname: <DETECTOR_HOSTNAME> port: 8000 chunker_id: whole_doc_chunker default_threshold: 0.5 --- <1> A service for chat generation referring to a deployed LLM in your namespace where you are adding guardrails. <2> A list of services responsible for running detection of a certain class of content on text spans. Each of these services refer to a deployed LLM for text classification in your namespace.", "--- oc apply -f orchestrator_cm.yaml -n <TEST_NAMESPACE> ---", "--- apiVersion: trustyai.opendatahub.io/v1alpha1 kind: GuardrailsOrchestrator metadata: name: gorch-sample spec: orchestratorConfig: \"fms-orchestr8-config-nlp\" replicas: 1 ---", "--- apply -f orchestrator_cr.yaml -n <TEST_NAMESPACE> ---", "--- oc get pods -n <TEST_NAMESPACE> ---", "--- NAME READY STATUS RESTARTS AGE gorch-test-55bf5f84d9-dd4vm 3/3 Running 0 3h53m ibm-container-deployment-bd4d9d898-52r5j 1/1 Running 0 3h53m ibm-hap-predictor-5d54c877d5-rbdms 1/1 Running 0 3h53m llm-container-deployment-bd4d9d898-52r5j 1/1 Running 0 3h53m llm-predictor-5d54c877d5-rbdms 1/1 Running 0 57m ---", "--- GORCH_ROUTE_HEALTH=USD(oc get routes gorch-test-health -o jsonpath='{.spec.host}') ---", "--- curl -v https://USDGORCH_ROUTE_HEALTH/health ---", "--- * Trying ::1:8034 * connect to ::1 port 8034 failed: Connection refused * Trying 127.0.0.1:8034 * Connected to localhost (127.0.0.1) port 8034 (#0) > GET /health HTTP/1.1 > Host: localhost:8034 > User-Agent: curl/7.76.1 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < content-type: application/json < content-length: 36 < date: Fri, 31 Jan 2025 14:04:25 GMT < * Connection #0 to host localhost left intact {\"fms-guardrails-orchestr8\":\"0.1.0\"} ---", "--- apiVersion: trustyai.opendatahub.io/v1alpha1 kind: GuardrailsOrchestrator metadata: name: gorch-test spec: orchestratorConfig: \"fms-orchestr8-config-nlp\" 1 vllmGatewayConfig: \"fms-orchestr8-config-gateway\" 2 replicas: 1 otelExporter: protocol: \"http\" otlpEndpoint: \"localhost:4318\" otlpExport: \"metrics\" --- <1> These speficications are the same as Step 7 from \"Configuring the regex detector and vLLM gateway\". This example CR adds `otelExporter` configurations.", "--- oc apply -f orchestrator_otel_cr.yaml ---" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/monitoring_data_science_models/configuring-the-guardrails-orchestrator-service_monitor
Chapter 7. Configuring the environment mode in KIE Server and Business Central
Chapter 7. Configuring the environment mode in KIE Server and Business Central You can set KIE Server to run in production mode or in development mode. Development mode provides a flexible deployment policy that enables you to update existing deployment units (KIE containers) while maintaining active process instances for small changes. It also enables you to reset the deployment unit state before updating active process instances for larger changes. Production mode is optimal for production environments, where each deployment creates a new deployment unit. In a development environment, you can click Deploy in Business Central to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The next time you deploy or redeploy the built KJAR, the deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option in Business Central is disabled and you can click only Deploy to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server. Procedure To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production . To configure the deployment behavior for a project in Business Central, go to project Settings General Settings Version and toggle the Development Mode option. Note By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode.
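As a sketch of how the org.kie.server.mode system property might be set when KIE Server runs on Red Hat JBoss EAP, assuming the server is started with standalone.sh and the standalone-full profile (EAP_HOME and the profile name are placeholders; adjust to your installation):

# Start KIE Server in production mode.
$EAP_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.kie.server.mode=production
# Or start it in development mode for iterative Deploy/Redeploy workflows.
$EAP_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.kie.server.mode=development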
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/configuring-environment-mode-proc_execution-server
Chapter 7. OLM 1.0 (Technology Preview)
Chapter 7. OLM 1.0 (Technology Preview) 7.1. About Operator Lifecycle Manager 1.0 (Technology Preview) Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.14 introduces components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as OLM 1.0 . This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities. Important OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . During this Technology Preview phase of OLM 1.0 in OpenShift Container Platform 4.14, administrators can explore the following features: Fully declarative model that supports GitOps workflows OLM 1.0 simplifies Operator management through two key APIs: A new Operator API, provided as operator.operators.operatorframework.io by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles. The Catalog API, provided by the new catalogd component, serves as the foundation for OLM 1.0, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges. For more information, see Operator Controller and Catalogd . Improved control over Operator updates With improved insight into catalog content, administrators can specify target versions for installation and updates. This grants administrators more control over the target version of Operator updates. For more information, see Updating an Operator . Flexible Operator packaging format Administrators can use file-based catalogs to install and manage the following types of content: OLM-based Operators, similar to the existing OLM experience Plain bundles , which are static collections of arbitrary Kubernetes manifests In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see Installing an Operator from a catalog and Managing plain bundles . 7.1.1. Purpose The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster and platform-as-a-service (PaaS) administrators throughout the lifecycle of the underlying cluster. The initial version of OLM, which launched with OpenShift Container Platform 4 and is included by default, focused on providing unique support for these specific needs for a particular type of cluster extension, known as Operators.
Operators are classified as one or more Kubernetes controllers, shipping with one or more API extensions, as CustomResourceDefinition (CRD) objects, to provide additional functionality to the cluster. After running in production clusters for many releases, the next generation of OLM aims to encompass lifecycles for cluster extensions that are not just Operators. 7.2. Components and architecture 7.2.1. OLM 1.0 components overview (Technology Preview) Important OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Operator Lifecycle Manager (OLM) 1.0 comprises the following component projects: Operator Controller Operator Controller is the central component of OLM 1.0 that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from each of the following components. RukPak RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy. RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions. Catalogd Catalogd is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the OLM 1.0 microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content. 7.2.2. Operator Controller (Technology Preview) Operator Controller is the central component of Operator Lifecycle Manager (OLM) 1.0 and consumes the other OLM 1.0 components, RukPak and catalogd. It extends Kubernetes with an API through which users can install Operators and extensions. Important OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.2.2.1. Operator API Operator Controller provides a new Operator API object, which is a single resource that represents an instance of an installed Operator. This operator.operators.operatorframework.io API streamlines management of installed Operators by consolidating user-facing APIs into a single object. Important In OLM 1.0, Operator objects are cluster-scoped.
This differs from earlier OLM versions where Operators could be either namespace-scoped or cluster-scoped, depending on the configuration of their related Subscription and OperatorGroup objects. For more information about the earlier behavior, see Multitenancy and Operator colocation . Example Operator object apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> spec: packageName: <package_name> channel: <channel_name> version: <version_number> Note When using the OpenShift CLI ( oc ), the Operator resource provided with OLM 1.0 during this Technology Preview phase requires specifying the full <resource>.<group> format: operator.operators.operatorframework.io . For example: $ oc get operator.operators.operatorframework.io If you specify only the Operator resource without the API group, the CLI returns results for an earlier API ( operator.operators.coreos.com ) that is unrelated to OLM 1.0. Additional resources Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation 7.2.2.1.1. Example custom resources (CRs) that specify a target version In Operator Lifecycle Manager (OLM) 1.0, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR). You can define a target version by specifying any of the following fields: Channel Version number Version range If you specify a channel in the CR, OLM 1.0 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release that can be resolved from the channel. Example CR with a specified channel apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest 1 1 Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. If you specify the Operator or extension's target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the CR, OLM 1.0 does not change the target version when updates are published to the catalog. If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator's CR. Specifying an Operator's target version pins the Operator's version to the specified release. Example CR with the target version specified apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.11.1 1 1 Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version. If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM 1.0 installs the latest version of an Operator or extension that can be resolved by the Operator Controller. Example CR with a version range specified apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: >1.11.1 1 1 Specifies that the desired version range is greater than version 1.11.1 . For more information, see "Support for version ranges".
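Once a CR like the ones above has been created and applied (the apply command follows next), you can inspect the Operator resource to see which version was resolved and installed. This is a sketch only; the pipelines-operator name follows the examples above, and the exact status fields reported during the Technology Preview may differ.

oc get operator.operators.operatorframework.io pipelines-operator -o yaml
# Or print just the reported conditions (field names may vary by release).
oc get operator.operators.operatorframework.io pipelines-operator -o jsonpath='{.status.conditions}'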
After you create or update a CR, apply the configuration file by running the following command: Command syntax USD oc apply -f <extension_name>.yaml 7.2.3. Rukpak (Technology Preview) Operator Lifecycle Manager (OLM) 1.0 uses the RukPak component and its resources to manage cloud-native content. Important OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.2.3.1. About RukPak RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy. RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions. At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs. Common terminology Bundle A collection of Kubernetes manifests that define content to be deployed to a cluster Bundle image A container image that contains a bundle within its filesystem Bundle Git repository A Git repository that contains a bundle within a directory Provisioner Controllers that install and manage content on a Kubernetes cluster Bundle deployment Generates deployed instances of a bundle 7.2.3.2. About provisioners RukPak consists of a series of controllers, known as provisioners , that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment . These components work together to bring content onto the cluster and install it, generating resources within the cluster. Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles. Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster. A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources. 7.2.3.3. Bundle A RukPak Bundle object represents content to make available to other consumers in the cluster. 
Much like the contents of a container image must be pulled and unpacked in order for pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content. Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type. Example Bundle object configured to work with the plain provisioner apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain Note Bundles are considered immutable after they are created. 7.2.3.3.1. Bundle immutability After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object. Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status , are updated during the bundle's lifecycle; it is only the spec field that is considered immutable. Applying a Bundle object and then attempting to update its spec should fail. For example, the following example creates a bundle: USD oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF Example output bundle.core.rukpak.io/combo-tag-ref created Then, patching the bundle to point to a newer tag returns an error: USD oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}' Example output Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating it in-place. Further immutability considerations While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario: A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object. 
The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit. A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod. If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content. This is similar to pod behavior, where one of the pod's container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it. To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle. 7.2.3.3.2. Plain bundle spec A plain bundle in RukPak is a collection of static, arbitrary, Kubernetes YAML manifests in a given directory. The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0 , combines the type of bundle ( plain ) with the current schema version ( v0 ). Note The plain+v0 bundle format is at schema version v0 , which means it is an experimental format that is subject to change. For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application. Example plain+v0 bundle file tree USD tree manifests manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories. Important Do not include any content in the manifests/ directory of a plain bundle that are not static manifests. Otherwise, a failure will occur when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command will result in an error. Multi-object YAML or JSON files are valid, as well. 7.2.3.3.3. Registry bundle spec A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format. Additional resources Legacy OLM bundle format 7.2.3.4. BundleDeployment Warning A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and limit access, by using RBAC, to the BundleDeployment API to only those who require those permissions. The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle. Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept. The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle is defined by the provisioner that is configured to watch that bundle deployment. 
Example BundleDeployment object configured to work with the plain provisioner apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain 7.2.4. Dependency resolution in OLM 1.0 (Technology Preview) Operator Lifecycle Manager (OLM) 1.0 uses a dependency manager for resolving constraints over catalogs of RukPak bundles. Important OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.2.4.1. Concepts Users expect that the package manager should never do any of the following: Install a package whose dependencies cannot be fulfilled or that conflict with the dependencies of another package Install a package whose constraints cannot be met by the current set of installable packages Update a package in a way that breaks another that depends on it 7.2.4.1.1. Example: Successful resolution A user wants to install packages A and B that have the following dependencies: Package A v0.1.0 depends on Package C v0.1.0 , and Package B latest depends on Package D latest . Additionally, the user wants to pin the version of A to v0.1.0 . Packages and constraints passed to OLM 1.0 Packages A B Constraints A v0.1.0 depends on C v0.1.0 A pinned to v0.1.0 B depends on D Output Resolution set: A v0.1.0 B latest C v0.1.0 D latest 7.2.4.1.2. Example: Unsuccessful resolution A user wants to install packages A and B that have the following dependencies: Package A v0.1.0 depends on Package C v0.1.0 , and Package B latest depends on Package C v0.2.0 . Additionally, the user wants to pin the version of A to v0.1.0 . Packages and constraints passed to OLM 1.0 Packages A B Constraints A v0.1.0 depends on C v0.1.0 A pinned to v0.1.0 B latest depends on C v0.2.0 Output Resolution set: Unable to resolve because A v0.1.0 requires C v0.1.0 , which conflicts with B latest requiring C v0.2.0 7.2.5. Catalogd (Technology Preview) Operator Lifecycle Manager (OLM) 1.0 uses the catalogd component and its resources to manage Operator and extension catalogs. Important OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.2.5.1. About catalogs in OLM 1.0 You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component.
Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images. Important If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons: If multiple catalogs are installed on a cluster, OLM 1.0 does not include a mechanism to specify a catalog when you install an Operator or extension. Dependency resolution in Operator Lifecycle Manager (OLM) 1.0 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages. Additional resources File-based catalogs 7.2.5.1.1. Red Hat-provided Operator catalogs in OLM 1.0 Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show how to create catalog resources for OLM 1.0. Important If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io , you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see "Creating a pull secret for catalogs hosted on a secure registry". Example Red Hat Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1 1 Specify the interval for polling the remote registry for newer image digests. The default value is 24h . Valid units include seconds ( s ), minutes ( m ), and hours ( h ). To disable polling, set a zero value, such as 0s . Example Certified Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h Example Community Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h The following command adds a catalog to your cluster: Command syntax USD oc apply -f <catalog_name>.yaml 1 1 Specifies the catalog CR, such as redhat-operators.yaml . Additional resources Adding a catalog to a cluster About Red Hat-provided Operator catalogs 7.3. Installing an Operator from a catalog in OLM 1.0 (Technology Preview) Cluster administrators can add catalogs , or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog. In the current Technology Preview release of Operator Lifecycle Manager (OLM) 1.0, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).
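For example, installing an Operator in this workflow comes down to applying an Operator custom resource similar to the following minimal sketch. The placeholder values shown here are illustrative only; the full procedure, including how to choose the package name, channel, and version, is described later in this section. Example Operator CR (sketch)
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: <operator_name>
spec:
  packageName: <package_name>
  channel: <channel_name>
  version: <version_number>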
Important OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.3.1. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions Note For OpenShift Container Platform 4.15, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components. The TechPreviewNoUpgrade feature set enabled on the cluster Warning Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. The OpenShift CLI ( oc ) installed on your workstation Additional resources Enabling features using feature gates 7.3.2. About catalogs in OLM 1.0 You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images. Important If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons: If multiple catalogs are installed on a cluster, OLM 1.0 does not include a mechanism to specify a catalog when you install an Operator or extension. Dependency resolution in Operator Lifecycle Manager (OLM) 1.0 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages. Additional resources File-based catalogs 7.3.3. Red Hat-provided Operator catalogs in OLM 1.0 Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show how to create catalog resources for OLM 1.0. Important If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io , you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see "Creating a pull secret for catalogs hosted on a secure registry". Example Red Hat Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1 1 Specify the interval for polling the remote registry for newer image digests. The default value is 24h .
Valid units include seconds ( s ), minutes ( m ), and hours ( h ). To disable polling, set a zero value, such as 0s . Example Certified Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h Example Community Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h The following command adds a catalog to your cluster: Command syntax USD oc apply -f <catalog_name>.yaml 1 1 Specifies the catalog CR, such as redhat-operators.yaml . Additional resources Creating a pull secret for catalogs hosted on a secure registry Adding a catalog to a cluster About Red Hat-provided Operator catalogs 7.3.4. Creating a pull secret for catalogs hosted on a secure registry If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io , you must have a pull secret scoped to the openshift-catalogd namespace. Note Currently, catalogd cannot read global pull secrets from OpenShift Container Platform clusters. Catalogd can read references to secrets only in the namespace where it is deployed. Prerequisites Login credentials for the secure registry Docker or Podman installed on your workstation Procedure If you already have a .dockercfg file with login credentials for the secure registry, create a pull secret by running the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockercfg=<file_path>/.dockercfg \ --type=kubernetes.io/dockercfg \ --namespace=openshift-catalogd Example 7.1. Example command USD oc create secret generic redhat-cred \ --from-file=.dockercfg=/home/<username>/.dockercfg \ --type=kubernetes.io/dockercfg \ --namespace=openshift-catalogd If you already have a USDHOME/.docker/config.json file with login credentials for the secured registry, create a pull secret by running the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<file_path>/.docker/config.json \ --type=kubernetes.io/dockerconfigjson \ --namespace=openshift-catalogd Example 7.2. Example command USD oc create secret generic redhat-cred \ --from-file=.dockerconfigjson=/home/<username>/.docker/config.json \ --type=kubernetes.io/dockerconfigjson \ --namespace=openshift-catalogd If you do not have a Docker configuration file with login credentials for the secure registry, create a pull secret by running the following command: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<username> \ --docker-password=<password> \ --docker-email=<email> \ --namespace=openshift-catalogd Example 7.3. Example command USD oc create secret docker-registry redhat-cred \ --docker-server=registry.redhat.io \ --docker-username=username \ --docker-password=password \ [email protected] \ --namespace=openshift-catalogd 7.3.5. Adding a catalog to a cluster To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster. 
Prerequisites If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io , you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see "Creating a pull secret for catalogs hosted on a secure registry". Procedure Create a catalog custom resource (CR), similar to the following example: Example redhat-operators.yaml apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 1 pullSecret: <pull_secret_name> 2 pollInterval: <poll_interval_duration> 3 1 Specify the catalog's image in the spec.source.image field. 2 If your catalog is hosted on a secure registry, such as registry.redhat.io , you must create a pull secret scoped to the openshift-catalogd namespace. 3 Specify the interval for polling the remote registry for newer image digests. The default value is 24h . Valid units include seconds ( s ), minutes ( m ), and hours ( h ). To disable polling, set a zero value, such as 0s . Add the catalog to your cluster by running the following command: USD oc apply -f redhat-operators.yaml Example output catalog.catalogd.operatorframework.io/redhat-operators created Verification Run the following commands to verify the status of your catalog: Check if your catalog is available by running the following command: USD oc get catalog Example output NAME AGE redhat-operators 20s Check the status of your catalog by running the following command: USD oc describe catalog Example output Name: redhat-operators Namespace: Labels: <none> Annotations: <none> API Version: catalogd.operatorframework.io/v1alpha1 Kind: Catalog Metadata: Creation Timestamp: 2024-01-10T16:18:38Z Finalizers: catalogd.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 57057 UID: 128db204-49b3-45ee-bfea-a2e6fc8e34ea Spec: Source: Image: Pull Secret: redhat-cred Ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 Type: image Status: 1 Conditions: Last Transition Time: 2024-01-10T16:18:55Z Message: Reason: UnpackSuccessful 2 Status: True Type: Unpacked Content URL: http://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json Observed Generation: 1 Phase: Unpacked 3 Resolved Source: Image: Last Poll Attempt: 2024-01-10T16:18:51Z Ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 Resolved Ref: registry.redhat.io/redhat/redhat-operator-index@sha256:7b536ae19b8e9f74bb521c4a61e5818e036ac1865a932f2157c6c9a766b2eea5 4 Type: image Events: <none> 1 Describes the status of the catalog. 2 Displays the reason the catalog is in the current state. 3 Displays the phase of the installation process. 4 Displays the image reference of the catalog. Additional resources Creating a pull secret for catalogs hosted on a secure registry 7.3.6. Finding Operators to install from a catalog After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install. Before you can query catalogs, you must port forward the catalog server service. Prerequisite You have added a catalog to your cluster. You have installed the jq CLI tool.
Procedure Port forward the catalog server service in the openshift-catalogd namespace by running the following command: USD oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:80 Download the catalog's JSON file locally by running the following command: USD curl -L http://localhost:8080/catalogs/<catalog_name>/all.json \ -C - -o /<path>/<catalog_name>.json Example 7.4. Example command USD curl -L http://localhost:8080/catalogs/redhat-operators/all.json \ -C - -o /home/username/catalogs/rhoc.json Run one of the following commands to return a list of Operators and extensions in a catalog. Important Currently, Operator Lifecycle Manager (OLM) 1.0 supports extensions that do not use webhooks and are configured to use the AllNamespaces install mode. Extensions that use webhooks or that target a single or specified set of namespaces cannot be installed. Get a list of all the Operators and extensions from the local catalog file by running the following command: USD jq -s '.[] | select(.schema == "olm.package") | .name' \ /<path>/<filename>.json Example 7.5. Example command USD jq -s '.[] | select(.schema == "olm.package") | .name' \ /home/username/catalogs/rhoc.json Example 7.6. Example output NAME AGE "3scale-operator" "advanced-cluster-management" "amq-broker-rhel8" "amq-online" "amq-streams" "amq7-interconnect-operator" "ansible-automation-platform-operator" "ansible-cloud-addons-operator" "apicast-operator" "aws-efs-csi-driver-operator" "aws-load-balancer-operator" "bamoe-businessautomation-operator" "bamoe-kogito-operator" "bare-metal-event-relay" "businessautomation-operator" ... Get list of packages that support AllNamespaces install mode and do not use webhooks from the local catalog file by running the following command: USD jq -c 'select(.schema == "olm.bundle") | \ {"package":.package, "version":.properties[] | \ select(.type == "olm.bundle.object").value.data | @base64d | fromjson | \ select(.kind == "ClusterServiceVersion" and (.spec.installModes[] | \ select(.type == "AllNamespaces" and .supported == true) != null) \ and .spec.webhookdefinitions == null).spec.version}' \ /<path>/<catalog_name>.json Example 7.7. Example output {"package":"3scale-operator","version":"0.10.0-mas"} {"package":"3scale-operator","version":"0.10.5"} {"package":"3scale-operator","version":"0.11.0-mas"} {"package":"3scale-operator","version":"0.11.1-mas"} {"package":"3scale-operator","version":"0.11.2-mas"} {"package":"3scale-operator","version":"0.11.3-mas"} {"package":"3scale-operator","version":"0.11.5-mas"} {"package":"3scale-operator","version":"0.11.6-mas"} {"package":"3scale-operator","version":"0.11.7-mas"} {"package":"3scale-operator","version":"0.11.8-mas"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-2"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-3"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-4"} {"package":"amq-broker-rhel8","version":"7.10.1-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.1-opr-2"} {"package":"amq-broker-rhel8","version":"7.10.2-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.2-opr-2"} ... Inspect the contents of an Operator or extension's metadata by running the following command: USD jq -s '.[] | select( .schema == "olm.package") | \ select( .name == "<package_name>")' /<path>/<catalog_name>.json Example 7.8. 
Example command USD jq -s '.[] | select( .schema == "olm.package") | \ select( .name == "openshift-pipelines-operator-rh")' \ /home/username/rhoc.json Example 7.9. Example output { "defaultChannel": "stable", "icon": { "base64data": "PHN2ZyB4bWxu..." "mediatype": "image/png" }, "name": "openshift-pipelines-operator-rh", "schema": "olm.package" } 7.3.6.1. Common catalog queries You can query catalogs by using the jq CLI tool. Table 7.1. Common package queries Query Request Available packages in a catalog USD jq -s '.[] | select( .schema == "olm.package") | \ .name' <catalog_name>.json Packages that support AllNamespaces install mode and do not use webhooks USD jq -c 'select(.schema == "olm.bundle") | \ {"package":.package, "version":.properties[] | \ select(.type == "olm.bundle.object").value.data | \ @base64d | fromjson | \ select(.kind == "ClusterServiceVersion" and (.spec.installModes[] | \ select(.type == "AllNamespaces" and .supported == true) != null) \ and .spec.webhookdefinitions == null).spec.version}' \ <catalog_name>.json Package metadata USD jq -s '.[] | select( .schema == "olm.package") | \ select( .name == "<package_name>")' <catalog_name>.json Catalog blobs in a package USD jq -s '.[] | select( .package == "<package_name>")' \ <catalog_name>.json Table 7.2. Common channel queries Query Request Channels in a package USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "<package_name>") | .name' \ <catalog_name>.json Versions in a channel USD jq -s '.[] | select( .package == "<package_name>" ) | \ select( .schema == "olm.channel" ) | \ select( .name == "<channel_name>" ) | \ .entries | .[] | .name' <catalog_name>.json Latest version in a channel Upgrade path USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select ( .name == "<channel>") | \ select( .package == "<package_name>")' \ <catalog_name>.json Table 7.3. Common bundle queries Query Request Bundles in a package USD jq -s '.[] | select( .schema == "olm.bundle" ) | \ select( .package == "<package_name>") | .name' \ <catalog_name>.json Bundle dependencies Available APIs USD jq -s '.[] | select( .schema == "olm.bundle" ) | \ select ( .name == "<bundle_name>") | \ select( .package == "<package_name>")' \ <catalog_name>.json 7.3.7. Installing an Operator from a catalog Operator Lifecycle Manager (OLM) 1.0 supports installing Operators and extensions scoped to the cluster. You can install an Operator or extension from a catalog by creating a custom resource (CR) and applying it to the cluster. Important Currently, OLM 1.0 supports the installation of Operators and extensions that meet the following criteria: The Operator or extension must use the AllNamespaces install mode. The Operator or extension must not use webhooks. Operators or extensions that use webhooks or that target a single or specified set of namespaces cannot be installed. Prerequisite You have added a catalog to your cluster. You have downloaded a local copy of the catalog file. You have installed the jq CLI tool. Procedure Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps: Get a list of channels from a selected package by running the following command: USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "<package_name>") | \ .name' /<path>/<catalog_name>.json Example 7.10. 
Example command USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "openshift-pipelines-operator-rh") | \ .name' /home/username/rhoc.json Example 7.11. Example output "latest" "pipelines-1.11" "pipelines-1.12" "pipelines-1.13" Get a list of the versions published in a channel by running the following command: USD jq -s '.[] | select( .package == "<package_name>" ) | \ select( .schema == "olm.channel" ) | \ select( .name == "<channel_name>" ) | .entries | \ .[] | .name' /<path>/<catalog_name>.json Example 7.12. Example command USD jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) | \ select( .schema == "olm.channel" ) | select( .name == "latest" ) | \ .entries | .[] | .name' /home/username/rhoc.json Example 7.13. Example output "openshift-pipelines-operator-rh.v1.11.1" "openshift-pipelines-operator-rh.v1.12.0" "openshift-pipelines-operator-rh.v1.12.1" "openshift-pipelines-operator-rh.v1.12.2" "openshift-pipelines-operator-rh.v1.13.0" "openshift-pipelines-operator-rh.v1.13.1" Create a CR, similar to the following example: Example pipelines-operator.yaml CR apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: <channel> version: "<version>" where: <channel> Optional: Specifies the channel, such as pipelines-1.11 or latest , for the package you want to install or update. <version> Optional: Specifies the version or version range, such as 1.11.1 , 1.12.x , or >=1.12.1 , of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges". Important If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons: If multiple catalogs are installed on a cluster, OLM 1.0 does not include a mechanism to specify a catalog when you install an Operator or extension. Dependency resolution in Operator Lifecycle Manager (OLM) 1.0 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages. Apply the CR to the cluster by running the following command: USD oc apply -f pipelines-operator.yaml Example output operator.operators.operatorframework.io/pipelines-operator created Verification View the Operator or extension's CR in the YAML format by running the following command: USD oc get operator.operators.operatorframework.io pipelines-operator -o yaml Note If you specify a channel or define a version range in your Operator or extension's CR, OLM 1.0 does not display the resolved version installed on the cluster. Only the version and channel information specified in the CR are displayed. If you want to find the specific version that is installed, you must compare the SHA of the image of the spec.source.image.ref field to the image reference in the catalog. Example 7.14.
Example output apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","packageName":"openshift-pipelines-operator-rh","version":"1.11.x"}} creationTimestamp: "2024-01-30T20:06:09Z" generation: 1 name: pipelines-operator resourceVersion: "44362" uid: 4272d228-22e1-419e-b9a7-986f982ee588 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.11.x status: conditions: - lastTransitionTime: "2024-01-30T20:06:15Z" message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280" observedGeneration: 1 reason: Success status: "True" type: Resolved - lastTransitionTime: "2024-01-30T20:06:31Z" message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280" observedGeneration: 1 reason: Success status: "True" type: Installed installedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280 resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280 Get information about your bundle deployment by running the following command: USD oc get bundleDeployment pipelines-operator -o yaml Example 7.15. Example output apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: creationTimestamp: "2024-01-30T20:06:15Z" generation: 2 name: pipelines-operator ownerReferences: - apiVersion: operators.operatorframework.io/v1alpha1 blockOwnerDeletion: true controller: true kind: Operator name: pipelines-operator uid: 4272d228-22e1-419e-b9a7-986f982ee588 resourceVersion: "44464" uid: 0a0c3525-27e2-4c93-bf57-55920a7707c0 spec: provisionerClassName: core-rukpak-io-plain template: metadata: {} spec: provisionerClassName: core-rukpak-io-registry source: image: ref: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280 type: image status: activeBundle: pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw conditions: - lastTransitionTime: "2024-01-30T20:06:15Z" message: Successfully unpacked the pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw Bundle reason: UnpackSuccessful status: "True" type: HasValidBundle - lastTransitionTime: "2024-01-30T20:06:28Z" message: Instantiated bundle pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw successfully reason: InstallationSucceeded status: "True" type: Installed - lastTransitionTime: "2024-01-30T20:06:40Z" message: BundleDeployment is healthy reason: Healthy status: "True" type: Healthy observedGeneration: 2 Additional resources Example custom resources (CRs) that specify a target version Support for version ranges 7.3.8. Updating an Operator You can update your Operator or extension by manually editing the custom resource (CR) and applying the changes. Prerequisites You have a catalog installed. You have downloaded a local copy of the catalog file. You have an Operator or extension installed. You have installed the jq CLI tool. 
Procedure Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps: Get a list of channels from a selected package by running the following command: USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "<package_name>") | \ .name' /<path>/<catalog_name>.json Example 7.16. Example command USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "openshift-pipelines-operator-rh") | \ .name' /home/username/rhoc.json Example 7.17. Example output "latest" "pipelines-1.11" "pipelines-1.12" "pipelines-1.13" Get a list of the versions published in a channel by running the following command: USD jq -s '.[] | select( .package == "<package_name>" ) | \ select( .schema == "olm.channel" ) | \ select( .name == "<channel_name>" ) | .entries | \ .[] | .name' /<path>/<catalog_name>.json Example 7.18. Example command USD jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) | \ select( .schema == "olm.channel" ) | select( .name == "latest" ) | \ .entries | .[] | .name' /home/username/rhoc.json Example 7.19. Example output "openshift-pipelines-operator-rh.v1.11.1" "openshift-pipelines-operator-rh.v1.12.0" "openshift-pipelines-operator-rh.v1.12.1" "openshift-pipelines-operator-rh.v1.12.2" "openshift-pipelines-operator-rh.v1.13.0" "openshift-pipelines-operator-rh.v1.13.1" Find out what version or channel is specified in your Operator or extension's CR by running the following command: USD oc get operator.operators.operatorframework.io <operator_name> -o yaml Example command USD oc get operator.operators.operatorframework.io pipelines-operator -o yaml Example 7.20. Example output apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","packageName":"openshift-pipelines-operator-rh","version":"1.11.1"}} creationTimestamp: "2024-02-06T17:47:15Z" generation: 2 name: pipelines-operator resourceVersion: "84528" uid: dffe2c89-b9c4-427e-b694-ada0b37fc0a9 spec: channel: latest 1 packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.11.1 2 status: conditions: - lastTransitionTime: "2024-02-06T17:47:21Z" message: bundledeployment status is unknown observedGeneration: 2 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: "2024-02-06T17:50:58Z" message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280" observedGeneration: 2 reason: Success status: "True" type: Resolved resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280 1 Specifies the channel for your Operator or extension. 2 Specifies the version or version range for your Operator or extension. Note If you specify a channel or define a version range in your Operator or extension's CR, OLM 1.0 does not display the resolved version installed on the cluster. Only the version and channel information specified in the CR are displayed. If you want to find the specific version that is installed, you must compare the SHA of the image of the spec.source.image.ref field to the image reference in the catalog. 
Edit your CR by using one of the following methods: If you want to pin your Operator or extension to specific version, such as 1.12.1 , edit your CR similar to the following example: Example pipelines-operator.yaml CR apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.12.1 1 1 Update the version from 1.11.1 to 1.12.1 If you want to define a range of acceptable update versions, edit your CR similar to the following example: Example CR with a version range specified apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: ">1.11.1, <1.13" 1 1 Specifies that the desired version range is greater than version 1.11.1 and less than 1.13 . For more information, see "Support for version ranges" and "Version comparison strings". If you want to update to the latest version that can be resolved from a channel, edit your CR similar to the following example: Example CR with a specified channel apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: pipelines-1.13 1 1 Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. If you want to specify a channel and version or version range, edit your CR similar to the following example: Example CR with a specified channel and version range apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest version: "<1.13" For more information, see "Example custom resources (CRs) that specify a target version". Apply the update to the cluster by running the following command: USD oc apply -f pipelines-operator.yaml Example output operator.operators.operatorframework.io/pipelines-operator configured Tip You can patch and apply the changes to your CR from the CLI by running the following command: USD oc patch operator.operators.operatorframework.io/pipelines-operator -p \ '{"spec":{"version":"1.12.1"}}' \ --type=merge Example output operator.operators.operatorframework.io/pipelines-operator patched Verification Verify that the channel and version updates have been applied by running the following command: USD oc get operator.operators.operatorframework.io pipelines-operator -o yaml Example 7.21. 
Example output apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","packageName":"openshift-pipelines-operator-rh","version":"1.12.1"}} creationTimestamp: "2024-02-06T19:16:12Z" generation: 4 name: pipelines-operator resourceVersion: "58122" uid: 886bbf73-604f-4484-9f87-af6ce0f86914 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.12.1 1 status: conditions: - lastTransitionTime: "2024-02-06T19:30:57Z" message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a" observedGeneration: 3 reason: Success status: "True" type: Installed - lastTransitionTime: "2024-02-06T19:30:57Z" message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a" observedGeneration: 3 reason: Success status: "True" type: Resolved installedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a 1 Verify that the version is updated to 1.12.1 . Troubleshooting If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator or extension: USD oc get operator.operators.operatorframework.io <operator_name> -o yaml Example 7.22. Example output oc get operator.operators.operatorframework.io pipelines-operator -o yaml apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","packageName":"openshift-pipelines-operator-rh","version":"2.0.0"}} creationTimestamp: "2024-02-06T17:47:15Z" generation: 1 name: pipelines-operator resourceVersion: "82667" uid: dffe2c89-b9c4-427e-b694-ada0b37fc0a9 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 2.0.0 status: conditions: - lastTransitionTime: "2024-02-06T17:47:21Z" message: installation has not been attempted due to failure to gather data for resolution observedGeneration: 1 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: "2024-02-06T17:47:21Z" message: no package "openshift-pipelines-operator-rh" matching version "2.0.0" found in channel "latest" observedGeneration: 1 reason: ResolutionFailed status: "False" type: Resolved Additional resources Example custom resources (CRs) that specify a target version Version comparison strings 7.3.8.1. Support for semantic versioning Support for semantic versioning (semver) is enabled in OLM 1.0 by default. Operator and extension authors can use the semver standard to define compatible updates. Operator Lifecycle Manager (OLM) 1.0 can use an Operator or extension's version number to determine if an update can be resolved successfully. 
Cluster administrators can define a range of acceptable versions to install and automatically update. For Operators and extensions that follow the semver standard, you can use comparison strings to specify a desired version range. Note OLM 1.0 does not support automatic updates to the major version. If you want to perform a major version update, you must verify and apply the update manually. For more information, see "Forcing an update or rollback". 7.3.8.1.1. Major version zero releases The semver standard specifies that major version zero releases ( 0.y.z ) are reserved for initial development. During the initial development stage, the API is not stable and breaking changes might be introduced in any published version. As a result, major version zero releases apply a special set of update conditions. Update conditions for major version zero releases You cannot apply automatic updates when the major and minor versions are both zero, such as 0.0.* . For example, automatic updates with the version range of >=0.0.1 <0.1.0 are not allowed. You cannot apply automatic updates from one minor version to another within a major version zero release. For example, OLM 1.0 does not automatically apply an update from 0.1.0 to 0.2.0 . You can apply automatic updates from patch versions, such as >=0.1.0 <0.2.0 or >=0.2.0 <0.3.0 . When an automatic update is blocked by OLM 1.0, you must manually verify and force the update by editing the Operator or extension's custom resource (CR). Additional resources Forcing an update or rollback 7.3.8.2. Support for version ranges In Operator Lifecycle Manager (OLM) 1.0, you can specify a version range by using a comparison string in an Operator or extension's custom resource (CR). If you specify a version range in the CR, OLM 1.0 installs or updates to the latest version of the Operator that can be resolved within the version range. Resolved version workflow The resolved version is the latest version of the Operator that satisfies the dependencies and constraints of the Operator and the environment. An Operator update within the specified range is automatically installed if it is resolved successfully. An update is not installed if it is outside of the specified range or if it cannot be resolved successfully. For more information about dependency and constraint resolution in OLM 1.0, see "Dependency resolution in OLM 1.0". Additional resources Dependency resolution in OLM 1.0 7.3.8.3. Version comparison strings You can define a version range by adding a comparison string to the spec.version field in an Operator or extension's custom resource (CR). A comparison string is a list of space- or comma-separated values and one or more comparison operators enclosed in double quotation marks ( " ). You can add another comparison string by including an OR , or double vertical bar ( || ), comparison operator between the strings. Table 7.4. Basic comparisons Comparison operator Definition = Equal to != Not equal to > Greater than < Less than >= Greater than or equal to <= Less than or equal to You can specify a version range in an Operator or extension's CR by using a range comparison similar to the following example: Example version range comparison apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: ">=1.11, <1.13" You can use wildcard characters in all types of comparison strings. OLM 1.0 accepts x , X , and asterisks ( * ) as wildcard characters.
When you use a wildcard character with the equal sign ( = ) comparison operator, you define a comparison at the patch or minor version level. Table 7.5. Example wildcard characters in comparison strings Wildcard comparison Matching string 1.11.x >=1.11.0, <1.12.0 >=1.12.X >=1.12.0 <=2.x <3 * >=0.0.0 You can make patch release comparisons by using the tilde ( ~ ) comparison operator. Patch release comparisons specify a minor version up to the major version. Table 7.6. Example patch release comparisons Patch release comparison Matching string ~1.11.0 >=1.11.0, <1.12.0 ~1 >=1, <2 ~1.12 >=1.12, <1.13 ~1.12.x >=1.12.0, <1.13.0 ~1.x >=1, <2 You can use the caret ( ^ ) comparison operator to make a comparison for a major release. If you use a major release comparison before the first stable release is published, the minor versions define the API's level of stability. In the semantic versioning (SemVer) specification, the first stable release is published as the 1.0.0 version. Table 7.7. Example major release comparisons Major release comparison Matching string ^0 >=0.0.0, <1.0.0 ^0.0 >=0.0.0, <0.1.0 ^0.0.3 >=0.0.3, <0.0.4 ^0.2 >=0.2.0, <0.3.0 ^0.2.3 >=0.2.3, <0.3.0 ^1.2.x >= 1.2.0, < 2.0.0 ^1.2.3 >= 1.2.3, < 2.0.0 ^2.x >= 2.0.0, < 3 ^2.3 >= 2.3, < 3 7.3.8.4. Example custom resources (CRs) that specify a target version In Operator Lifecycle Manager (OLM) 1.0, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR). You can define a target version by specifying any of the following fields: Channel Version number Version range If you specify a channel in the CR, OLM 1.0 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release that can be resolved from the channel. Example CR with a specified channel apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest 1 1 Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. If you specify the Operator or extension's target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the CR, OLM 1.0 does not change the target version when updates are published to the catalog. If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator's CR. Specifying an Operator's target version pins the Operator's version to the specified release. Example CR with the target version specified apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.11.1 1 1 Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field the CR to the desired target version. If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM 1.0 installs the latest version of an Operator or extension that can be resolved by the Operator Controller. 
Example CR with a version range specified apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: >1.11.1 1 1 Specifies that the desired version range is greater than version 1.11.1 . For more information, see "Support for version ranges". After you create or update a CR, apply the configuration file by running the following command: Command syntax USD oc apply -f <extension_name>.yaml 7.3.8.5. Forcing an update or rollback OLM 1.0 does not support automatic updates to the major version or rollbacks to an earlier version. If you want to perform a major version update or rollback, you must verify and force the update manually. Warning You must verify the consequences of forcing a manual update or rollback. Failure to verify a forced update or rollback might have catastrophic consequences such as data loss. Prerequisites You have a catalog installed. You have an Operator or extension installed. Procedure Edit the custom resource (CR) of your Operator or extension as shown in the following example: Example CR apiVersion: olm.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> 1 spec: packageName: <package_name> 2 version: <version> 3 upgradeConstraintPolicy: Ignore 4 1 Specifies the name of the Operator or extension, such as pipelines-operator 2 Specifies the package name, such as openshift-pipelines-operator-rh . 3 Specifies the blocked update or rollback version. 4 Optional: Specifies the upgrade constraint policy. To force an update or rollback, set the field to Ignore . If unspecified, the default setting is Enforce . Apply the changes to your Operator or extensions CR by running the following command: USD oc apply -f <extension_name>.yaml Additional resources Support for version ranges 7.3.9. Deleting an Operator You can delete an Operator and its custom resource definitions (CRDs) by deleting the Operator's custom resource (CR). Prerequisites You have a catalog installed. You have an Operator installed. Procedure Delete an Operator and its CRDs by running the following command: USD oc delete operator.operators.operatorframework.io <operator_name> Example output operator.operators.operatorframework.io "<operator_name>" deleted Verification Run the following commands to verify that your Operator and its resources were deleted: Verify the Operator is deleted by running the following command: USD oc get operator.operators.operatorframework.io Example output No resources found Verify that the Operator's system namespace is deleted by running the following command: USD oc get ns <operator_name>-system Example output Error from server (NotFound): namespaces "<operator_name>-system" not found 7.3.10. Deleting a catalog You can delete a catalog by deleting its custom resource (CR). Prerequisites You have a catalog installed. Procedure Delete a catalog by running the following command: USD oc delete catalog <catalog_name> Example output catalog.catalogd.operatorframework.io "my-catalog" deleted Verification Verify the catalog is deleted by running the following command: USD oc get catalog 7.4. Managing plain bundles in OLM 1.0 (Technology Preview) In Operator Lifecycle Manager (OLM) 1.0, a plain bundle is a static collection of arbitrary Kubernetes manifests in YAML format. The experimental olm.bundle.mediatype property of the olm.bundle schema object differentiates a plain bundle ( plain+v0 ) from a regular ( registry+v1 ) bundle. 
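For reference, this media type property appears inside a catalog's olm.bundle blob as shown in the following excerpt. This is only a fragment of the full blob, which is constructed in "Adding a plain bundle to a file-based catalog" later in this section; the surrounding bundle fields are omitted here.
{
  "type": "olm.bundle.mediatype",
  "value": "plain+v0"
}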
Important OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . As a cluster administrator, you can build and publish a file-based catalog that includes a plain bundle image by completing the following procedures: Build a plain bundle image. Create a file-based catalog. Add the plain bundle image to your file-based catalog. Build your catalog as an image. Publish your catalog image. Additional resources RukPak component and packaging format 7.4.1. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions Note For OpenShift Container Platform 4.15, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components. The TechPreviewNoUpgrade feature set enabled on the cluster Warning Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. The OpenShift CLI ( oc ) installed on your workstation The opm CLI installed on your workstation Docker or Podman installed on your workstation Push access to a container registry, such as Quay Kubernetes manifests for your bundle in a flat directory at the root of your project similar to the following structure: Example directory structure manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml Additional resources Enabling features using feature gates 7.4.2. Building a plain bundle image from an image source The Operator Controller currently supports installing plain bundles created only from a plain bundle image . Procedure At the root of your project, create a Dockerfile that can build a bundle image: Example plainbundle.Dockerfile FROM scratch 1 ADD manifests /manifests 1 Use the FROM scratch directive to make the size of the image smaller. No other files or directories are required in the bundle image. Build an Open Container Initiative (OCI)-compliant image by using your preferred build tool, similar to the following example: USD podman build -f plainbundle.Dockerfile -t \ quay.io/<organization_name>/<repository_name>:<image_tag> . 1 1 Use an image tag that references a repository where you have push access privileges. Push the image to your remote registry by running the following command: USD podman push quay.io/<organization_name>/<repository_name>:<image_tag> 7.4.3. Creating a file-based catalog If you do not have a file-based catalog, you must perform the following steps to initialize the catalog. 
Procedure Create a directory for the catalog by running the following command: USD mkdir <catalog_dir> Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command in the same directory level as the step: USD opm generate dockerfile <catalog_dir> \ -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.15 1 1 Specify the official Red Hat base image by using the -i flag, otherwise the Dockerfile uses the default upstream image. Note The generated Dockerfile must be in the same parent directory as the catalog directory that you created in the step: Example directory structure . ├── <catalog_dir> └── <catalog_dir>.Dockerfile Populate the catalog with the package definition for your extension by running the opm init command: USD opm init <extension_name> \ --output json \ > <catalog_dir>/index.json This command generates an olm.package declarative config blob in the specified catalog configuration file. 7.4.4. Adding a plain bundle to a file-based catalog The opm render command does not support adding plain bundles to catalogs. You must manually add plain bundles to your file-based catalog, as shown in the following procedure. Procedure Verify that the index.json or index.yaml file for your catalog is similar to the following example: Example <catalog_dir>/index.json file { { "schema": "olm.package", "name": "<extension_name>", "defaultChannel": "" } } To create an olm.bundle blob, edit your index.json or index.yaml file, similar to the following example: Example <catalog_dir>/index.json file with olm.bundle blob { "schema": "olm.bundle", "name": "<extension_name>.v<version>", "package": "<extension_name>", "image": "quay.io/<organization_name>/<repository_name>:<image_tag>", "properties": [ { "type": "olm.package", "value": { "packageName": "<extension_name>", "version": "<bundle_version>" } }, { "type": "olm.bundle.mediatype", "value": "plain+v0" } ] } To create an olm.channel blob, edit your index.json or index.yaml file, similar to the following example: Example <catalog_dir>/index.json file with olm.channel blob { "schema": "olm.channel", "name": "<desired_channel_name>", "package": "<extension_name>", "entries": [ { "name": "<extension_name>.v<version>" } ] } Verification Open your index.json or index.yaml file and ensure it is similar to the following example: Example <catalog_dir>/index.json file { "schema": "olm.package", "name": "example-extension", "defaultChannel": "preview" } { "schema": "olm.bundle", "name": "example-extension.v0.0.1", "package": "example-extension", "image": "quay.io/example-org/example-extension-bundle:v0.0.1", "properties": [ { "type": "olm.package", "value": { "packageName": "example-extension", "version": "0.0.1" } }, { "type": "olm.bundle.mediatype", "value": "plain+v0" } ] } { "schema": "olm.channel", "name": "preview", "package": "example-extension", "entries": [ { "name": "example-extension.v0.0.1" } ] } Validate your catalog by running the following command: USD opm validate <catalog_dir> 7.4.5. Building and publishing a file-based catalog Procedure Build your file-based catalog as an image by running the following command: USD podman build -f <catalog_dir>.Dockerfile -t \ quay.io/<organization_name>/<repository_name>:<image_tag> . Push your catalog image by running the following command: USD podman push quay.io/<organization_name>/<repository_name>:<image_tag>
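After the image is pushed, you can make the catalog available on a cluster by creating a Catalog resource that references it, in the same way as the Catalog objects shown earlier for the Red Hat-provided catalogs. The following is a minimal sketch: the metadata.name value is a hypothetical placeholder, and a spec.source.image.pullSecret entry may also be required if the repository is private.
Example Catalog object for the published catalog image
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: my-plain-catalog 1
spec:
  source:
    type: image
    image:
      ref: quay.io/<organization_name>/<repository_name>:<image_tag> 2
1 Specifies a hypothetical catalog name.
2 Specifies the catalog image reference pushed in the previous step.
Apply the object by running the following command:
USD oc apply -f my-plain-catalog.yaml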
[ "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> spec: packageName: <package_name> channel: <channel_name> version: <version_number>", "oc get operator.operators.operatorframework.io", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest 1", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.11.1 1", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: >1.11.1 1", "oc apply -f <extension_name>.yaml", "apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain", "oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF", "bundle.core.rukpak.io/combo-tag-ref created", "oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'", "Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable", "tree manifests manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml", "apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h", "oc apply -f <catalog_name>.yaml 1", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h", "oc apply -f 
<catalog_name>.yaml 1", "oc create secret generic <pull_secret_name> --from-file=.dockercfg=<file_path>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd", "oc create secret generic redhat-cred --from-file=.dockercfg=/home/<username>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<file_path>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd", "oc create secret generic redhat-cred --from-file=.dockerconfigjson=/home/<username>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<username> --docker-password=<password> --docker-email=<email> --namespace=openshift-catalogd", "oc create secret docker-registry redhat-cred --docker-server=registry.redhat.io --docker-username=username --docker-password=password [email protected] --namespace=openshift-catalogd", "apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 1 pullSecret: <pull_secret_name> 2 pollInterval: <poll_interval_duration> 3", "oc apply -f redhat-operators.yaml", "catalog.catalogd.operatorframework.io/redhat-operators created", "oc get catalog", "NAME AGE redhat-operators 20s", "oc describe catalog", "Name: redhat-operators Namespace: Labels: <none> Annotations: <none> API Version: catalogd.operatorframework.io/v1alpha1 Kind: Catalog Metadata: Creation Timestamp: 2024-01-10T16:18:38Z Finalizers: catalogd.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 57057 UID: 128db204-49b3-45ee-bfea-a2e6fc8e34ea Spec: Source: Image: Pull Secret: redhat-cred Ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 Type: image Status: 1 Conditions: Last Transition Time: 2024-01-10T16:18:55Z Message: Reason: UnpackSuccessful 2 Status: True Type: Unpacked Content URL: http://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json Observed Generation: 1 Phase: Unpacked 3 Resolved Source: Image: Last Poll Attempt: 2024-01-10T16:18:51Z Ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 Resolved Ref: registry.redhat.io/redhat/redhat-operator-index@sha256:7b536ae19b8e9f74bb521c4a61e5818e036ac1865a932f2157c6c9a766b2eea5 4 Type: image Events: <none>", "oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:80", "curl -L http://localhost:8080/catalogs/<catalog_name>/all.json -C - -o /<path>/<catalog_name>.json", "curl -L http://localhost:8080/catalogs/redhat-operators/all.json -C - -o /home/username/catalogs/rhoc.json", "jq -s '.[] | select(.schema == \"olm.package\") | .name' /<path>/<filename>.json", "jq -s '.[] | select(.schema == \"olm.package\") | .name' /home/username/catalogs/rhoc.json", "NAME AGE \"3scale-operator\" \"advanced-cluster-management\" \"amq-broker-rhel8\" \"amq-online\" \"amq-streams\" \"amq7-interconnect-operator\" \"ansible-automation-platform-operator\" \"ansible-cloud-addons-operator\" \"apicast-operator\" \"aws-efs-csi-driver-operator\" \"aws-load-balancer-operator\" \"bamoe-businessautomation-operator\" \"bamoe-kogito-operator\" \"bare-metal-event-relay\" \"businessautomation-operator\"", "jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == 
\"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' /<path>/<catalog_name>.json", "{\"package\":\"3scale-operator\",\"version\":\"0.10.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.10.5\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.1-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.2-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.3-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.5-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.6-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.7-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.8-mas\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-3\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-4\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-2\"}", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"openshift-pipelines-operator-rh\")' /home/username/rhoc.json", "{ \"defaultChannel\": \"stable\", \"icon\": { \"base64data\": \"PHN2ZyB4bWxu...\" \"mediatype\": \"image/png\" }, \"name\": \"openshift-pipelines-operator-rh\", \"schema\": \"olm.package\" }", "jq -s '.[] | select( .schema == \"olm.package\") | .name' <catalog_name>.json", "jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .package == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select ( .name == \"<channel>\") | select( .package == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.bundle\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.bundle\" ) | select ( .name == \"<bundle_name>\") | select( .package == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json", "\"latest\" \"pipelines-1.11\" 
\"pipelines-1.12\" \"pipelines-1.13\"", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json", "\"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.13.1\"", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: <channel> version: \"<version>\"", "oc apply -f pipeline-operator.yaml", "operator.operators.operatorframework.io/pipelines-operator created", "oc get operator.operators.operatorframework.io pipelines-operator -o yaml", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"packageName\":\"openshift-pipelines-operator-rh\",\"version\":\"1.11.x\"}} creationTimestamp: \"2024-01-30T20:06:09Z\" generation: 1 name: pipelines-operator resourceVersion: \"44362\" uid: 4272d228-22e1-419e-b9a7-986f982ee588 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.11.x status: conditions: - lastTransitionTime: \"2024-01-30T20:06:15Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-01-30T20:06:31Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Installed installedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280 resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280", "oc get bundleDeployment pipelines-operator -o yaml", "apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: creationTimestamp: \"2024-01-30T20:06:15Z\" generation: 2 name: pipelines-operator ownerReferences: - apiVersion: operators.operatorframework.io/v1alpha1 blockOwnerDeletion: true controller: true kind: Operator name: pipelines-operator uid: 4272d228-22e1-419e-b9a7-986f982ee588 resourceVersion: \"44464\" uid: 0a0c3525-27e2-4c93-bf57-55920a7707c0 spec: provisionerClassName: core-rukpak-io-plain template: metadata: {} spec: provisionerClassName: core-rukpak-io-registry source: image: ref: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280 type: image status: activeBundle: pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw conditions: - lastTransitionTime: \"2024-01-30T20:06:15Z\" 
message: Successfully unpacked the pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw Bundle reason: UnpackSuccessful status: \"True\" type: HasValidBundle - lastTransitionTime: \"2024-01-30T20:06:28Z\" message: Instantiated bundle pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw successfully reason: InstallationSucceeded status: \"True\" type: Installed - lastTransitionTime: \"2024-01-30T20:06:40Z\" message: BundleDeployment is healthy reason: Healthy status: \"True\" type: Healthy observedGeneration: 2", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json", "\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\"", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json", "\"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.13.1\"", "oc get operator.operators.operatorframework.io <operator_name> -o yaml", "oc get operator.operators.operatorframework.io pipelines-operator -o yaml", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"packageName\":\"openshift-pipelines-operator-rh\",\"version\":\"1.11.1\"}} creationTimestamp: \"2024-02-06T17:47:15Z\" generation: 2 name: pipelines-operator resourceVersion: \"84528\" uid: dffe2c89-b9c4-427e-b694-ada0b37fc0a9 spec: channel: latest 1 packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.11.1 2 status: conditions: - lastTransitionTime: \"2024-02-06T17:47:21Z\" message: bundledeployment status is unknown observedGeneration: 2 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: \"2024-02-06T17:50:58Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 2 reason: Success status: \"True\" type: Resolved resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.12.1 1", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: \">1.11.1, <1.13\" 1", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: pipelines-1.13 1", "apiVersion: 
operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest version: \"<1.13\"", "oc apply -f pipelines-operator.yaml", "operator.operators.operatorframework.io/pipelines-operator configured", "oc patch operator.operators.operatorframework.io/pipelines-operator -p '{\"spec\":{\"version\":\"1.12.1\"}}' --type=merge", "operator.operators.operatorframework.io/pipelines-operator patched", "oc get operator.operators.operatorframework.io pipelines-operator -o yaml", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"packageName\":\"openshift-pipelines-operator-rh\",\"version\":\"1.12.1\"}} creationTimestamp: \"2024-02-06T19:16:12Z\" generation: 4 name: pipelines-operator resourceVersion: \"58122\" uid: 886bbf73-604f-4484-9f87-af6ce0f86914 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.12.1 1 status: conditions: - lastTransitionTime: \"2024-02-06T19:30:57Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a\" observedGeneration: 3 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-02-06T19:30:57Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a\" observedGeneration: 3 reason: Success status: \"True\" type: Resolved installedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a", "oc get operator.operators.operatorframework.io <operator_name> -o yaml", "get operator.operators.operatorframework.io pipelines-operator -o yaml apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"packageName\":\"openshift-pipelines-operator-rh\",\"version\":\"2.0.0\"}} creationTimestamp: \"2024-02-06T17:47:15Z\" generation: 1 name: pipelines-operator resourceVersion: \"82667\" uid: dffe2c89-b9c4-427e-b694-ada0b37fc0a9 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 2.0.0 status: conditions: - lastTransitionTime: \"2024-02-06T17:47:21Z\" message: installation has not been attempted due to failure to gather data for resolution observedGeneration: 1 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: \"2024-02-06T17:47:21Z\" message: no package \"openshift-pipelines-operator-rh\" matching version \"2.0.0\" found in channel \"latest\" observedGeneration: 1 reason: ResolutionFailed status: \"False\" type: Resolved", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: 
packageName: openshift-pipelines-operator-rh version: \">=1.11, <1.13\"", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest 1", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.11.1 1", "apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: >1.11.1 1", "oc apply -f <extension_name>.yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> 1 spec: packageName: <package_name> 2 version: <version> 3 upgradeConstraintPolicy: Ignore 4", "oc apply -f <extension_name>.yaml", "oc delete operator.operators.operatorframework.io <operator_name>", "operator.operators.operatorframework.io \"<operator_name>\" deleted", "oc get operator.operators.operatorframework.io", "No resources found", "oc get ns <operator_name>-system", "Error from server (NotFound): namespaces \"<operator_name>-system\" not found", "oc delete catalog <catalog_name>", "catalog.catalogd.operatorframework.io \"my-catalog\" deleted", "oc get catalog", "manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml", "FROM scratch 1 ADD manifests /manifests", "podman build -f plainbundle.Dockerfile -t quay.io/<organization_name>/<repository_name>:<image_tag> . 1", "podman push quay.io/<organization_name>/<repository_name>:<image_tag>", "mkdir <catalog_dir>", "opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.15 1", ". ├── <catalog_dir> └── <catalog_dir>.Dockerfile", "opm init <extension_name> --output json > <catalog_dir>/index.json", "{ { \"schema\": \"olm.package\", \"name\": \"<extension_name>\", \"defaultChannel\": \"\" } }", "{ \"schema\": \"olm.bundle\", \"name\": \"<extension_name>.v<version>\", \"package\": \"<extension_name>\", \"image\": \"quay.io/<organization_name>/<repository_name>:<image_tag>\", \"properties\": [ { \"type\": \"olm.package\", \"value\": { \"packageName\": \"<extension_name>\", \"version\": \"<bundle_version>\" } }, { \"type\": \"olm.bundle.mediatype\", \"value\": \"plain+v0\" } ] }", "{ \"schema\": \"olm.channel\", \"name\": \"<desired_channel_name>\", \"package\": \"<extension_name>\", \"entries\": [ { \"name\": \"<extension_name>.v<version>\" } ] }", "{ \"schema\": \"olm.package\", \"name\": \"example-extension\", \"defaultChannel\": \"preview\" } { \"schema\": \"olm.bundle\", \"name\": \"example-extension.v0.0.1\", \"package\": \"example-extension\", \"image\": \"quay.io/example-org/example-extension-bundle:v0.0.1\", \"properties\": [ { \"type\": \"olm.package\", \"value\": { \"packageName\": \"example-extension\", \"version\": \"0.0.1\" } }, { \"type\": \"olm.bundle.mediatype\", \"value\": \"plain+v0\" } ] } { \"schema\": \"olm.channel\", \"name\": \"preview\", \"package\": \"example-extension\", \"entries\": [ { \"name\": \"example-extension.v0.0.1\" } ] }", "opm validate <catalog_dir>", "podman build -f <catalog_dir>.Dockerfile -t quay.io/<organization_name>/<repository_name>:<image_tag> .", "podman push quay.io/<organization_name>/<repository_name>:<image_tag>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operators/olm-1-0-technology-preview
Chapter 17. PodNetworkConnectivityCheck [controlplane.operator.openshift.io/v1alpha1]
Chapter 17. PodNetworkConnectivityCheck [controlplane.operator.openshift.io/v1alpha1] Description PodNetworkConnectivityCheck Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec defines the source and target of the connectivity check status object Status contains the observed status of the connectivity check 17.1.1. .spec Description Spec defines the source and target of the connectivity check Type object Required sourcePod targetEndpoint Property Type Description sourcePod string SourcePod names the pod from which the condition will be checked targetEndpoint string EndpointAddress to check. A TCP address of the form host:port. Note that if host is a DNS name, then the check would fail if the DNS name cannot be resolved. Specify an IP address for host to bypass DNS name lookup. tlsClientCert object TLSClientCert, if specified, references a kubernetes.io/tls type secret with 'tls.crt' and 'tls.key' entries containing an optional TLS client certificate and key to be used when checking endpoints that require a client certificate in order to gracefully preform the scan without causing excessive logging in the endpoint process. The secret must exist in the same namespace as this resource. 17.1.2. .spec.tlsClientCert Description TLSClientCert, if specified, references a kubernetes.io/tls type secret with 'tls.crt' and 'tls.key' entries containing an optional TLS client certificate and key to be used when checking endpoints that require a client certificate in order to gracefully preform the scan without causing excessive logging in the endpoint process. The secret must exist in the same namespace as this resource. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 17.1.3. .status Description Status contains the observed status of the connectivity check Type object Property Type Description conditions array Conditions summarize the status of the check conditions[] object PodNetworkConnectivityCheckCondition represents the overall status of the pod network connectivity. failures array Failures contains logs of unsuccessful check actions failures[] object LogEntry records events outages array Outages contains logs of time periods of outages outages[] object OutageEntry records time period of an outage successes array Successes contains logs successful check actions successes[] object LogEntry records events 17.1.4. .status.conditions Description Conditions summarize the status of the check Type array 17.1.5. 
.status.conditions[] Description PodNetworkConnectivityCheckCondition represents the overall status of the pod network connectivity. Type object Required status type Property Type Description lastTransitionTime `` Last time the condition transitioned from one status to another. message string Message indicating details about last transition in a human readable format. reason string Reason for the condition's last status transition in a machine readable format. status string Status of the condition type string Type of the condition 17.1.6. .status.failures Description Failures contains logs of unsuccessful check actions Type array 17.1.7. .status.failures[] Description LogEntry records events Type object Required success Property Type Description latency `` Latency records how long the action mentioned in the entry took. message string Message explaining status in a human readable format. reason string Reason for status in a machine readable format. success boolean Success indicates if the log entry indicates a success or failure. time `` Start time of check action. 17.1.8. .status.outages Description Outages contains logs of time periods of outages Type array 17.1.9. .status.outages[] Description OutageEntry records time period of an outage Type object Property Type Description end `` End of outage detected endLogs array EndLogs contains log entries related to the end of this outage. Should contain the success entry that resolved the outage and possibly a few of the failure log entries that preceded it. endLogs[] object LogEntry records events message string Message summarizes outage details in a human readable format. start `` Start of outage detected startLogs array StartLogs contains log entries related to the start of this outage. Should contain the original failure, any entries where the failure mode changed. startLogs[] object LogEntry records events 17.1.10. .status.outages[].endLogs Description EndLogs contains log entries related to the end of this outage. Should contain the success entry that resolved the outage and possibly a few of the failure log entries that preceded it. Type array 17.1.11. .status.outages[].endLogs[] Description LogEntry records events Type object Required success Property Type Description latency `` Latency records how long the action mentioned in the entry took. message string Message explaining status in a human readable format. reason string Reason for status in a machine readable format. success boolean Success indicates if the log entry indicates a success or failure. time `` Start time of check action. 17.1.12. .status.outages[].startLogs Description StartLogs contains log entries related to the start of this outage. Should contain the original failure, any entries where the failure mode changed. Type array 17.1.13. .status.outages[].startLogs[] Description LogEntry records events Type object Required success Property Type Description latency `` Latency records how long the action mentioned in the entry took. message string Message explaining status in a human readable format. reason string Reason for status in a machine readable format. success boolean Success indicates if the log entry indicates a success or failure. time `` Start time of check action. 17.1.14. .status.successes Description Successes contains logs successful check actions Type array 17.1.15. .status.successes[] Description LogEntry records events Type object Required success Property Type Description latency `` Latency records how long the action mentioned in the entry took. 
message string Message explaining status in a human readable format. reason string Reason for status in a machine readable format. success boolean Success indicates if the log entry indicates a success or failure. time `` Start time of check action. 17.2. API endpoints The following API endpoints are available: /apis/controlplane.operator.openshift.io/v1alpha1/podnetworkconnectivitychecks GET : list objects of kind PodNetworkConnectivityCheck /apis/controlplane.operator.openshift.io/v1alpha1/namespaces/{namespace}/podnetworkconnectivitychecks DELETE : delete collection of PodNetworkConnectivityCheck GET : list objects of kind PodNetworkConnectivityCheck POST : create a PodNetworkConnectivityCheck /apis/controlplane.operator.openshift.io/v1alpha1/namespaces/{namespace}/podnetworkconnectivitychecks/{name} DELETE : delete a PodNetworkConnectivityCheck GET : read the specified PodNetworkConnectivityCheck PATCH : partially update the specified PodNetworkConnectivityCheck PUT : replace the specified PodNetworkConnectivityCheck /apis/controlplane.operator.openshift.io/v1alpha1/namespaces/{namespace}/podnetworkconnectivitychecks/{name}/status GET : read status of the specified PodNetworkConnectivityCheck PATCH : partially update status of the specified PodNetworkConnectivityCheck PUT : replace status of the specified PodNetworkConnectivityCheck 17.2.1. /apis/controlplane.operator.openshift.io/v1alpha1/podnetworkconnectivitychecks HTTP method GET Description list objects of kind PodNetworkConnectivityCheck Table 17.1. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheckList schema 401 - Unauthorized Empty 17.2.2. /apis/controlplane.operator.openshift.io/v1alpha1/namespaces/{namespace}/podnetworkconnectivitychecks HTTP method DELETE Description delete collection of PodNetworkConnectivityCheck Table 17.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PodNetworkConnectivityCheck Table 17.3. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheckList schema 401 - Unauthorized Empty HTTP method POST Description create a PodNetworkConnectivityCheck Table 17.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.5. 
Body parameters Parameter Type Description body PodNetworkConnectivityCheck schema Table 17.6. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheck schema 201 - Created PodNetworkConnectivityCheck schema 202 - Accepted PodNetworkConnectivityCheck schema 401 - Unauthorized Empty 17.2.3. /apis/controlplane.operator.openshift.io/v1alpha1/namespaces/{namespace}/podnetworkconnectivitychecks/{name} Table 17.7. Global path parameters Parameter Type Description name string name of the PodNetworkConnectivityCheck HTTP method DELETE Description delete a PodNetworkConnectivityCheck Table 17.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 17.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodNetworkConnectivityCheck Table 17.10. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheck schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodNetworkConnectivityCheck Table 17.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.12. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheck schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodNetworkConnectivityCheck Table 17.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.14. Body parameters Parameter Type Description body PodNetworkConnectivityCheck schema Table 17.15. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheck schema 201 - Created PodNetworkConnectivityCheck schema 401 - Unauthorized Empty 17.2.4. /apis/controlplane.operator.openshift.io/v1alpha1/namespaces/{namespace}/podnetworkconnectivitychecks/{name}/status Table 17.16. Global path parameters Parameter Type Description name string name of the PodNetworkConnectivityCheck HTTP method GET Description read status of the specified PodNetworkConnectivityCheck Table 17.17. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheck schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PodNetworkConnectivityCheck Table 17.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.19. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheck schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PodNetworkConnectivityCheck Table 17.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.21. Body parameters Parameter Type Description body PodNetworkConnectivityCheck schema Table 17.22. HTTP responses HTTP code Reponse body 200 - OK PodNetworkConnectivityCheck schema 201 - Created PodNetworkConnectivityCheck schema 401 - Unauthorized Empty
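For reference, the following is a minimal sketch of a PodNetworkConnectivityCheck object assembled from the spec fields described above. All metadata values, the pod name, and the endpoint address are hypothetical placeholders, and tlsClientCert is optional; it is included only to illustrate the secret reference.
Example PodNetworkConnectivityCheck object (illustrative values)
apiVersion: controlplane.operator.openshift.io/v1alpha1
kind: PodNetworkConnectivityCheck
metadata:
  name: example-check
  namespace: example-namespace
spec:
  sourcePod: example-source-pod 1
  targetEndpoint: 10.0.0.10:6443 2
  tlsClientCert:
    name: example-tls-client-secret 3
1 Names the pod from which the condition is checked.
2 A TCP address of the form host:port. Specifying an IP address for the host bypasses DNS name lookup.
3 The metadata.name of a kubernetes.io/tls type secret in the same namespace as this resource.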
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/podnetworkconnectivitycheck-controlplane-operator-openshift-io-v1alpha1
Authentication and authorization
Authentication and authorization OpenShift Container Platform 4.14 Configuring user authentication and access controls for users and services Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/index
Chapter 36. Type Converters
Chapter 36. Type Converters Abstract Apache Camel has a built-in type conversion mechanism, which is used to convert message bodies and message headers to different types. This chapter explains how to extend the type conversion mechanism by adding your own custom converter methods. 36.1. Type Converter Architecture Overview This section describes the overall architecture of the type converter mechanism, which you must understand, if you want to write custom type converters. If you only need to use the built-in type converters, see Chapter 34, Understanding Message Formats . Type converter interface Example 36.1, "TypeConverter Interface" shows the definition of the org.apache.camel.TypeConverter interface, which all type converters must implement. Example 36.1. TypeConverter Interface Controller type converter The Apache Camel type converter mechanism follows a controller/worker pattern. There are many worker type converters, which are each capable of performing a limited number of type conversions, and a single controller type converter, which aggregates the type conversions performed by the workers. The controller type converter acts as a front-end for the worker type converters. When you request the controller to perform a type conversion, it selects the appropriate worker and delegates the conversion task to that worker. For users of the type conversion mechanism, the controller type converter is the most important because it provides the entry point for accessing the conversion mechanism. During start up, Apache Camel automatically associates a controller type converter instance with the CamelContext object. To obtain a reference to the controller type converter, you call the CamelContext.getTypeConverter() method. For example, if you have an exchange object, exchange , you can obtain a reference to the controller type converter as shown in Example 36.2, "Getting a Controller Type Converter" . Example 36.2. Getting a Controller Type Converter Type converter loader The controller type converter uses a type converter loader to populate the registry of worker type converters. A type converter loader is any class that implements the TypeConverterLoader interface. Apache Camel currently uses only one kind of type converter loader - the annotation type converter loader (of AnnotationTypeConverterLoader type). Type conversion process Figure 36.1, "Type Conversion Process" gives an overview of the type conversion process, showing the steps involved in converting a given data value, value , to a specified type, toType . Figure 36.1. Type Conversion Process The type conversion mechanism proceeds as follows: The CamelContext object holds a reference to the controller TypeConverter instance. The first step in the conversion process is to retrieve the controller type converter by calling CamelContext.getTypeConverter() . Type conversion is initiated by calling the convertTo() method on the controller type converter. This method instructs the type converter to convert the data object, value , from its original type to the type specified by the toType argument. Because the controller type converter is a front end for many different worker type converters, it looks up the appropriate worker type converter by checking a registry of type mappings The registry of type converters is keyed by a type mapping pair ( toType , fromType ) . If a suitable type converter is found in the registry, the controller type converter calls the worker's convertTo() method and returns the result. 
If a suitable type converter cannot be found in the registry, the controller type converter loads a new type converter, using the type converter loader. The type converter loader searches the available JAR libraries on the classpath to find a suitable type converter. Currently, the loader strategy that is used is implemented by the annotation type converter loader, which attempts to load a class annotated by the org.apache.camel.Converter annotation. See the section called "Create a TypeConverter file" . If the type converter loader is successful, a new worker type converter is loaded and entered into the type converter registry. This type converter is then used to convert the value argument to the toType type. If the data is successfully converted, the converted data value is returned. If the conversion does not succeed, null is returned. 36.2. Handling Duplicate Type Converters You can configure what must happen if a duplicate type converter is added. In the TypeConverterRegistry (See Section 36.3, "Implementing Type Converter Using Annotations" ) you can set the action to Override , Ignore or Fail using the following code: Override in this code can be replaced by Ignore or Fail , depending on your requirements. TypeConverterExists Class The TypeConverterExists class consists of the following commands: 36.3. Implementing Type Converter Using Annotations Overview The type conversion mechanism can easily be customized by adding a new worker type converter. This section describes how to implement a worker type converter and how to integrate it with Apache Camel, so that it is automatically loaded by the annotation type converter loader. How to implement a type converter To implement a custom type converter, perform the following steps: the section called "Implement an annotated converter class" . the section called "Create a TypeConverter file" . the section called "Package the type converter" . Implement an annotated converter class You can implement a custom type converter class using the @Converter annotation. You must annotate the class itself and each of the static methods intended to perform type conversion. Each converter method takes an argument that defines the from type, optionally takes a second Exchange argument, and has a non-void return value that defines the to type. The type converter loader uses Java reflection to find the annotated methods and integrate them into the type converter mechanism. Example 36.3, "Example of an Annotated Converter Class" shows an example of an annotated converter class that defines a converter method for converting from java.io.File to java.io.InputStream and another converter method (with an Exchange argument) for converting from byte[] to String . Example 36.3. Example of an Annotated Converter Class The toInputStream() method is responsible for performing the conversion from the File type to the InputStream type and the toString() method is responsible for performing the conversion from the byte[] type to the String type. Note The method name is unimportant, and can be anything you choose. What is important are the argument type, the return type, and the presence of the @Converter annotation. Create a TypeConverter file To enable the discovery mechanism (which is implemented by the annotation type converter loader ) for your custom converter, create a TypeConverter file at the following location: The TypeConverter file must contain a comma-separated list of Fully Qualified Names (FQN) of type converter classes. 
For example, if you want the type converter loader to search the YourPackageName . YourClassName package for annotated converter classes, the TypeConverter file would have the following contents: An alternative method of enabling the discovery mechanism is to add just package names to the TypeConverter file. For example, the TypeConverter file would have the following contents: This would cause the package scanner to scan through the packages for the @Converter tag. Using the FQN method is faster and is the preferred method. Package the type converter The type converter is packaged as a JAR file containing the compiled classes of your custom type converters and the META-INF directory. Put this JAR file on your classpath to make it available to your Apache Camel application. Fallback converter method In addition to defining regular converter methods using the @Converter annotation, you can optionally define a fallback converter method using the @FallbackConverter annotation. The fallback converter method will only be tried, if the controller type converter fails to find a regular converter method in the type registry. The essential difference between a regular converter method and a fallback converter method is that whereas a regular converter is defined to perform conversion between a specific pair of types (for example, from byte[] to String ), a fallback converter can potentially perform conversion between any pair of types. It is up to the code in the body of the fallback converter method to figure out which conversions it is able to perform. At run time, if a conversion cannot be performed by a regular converter, the controller type converter iterates through every available fallback converter until it finds one that can perform the conversion. The method signature of a fallback converter can have either of the following forms: Where MethodName is an arbitrary method name for the fallback converter. For example, the following code extract (taken from the implementation of the File component) shows a fallback converter that can convert the body of a GenericFile object, exploiting the type converters already available in the type converter registry: 36.4. Implementing a Type Converter Directly Overview Generally, the recommended way to implement a type converter is to use an annotated class, as described in the section, Section 36.3, "Implementing Type Converter Using Annotations" . But if you want to have complete control over the registration of your type converter, you can implement a custom worker type converter and add it directly to the type converter registry, as described here. Implement the TypeConverter interface To implement your own type converter class, define a class that implements the TypeConverter interface. For example, the following MyOrderTypeConverter class converts an integer value to a MyOrder object, where the integer value is used to initialize the order ID in the MyOrder object. Add the type converter to the registry You can add the custom type converter directly to the type converter registry using code like the following: Where context is the current org.apache.camel.CamelContext instance. The addTypeConverter() method registers the MyOrderTypeConverter class against the specific type conversion, from String.class to MyOrder.class . You can add custom type converters to your Camel applications without having to use the META-INF file. If you are using Spring or Blueprint , then you can just declare a <bean>. 
CamelContext discovers the bean automatically and adds the converters. You can declare multiple <bean>s if you have more classes.
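For illustration, once the custom converter is registered, Camel uses it wherever a MyOrder is requested from a String body. The following route is a minimal sketch (the endpoint URI and the lambda Processor are assumptions made for the example); convertBodyTo() and getBody(Class) are standard Camel APIs that consult the type converter registry:

import org.apache.camel.builder.RouteBuilder;

public class MyOrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:orders")
            // invokes the registered MyOrderTypeConverter (String -> MyOrder)
            .convertBodyTo(MyOrder.class)
            .process(exchange -> {
                // getBody(Class) also resolves the conversion through the registry
                MyOrder order = exchange.getIn().getBody(MyOrder.class);
                exchange.getIn().setBody("Received order: " + order);
            });
    }
}

Sending the message body "123" to direct:orders would therefore produce a MyOrder whose order ID is initialized to 123.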
[ "package org.apache.camel; public interface TypeConverter { <T> T convertTo(Class<T> type, Object value); }", "org.apache.camel.TypeConverter tc = exchange.getContext().getTypeConverter();", "typeconverterregistry = camelContext.getTypeConverter() // Define the behaviour if the TypeConverter already exists typeconverterregistry.setTypeConverterExists(TypeConverterExists.Override);", "package org.apache.camel; import javax.xml.bind.annotation.XmlEnum; /** * What to do if attempting to add a duplicate type converter * * @version */ @XmlEnum public enum TypeConverterExists { Override, Ignore, Fail }", "package com. YourDomain . YourPackageName ; import org.apache.camel. Converter ; import java.io.*; @Converter public class IOConverter { private IOConverter() { } @Converter public static InputStream toInputStream(File file) throws FileNotFoundException { return new BufferedInputStream(new FileInputStream(file)); } @Converter public static String toString(byte[] data, Exchange exchange) { if (exchange != null) { String charsetName = exchange.getProperty(Exchange.CHARSET_NAME, String.class); if (charsetName != null) { try { return new String(data, charsetName); } catch (UnsupportedEncodingException e) { LOG.warn(\"Can't convert the byte to String with the charset \" + charsetName, e); } } } return new String(data); } }", "META-INF/services/org/apache/camel/TypeConverter", "com. PackageName . FooClass", "com. PackageName", "// 1. Non-generic form of signature @FallbackConverter public static Object MethodName ( Class type, Exchange exchange, Object value, TypeConverterRegistry registry ) // 2. Templating form of signature @FallbackConverter public static <T> T MethodName ( Class<T> type, Exchange exchange, Object value, TypeConverterRegistry registry )", "package org.apache.camel.component.file; import org.apache.camel. Converter ; import org.apache.camel. 
FallbackConverter ; import org.apache.camel.Exchange; import org.apache.camel.TypeConverter; import org.apache.camel.spi.TypeConverterRegistry; @Converter public final class GenericFileConverter { private GenericFileConverter() { // Helper Class } @FallbackConverter public static <T> T convertTo(Class<T> type, Exchange exchange, Object value, TypeConverterRegistry registry) { // use a fallback type converter so we can convert the embedded body if the value is GenericFile if (GenericFile.class.isAssignableFrom(value.getClass())) { GenericFile file = (GenericFile) value; Class from = file.getBody().getClass(); TypeConverter tc = registry.lookup(type, from); if (tc != null) { Object body = file.getBody(); return tc.convertTo(type, exchange, body); } } return null; } }", "import org.apache.camel.TypeConverter private class MyOrderTypeConverter implements TypeConverter { public <T> T convertTo(Class<T> type, Object value) { // converter from value to the MyOrder bean MyOrder order = new MyOrder(); order.setId(Integer.parseInt(value.toString())); return (T) order; } public <T> T convertTo(Class<T> type, Exchange exchange, Object value) { // this method with the Exchange parameter will be preferd by Camel to invoke // this allows you to fetch information from the exchange during convertions // such as an encoding parameter or the likes return convertTo(type, value); } public <T> T mandatoryConvertTo(Class<T> type, Object value) { return convertTo(type, value); } public <T> T mandatoryConvertTo(Class<T> type, Exchange exchange, Object value) { return convertTo(type, value); } }", "// Add the custom type converter to the type converter registry context.getTypeConverterRegistry().addTypeConverter(MyOrder.class, String.class, new MyOrderTypeConverter());", "<bean id=\"myOrderTypeConverters\" class=\"...\"/> <camelContext> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/typeconv
Chapter 3. Deploy standalone Multicloud Object Gateway
Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. 
Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click Next . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace .
Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node)
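If you prefer to verify the pods from the command line instead of the web console, a quick check could look like the following sketch (the label selector value is an assumption; listing the whole namespace always works):

# List all pods in the OpenShift Data Foundation namespace and confirm they are Running
oc get pods -n openshift-storage

# Optionally narrow the output to the Multicloud Object Gateway pods (label value assumed)
oc get pods -n openshift-storage -l app=noobaa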
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/deploy-standalone-multicloud-object-gateway
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) bare metal clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Both internal and external OpenShift Data Foundation clusters are supported on bare metal. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/preface-baremetal
Chapter 13. Configuring RBAC policies
Chapter 13. Configuring RBAC policies 13.1. Overview of RBAC policies Role-based access control (RBAC) policies in OpenStack Networking allow granular control over shared neutron networks. OpenStack Networking uses an RBAC table to control sharing of neutron networks among projects, allowing an administrator to control which projects are granted permission to attach instances to a network. As a result, cloud administrators can remove the ability for some projects to create networks and can instead allow them to attach to pre-existing networks that correspond to their project. 13.2. Creating RBAC policies This example procedure demonstrates how to use a role-based access control (RBAC) policy to grant a project access to a shared network. View the list of available networks: View the list of projects: Create an RBAC entry for the web-servers network that grants access to the auditors project ( 4b0b98f8c6c040f38ba4f7146e8680f5 ): As a result, users in the auditors project can connect instances to the web-servers network. 13.3. Reviewing RBAC policies Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies: Run the openstack network rbac show command to view the details of a specific RBAC entry: 13.4. Deleting RBAC policies Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies: Run the openstack network rbac delete command to delete the RBAC policy, using the ID of the RBAC that you want to delete: 13.5. Granting RBAC policy access for external networks You can grant role-based access control (RBAC) policy access to external networks (networks with gateway interfaces attached) using the --action access_as_external parameter. Complete the steps in the following example procedure to create an RBAC policy for the web-servers network and grant access to the engineering project (c717f263785d4679b16a122516247deb): Create a new RBAC policy using the --action access_as_external option: As a result, users in the engineering project are able to view the network or connect instances to it:
[ "openstack network list +--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-2cb18fecc923 192.168.200.0/24 | | bcc16b34-e33e-445b-9fde-dd491817a48a | private | 7fe4a05a-4b81-4a59-8c47-82c965b0e050 10.0.0.0/24 | | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public | 2318dc3b-cff0-43fc-9489-7d4cf48aaab9 172.24.4.224/28 | +--------------------------------------+-------------+-------------------------------------------------------+", "openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers Created a new rbac_policy: +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+", "openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+", "openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709 +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+", "openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+", "openstack network rbac delete 314004d0-2261-4d5e-bda7-0181fcf40709 Deleted rbac_policy: 314004d0-2261-4d5e-bda7-0181fcf40709", "openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_external web-servers Created a new rbac_policy: 
+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_external | | id | ddef112a-c092-4ac1-8914-c714a3d3ba08 | | object_id | 6e437ff0-d20f-4483-b627-c3749399bdca | | object_type | network | | target_project | c717f263785d4679b16a122516247deb | | project_id | c717f263785d4679b16a122516247deb | +----------------+--------------------------------------+", "openstack network list +--------------------------------------+-------------+------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+------------------------------------------------------+ | 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-57eaeac9b904 192.168.10.0/24 | +--------------------------------------+-------------+------------------------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/sec-rbac
Chapter 1. Upgrade overview
Chapter 1. Upgrade overview The upgrade procedure for Red Hat Quay depends on the type of installation you are using. The Red Hat Quay Operator provides a simple method to deploy and manage a Red Hat Quay cluster. This is the preferred procedure for deploying Red Hat Quay on OpenShift. The Red Hat Quay Operator should be upgraded using the Operator Lifecycle Manager (OLM) as described in the section "Upgrading Quay using the Quay Operator". The procedure for upgrading a proof of concept or highly available installation of Red Hat Quay and Clair is documented in the section "Standalone upgrade".
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/upgrade_red_hat_quay/upgrade_overview
Chapter 5. Securing Multicloud Object Gateway
Chapter 5. Securing Multicloud Object Gateway 5.1. Changing the default account credentials to ensure better security in the Multicloud Object Gateway Change and rotate your Multicloud Object Gateway (MCG) account credentials using the command-line interface to prevent issues with applications, and to ensure better account security. 5.1.1. Resetting the noobaa account password Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . Procedure To reset the noobaa account password, run the following command: Example: Example output: Important To access the admin account credentials run the noobaa status command from the terminal: 5.1.2. Regenerating the S3 credentials for the accounts Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . Procedure Get the account name. For listing the accounts, run the following command: Example output: Alternatively, run the oc get noobaaaccount command from the terminal: Example output: To regenerate the noobaa account S3 credentials, run the following command: Once you run the noobaa account regenerate command it will prompt a warning that says "This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.1.3. Regenerating the S3 credentials for the OBC Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . Procedure To get the OBC name, run the following command: Example output: Alternatively, run the oc get obc command from the terminal: Example output: To regenerate the noobaa OBC S3 credentials, run the following command: Once you run the noobaa obc regenerate command it will prompt a warning that says "This will invalidate all connections between the S3 clients and noobaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.2. Enabling secured mode deployment for Multicloud Object Gateway You can specify a range of IP addresses that should be allowed to reach the Multicloud Object Gateway (MCG) load balancer services to enable secure mode deployment. This helps to control the IP addresses that can access the MCG services. Note You can disable the MCG load balancer usage by setting the disableLoadBalancerService variable in the storagecluster custom resource definition (CRD) while deploying OpenShift Data Foundation using the command line interface. This helps to restrict MCG from creating any public resources for private clusters and to disable the MCG service EXTERNAL-IP . For more information, see the Red Hat Knowledgebase article Install Red Hat OpenShift Data Foundation 4.X in internal mode using command line interface . 
For information about disabling MCG load balancer service after deploying OpenShift Data Foundation, see Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation . Prerequisites A running OpenShift Data Foundation cluster. In case of a bare metal deployment, ensure that the load balancer controller supports setting the loadBalancerSourceRanges attribute in the Kubernetes services. Procedure Edit the NooBaa custom resource (CR) to specify the range of IP addresses that can access the MCG services after deploying OpenShift Data Foundation. noobaa The NooBaa CR type that controls the NooBaa system deployment. noobaa The name of the NooBaa CR. For example: loadBalancerSourceSubnets A new field that can be added under spec in the NooBaa CR to specify the IP addresses that should have access to the NooBaa services. In this example, all the IP addresses that are in the subnet 10.0.0.0/16 or 192.168.10.0/32 will be able to access MCG S3 and security token service (STS) while the other IP addresses are not allowed to access. Verification steps To verify if the specified IP addresses are set, in the OpenShift Web Console, run the following command and check if the output matches with the IP addresses provided to MCG:
[ "noobaa account passwd <noobaa_account_name> [options]", "noobaa account passwd FATA[0000] ❌ Missing expected arguments: <noobaa_account_name> Options: --new-password='': New Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in t he shell history --old-password='': Old Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history --retype-new-password='': Retype new Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history Usage: noobaa account passwd <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa account passwd [email protected]", "Enter old-password: [got 24 characters] Enter new-password: [got 7 characters] Enter retype-new-password: [got 7 characters] INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✅ Exists: NooBaa \"noobaa\" INFO[0017] ✅ Exists: Service \"noobaa-mgmt\" INFO[0017] ✅ Exists: Secret \"noobaa-operator\" INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✈\\ufe0f RPC: account.reset_password() Request: {Email:[email protected] VerificationPassword: * Password: *} WARN[0017] RPC: GetConnection creating connection to wss://localhost:58460/rpc/ 0xc000402ae0 INFO[0017] RPC: Connecting websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0017] RPC: Connected websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0020] ✅ RPC: account.reset_password() Response OK: took 2907.1ms INFO[0020] ✅ Updated: \"noobaa-admin\" INFO[0020] ✅ Successfully reset the password for the account \"[email protected]\"", "-------------------- - Mgmt Credentials - -------------------- email : [email protected] password : ***", "noobaa account list", "NAME DEFAULT_RESOURCE PHASE AGE account-test noobaa-default-backing-store Ready 14m17s test2 noobaa-default-backing-store Ready 3m12s", "oc get noobaaaccount", "NAME PHASE AGE account-test Ready 15m test2 Ready 3m59s", "noobaa account regenerate <noobaa_account_name> [options]", "noobaa account regenerate FATA[0000] ❌ Missing expected arguments: <noobaa-account-name> Usage: noobaa account regenerate <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa account regenerate account-test", "INFO[0000] You are about to regenerate an account's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? 
y/n", "INFO[0015] ✅ Exists: Secret \"noobaa-account-account-test\" Connection info: AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : ***", "noobaa obc list", "NAMESPACE NAME BUCKET-NAME STORAGE-CLASS BUCKET-CLASS PHASE default obc-test obc-test-35800e50-8978-461f-b7e0-7793080e26ba default.noobaa.io noobaa-default-bucket-class Bound", "oc get obc", "NAME STORAGE-CLASS PHASE AGE obc-test default.noobaa.io Bound 38s", "noobaa obc regenerate <bucket_claim_name> [options]", "noobaa obc regenerate FATA[0000] ❌ Missing expected arguments: <bucket-claim-name> Usage: noobaa obc regenerate <bucket-claim-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa obc regenerate obc-test", "INFO[0000] You are about to regenerate an OBC's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n", "INFO[0022] ✅ RPC: bucket.read_bucket() Response OK: took 95.4ms ObjectBucketClaim info: Phase : Bound ObjectBucketClaim : kubectl get -n default objectbucketclaim obc-test ConfigMap : kubectl get -n default configmap obc-test Secret : kubectl get -n default secret obc-test ObjectBucket : kubectl get objectbucket obc-default-obc-test StorageClass : kubectl get storageclass default.noobaa.io BucketClass : kubectl get -n default bucketclass noobaa-default-bucket-class Connection info: BUCKET_HOST : s3.default.svc BUCKET_NAME : obc-test-35800e50-8978-461f-b7e0-7793080e26ba BUCKET_PORT : 443 AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : *** Shell commands: AWS S3 Alias : alias s3='AWS_ACCESS_KEY_ID=*** AWS_SECRET_ACCESS_KEY =*** aws s3 --no-verify-ssl --endpoint-url ***' Bucket status: Name : obc-test-35800e50-8978-461f-b7e0-7793080e26ba Type : REGULAR Mode : OPTIMAL ResiliencyStatus : OPTIMAL QuotaStatus : QUOTA_NOT_SET Num Objects : 0 Data Size : 0.000 B Data Size Reduced : 0.000 B Data Space Avail : 13.261 GB Num Objects Avail : 9007199254740991", "oc edit noobaa -n openshift-storage noobaa", "spec: loadBalancerSourceSubnets: s3: [\"10.0.0.0/16\", \"192.168.10.0/32\"] sts: - \"10.0.0.0/16\" - \"192.168.10.0/32\"", "oc get svc -n openshift-storage <s3 | sts> -o=go-template='{{ .spec.loadBalancerSourceRanges }}'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/securing-multicloud-object-gateway
4.46. dhcp
4.46. dhcp 4.46.1. RHSA-2011:1819 - Moderate: dhcp security update Updated dhcp packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. Security Fix CVE-2011-4539 A denial of service flaw was found in the way the dhcpd daemon handled DHCP request packets when regular expression matching was used in "/etc/dhcp/dhcpd.conf". A remote attacker could use this flaw to crash dhcpd. Users of DHCP should upgrade to these updated packages, which contain a backported patch to correct this issue. After installing this update, all DHCP servers will be restarted automatically. 4.46.2. RHBA-2011:1597 - dhcp bug fix and enhancement update Updated dhcp packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. DHCPv6 is the DHCP protocol that supports IPv6 networks. Bug Fixes BZ# 694798 Previously, when multiple DHCP clients were launched at the same time to handle multiple virtual interfaces on the same network interface card (NIC), the clients used the same seed to choose when to renew their leases. Consequently, these virtual interfaces for some clients could have been removed over time. With this update, the dhclient utility uses the Process Identifier (PID) for seeding the random number generator, which fixes the bug. BZ# 694799 If a system was rebooted while a network switch was inoperative, the network connection would recover successfully. However, it was no longer configured to use DHCP even if the dhclient utility had been running in persistent mode. With this update, the dhclient-script file has been modified to refresh the ARP (Address Resolution Protocol) table and the routing table instead of bringing the interface down, which fixes the bug. BZ# 731990 If the system included network interfaces with no hardware address, the dhcpd scan could have experienced a segmentation fault when scanning such an interface. As a consequence, the dhcpd daemon unexpectedly terminated. To prevent this issue, dhcpd now tests a pointer which represents the hardware address of the interface for the NULL value. The dhcp daemon no longer crashes. BZ# 736999 Previously, all source files were compiled with the "-fpie" or "fPIE" flag. As a consequence, the libraries used by dhcp could not have been used to build Perl modules. To fix this problem, all respective dhcp Makefiles have been modified to compile libraries with the "-fpic" or "-fPIC" flag. The libraries used by dhcp are now built without the restrictions. BZ# 736194 Previously, both dhcp and dhclient packages included the dhcp-options(5) and dhcp-eval(5) man pages. As a consequence, a conflict could have occurred when any of these man pages were updated, because dhcp and dhclient packages could have been upgraded separately. 
To prevent the problem from occurring in future updates, shared files of dhcp and dhclient packages have been moved to the dhcp-common package that is required by both dhcp and dhclient as a dependency. Enhancements BZ# 706974 A feature has been backported from dhcp version 4.2.0. This feature allows the DHCPv6 server to be configured to identify DHCPv6 clients in accordance with their link-layer address and their network hardware type. With this update, it is now possible to define a static IPv6 address for the DHCPv6 client with a known link-layer address. BZ# 693381 Previously, the dhcpd daemon ran as root. With this update, new "-user" and "-group" options can be used with dhcpd. These options allow dhcpd to change the effective user and group ID after it starts. The dhcpd and dhcpd6 services now run the dhcpd daemon with the "-user dhcpd -group dhcpd" parameters, which means that the dhcpd daemon runs as the dhcpd user and group instead of root. Users are advised to upgrade to these updated dhcp packages, which fix these bugs and add these enhancements.
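As an illustration of the BZ# 706974 enhancement, a DHCPv6 host declaration that ties a fixed IPv6 address to a known link-layer address could look like the following sketch (the MAC address, IPv6 address, and file location are placeholders, not values taken from the errata):

# /etc/dhcp/dhcpd6.conf -- identify a DHCPv6 client by its link-layer address
host example-client {
    hardware ethernet 52:54:00:12:34:56;   # known link-layer address of the client
    fixed-address6 2001:db8:0:1::10;       # static IPv6 address handed out by dhcpd
}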
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/dhcp
Chapter 3. Configuring SSO for Argo CD using Keycloak
Chapter 3. Configuring SSO for Argo CD using Keycloak After the Red Hat OpenShift GitOps Operator is installed, Argo CD automatically creates a user with admin permissions. To manage multiple users, cluster administrators can use Argo CD to configure Single Sign-On (SSO). 3.1. Prerequisites Red Hat SSO is installed on the cluster. The Red Hat OpenShift GitOps Operator is installed on your OpenShift Container Platform cluster. Argo CD is installed on the cluster. The DeploymentConfig API is available in the cluster. For more information, see "DeploymentConfig [apps.openshift.io/v1]". 3.2. Configuring a new client in Keycloak Dex is installed by default for all the Argo CD instances created by the Operator. However, you can delete the Dex configuration and add Keycloak instead to log in to Argo CD using your OpenShift credentials. Keycloak acts as an identity broker between Argo CD and OpenShift. Procedure To configure Keycloak, follow these steps: Delete the Dex configuration by removing the .spec.sso.dex parameter from the Argo CD custom resource (CR), and save the CR: dex: openShiftOAuth: true resources: limits: cpu: memory: requests: cpu: memory: Set the value of the provider parameter to keycloak in the Argo CD CR. Configure Keycloak by performing one of the following steps: For a secure connection, set the value of the rootCA parameter as shown in the following example: apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: sso: provider: keycloak keycloak: rootCA: "<PEM-encoded-root-certificate>" 1 server: route: enabled: true 1 A custom certificate used to verify the Keycloak's TLS certificate. The Operator reconciles changes in the .spec.sso.keycloak.rootCA parameter and updates the oidc.config parameter with the PEM encoded root certificate in the argocd-cm configuration map. For an insecure connection, leave the value of the rootCA parameter empty and use the oidc.tls.insecure.skip.verify parameter as shown below: apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: extraConfig: oidc.tls.insecure.skip.verify: "true" sso: provider: keycloak keycloak: rootCA: "" Optional: Customize the spec.sso.keycloak field to add the route name for the keycloak provider in the ArgoCD CR. Use this feature to support advanced routing use cases, such as balancing incoming traffic load among multiple Ingress Controller sharding . Add a host parameter in the ArgoCD CR by using the following example YAML: Example ArgoCD CR apiVersion: argoproj.io/v1alpha1 kind: ArgoCD metadata: name: <resource_name> 1 labels: example: route spec: sso: provider: keycloak keycloak: host: <hostname> 2 server: ingress: enabled: true insecure: true 1 Replace <resource_name> with the name of the ArgoCD CR. 2 Replace <hostname> with the name of the host key, for example, sso.test.example.com . To create the ArgoCD CR , run the following command: USD oc create -f <argocd_filename>.yaml -n <your-namespace> To edit the ArgoCD CR , run the following command: USD oc edit -f <argocd_filename>.yaml -n <your_namespace> Save the file to apply the changes. 
To apply the ArgoCD CR, run the following command: USD oc apply -f <argocd_filename>.yaml -n <your_namespace> Verify that the host attribute is added by running the following command: USD oc get route keycloak -n <your_namespace> -o yaml Example output kind: Route metadata: name: keycloak 1 labels: application: keycloak spec: host: sso.test.example.com status: ingress: - host: sso.test.example.com 2 1 Specifies the name of the route. 2 Specifies the name of the host key. Note The Keycloak instance takes 2-3 minutes to install and run. 3.3. Logging in to Keycloak Log in to the Keycloak console to manage identities or roles and define the permissions assigned to the various roles. Prerequisites The default configuration of Dex is removed. Your Argo CD CR must be configured to use the Keycloak SSO provider. Procedure Get the Keycloak route URL for login: USD oc -n argocd get route keycloak NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD keycloak keycloak-default.apps.ci-ln-******.origin-ci-int-aws.dev.**.com keycloak <all> reencrypt None Get the Keycloak pod name that stores the user name and password as environment variables: USD oc -n argocd get pods NAME READY STATUS RESTARTS AGE keycloak-1-2sjcl 1/1 Running 0 45m Get the Keycloak user name: USD oc -n argocd exec keycloak-1-2sjcl -- "env" | grep SSO_ADMIN_USERNAME SSO_ADMIN_USERNAME=Cqid54Ih Get the Keycloak password: USD oc -n argocd exec keycloak-1-2sjcl -- "env" | grep SSO_ADMIN_PASSWORD SSO_ADMIN_PASSWORD=GVXxHifH On the login page, click LOG IN VIA KEYCLOAK . Note You only see the option LOGIN VIA KEYCLOAK after the Keycloak instance is ready. Click Login with OpenShift . Note Login using kubeadmin is not supported. Enter the OpenShift credentials to log in. Optional: By default, any user logged in to Argo CD has read-only access. You can manage the user level access by updating the argocd-rbac-cm config map: policy.csv: <name>, <email>, role:admin 3.4. Uninstalling Keycloak You can delete the Keycloak resources and their relevant configurations by removing the SSO field from the Argo CD Custom Resource (CR) file. After you remove the SSO field, the values in the file look similar to the following: apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: server: route: enabled: true Note A Keycloak application created by using this method is currently not persistent. Additional configurations created in the Argo CD Keycloak realm are deleted when the server restarts. 3.5. Modifying Keycloak resource requests/limits By default, the Keycloak container is created with resource requests and limits. You can change and manage the resource requests. Resource Requests Limits CPU 500m 1000m Memory 512 Mi 1024 Mi Procedure Modify the default resource requirements by patching the Argo CD custom resource (CR): USD oc -n openshift-gitops patch argocd openshift-gitops --type='json' -p='[{"op": "add", "path": "/spec/sso", "value": {"provider": "keycloak", "resources": {"requests": {"cpu": "512m", "memory": "512Mi"}, "limits": {"cpu": "1024m", "memory": "1024Mi"}} }}]' Note Keycloak created by Red Hat OpenShift GitOps only persists the changes that are made by the operator. If the Keycloak instance restarts, any additional configuration created by the Admin in Keycloak is deleted.
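To confirm that the patched values were applied, you could read them back from the CR; the following is a sketch using a JSONPath query (the namespace and resource name follow the patch example above):

# Print the SSO resource requests and limits that the operator reconciles
oc -n openshift-gitops get argocd openshift-gitops -o jsonpath='{.spec.sso.resources}{"\n"}'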
[ "dex: openShiftOAuth: true resources: limits: cpu: memory: requests: cpu: memory:", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: sso: provider: keycloak keycloak: rootCA: \"<PEM-encoded-root-certificate>\" 1 server: route: enabled: true", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: extraConfig: oidc.tls.insecure.skip.verify: \"true\" sso: provider: keycloak keycloak: rootCA: \"\"", "apiVersion: argoproj.io/v1alpha1 kind: ArgoCD metadata: name: <resource_name> 1 labels: example: route spec: sso: provider: keycloak keycloak: host: <hostname> 2 server: ingress: enabled: true insecure: true", "oc create -f <argocd_filename>.yaml -n <your-namespace>", "oc edit -f <argocd_filename>.yaml -n <your_namespace>", "oc apply -f <argocd_filename>.yaml -n <your_namespace>", "oc get route keycloak -n <your_namespace> -o yaml", "kind: Route metadata: name: keycloak 1 labels: application: keycloak spec: host: sso.test.example.com status: ingress: - host: sso.test.example.com 2", "oc -n argocd get route keycloak NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD keycloak keycloak-default.apps.ci-ln-******.origin-ci-int-aws.dev.**.com keycloak <all> reencrypt None", "oc -n argocd get pods NAME READY STATUS RESTARTS AGE keycloak-1-2sjcl 1/1 Running 0 45m", "oc -n argocd exec keycloak-1-2sjcl -- \"env\" | grep SSO_ADMIN_USERNAME SSO_ADMIN_USERNAME=Cqid54Ih", "oc -n argocd exec keycloak-1-2sjcl -- \"env\" | grep SSO_ADMIN_PASSWORD SSO_ADMIN_PASSWORD=GVXxHifH", "policy.csv: <name>, <email>, role:admin", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: server: route: enabled: true", "oc -n openshift-gitops patch argocd openshift-gitops --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/sso\", \"value\": {\"provider\": \"keycloak\", \"resources\": {\"requests\": {\"cpu\": \"512m\", \"memory\": \"512Mi\"}, \"limits\": {\"cpu\": \"1024m\", \"memory\": \"1024Mi\"}} }}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/access_control_and_user_management/configuring-sso-for-argo-cd-using-keycloak
Chapter 11. Using service accounts in applications
Chapter 11. Using service accounts in applications 11.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods Applications inside containers to make API calls for discovery purposes External applications to make API calls for monitoring or integration purposes Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. 11.2. Default service accounts Your OpenShift Container Platform cluster contains default service accounts for cluster management and generates more service accounts for each project. 11.2.1. Default cluster service accounts Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Container Platform infrastructure project ( openshift-infra ) at server start, and given the following roles cluster-wide: Service account Description replication-controller Assigned the system:replication-controller role deployment-controller Assigned the system:deployment-controller role build-controller Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. 11.2.2. Default project service accounts and roles Three service accounts are automatically created in each project: Service account Usage builder Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry. Note The builder service account is not created if the Build cluster capability is not enabled. deployer Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. Note The deployer service account is not created if the DeploymentConfig cluster capability is not enabled. default Used to run all other pods unless they specify a different service account. All service accounts in a project are given the system:image-puller role, which allows pulling images from any image stream in the project using the internal container image registry. 11.2.3. Automatically generated image pull secrets By default, OpenShift Container Platform creates an image pull secret for each service account. Note Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Container Platform 4.16, this service account API token secret is no longer created. After upgrading to 4.16, any existing long-lived service account API token secrets are not deleted and will continue to function. 
For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 11.3. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>
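Section 11.3 notes that you grant a service account permissions by binding it to a role; the following is a minimal sketch using the robot account and project1 namespace from the example above (the choice of the view role is an assumption):

# Grant the view role to the robot service account in project1
oc policy add-role-to-user view -z robot -n project1

# Equivalent form that uses the fully qualified service account user name
oc adm policy add-role-to-user view system:serviceaccount:project1:robot -n project1

A pod then runs under that account by setting serviceAccountName: robot in its spec.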
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/using-service-accounts
Chapter 3. Migrating from Jenkins to OpenShift Pipelines or Tekton
Chapter 3. Migrating from Jenkins to OpenShift Pipelines or Tekton You can migrate your CI/CD workflows from Jenkins to Red Hat OpenShift Pipelines , a cloud-native CI/CD experience based on the Tekton project. 3.1. Comparison of Jenkins and OpenShift Pipelines concepts You can review and compare the following equivalent terms used in Jenkins and OpenShift Pipelines. 3.1.1. Jenkins terminology Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows: Pipeline : Automates the entire process of building, testing, and deploying applications by using Groovy syntax. Node : A machine capable of either orchestrating or executing a scripted pipeline. Stage : A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display the status or progress of tasks. Step : A single task that specifies the exact action to be taken, either by using a command or a script. 3.1.2. OpenShift Pipelines terminology OpenShift Pipelines uses YAML syntax for declarative pipelines and consists of tasks. Some basic terms in OpenShift Pipelines are as follows: Pipeline : A set of tasks in a series, in parallel, or both. Task : A sequence of steps as commands, binaries, or scripts. PipelineRun : Execution of a pipeline with one or more tasks. TaskRun : Execution of a task with one or more steps. Note You can initiate a PipelineRun or a TaskRun with a set of inputs such as parameters and workspaces, and the execution results in a set of outputs and artifacts. Workspace : In OpenShift Pipelines, workspaces are conceptual blocks that serve the following purposes: Storage of inputs, outputs, and build artifacts. Common space to share data among tasks. Mount points for credentials held in secrets, configurations held in config maps, and common tools shared by an organization. Note In Jenkins, there is no direct equivalent of OpenShift Pipelines workspaces. You can think of the control node as a workspace, as it stores the cloned code repository, build history, and artifacts. When a job is assigned to a different node, the cloned code and the generated artifacts are stored in that node, but the control node maintains the build history. 3.1.3. Mapping of concepts The building blocks of Jenkins and OpenShift Pipelines are not equivalent, and a specific comparison does not provide a technically accurate mapping. The following terms and concepts in Jenkins and OpenShift Pipelines correlate in general: Table 3.1. Jenkins and OpenShift Pipelines - basic comparison Jenkins OpenShift Pipelines Pipeline Pipeline and PipelineRun Stage Task Step A step in a task 3.2. Migrating a sample pipeline from Jenkins to OpenShift Pipelines You can use the following equivalent examples to help migrate your build, test, and deploy pipelines from Jenkins to OpenShift Pipelines. 3.2.1. Jenkins pipeline Consider a Jenkins pipeline written in Groovy for building, testing, and deploying: pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } } 3.2.2. 
OpenShift Pipelines pipeline To create a pipeline in OpenShift Pipelines that is equivalent to the preceding Jenkins pipeline, you create the following three tasks: Example build task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: ["make"] workingDir: USD(workspaces.source.path) Example test task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: ["make check"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path) Example deploy task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: ["make deploy"] workingDir: USD(workspaces.source.path) You can combine the three tasks sequentially to form a pipeline in OpenShift Pipelines: Example: OpenShift Pipelines pipeline for building, testing, and deployment apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir 3.3. Migrating from Jenkins plugins to Tekton Hub tasks You can extend the capability of Jenkins by using plugins . To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from Tekton Hub . For example, consider the git-clone task in Tekton Hub, which corresponds to the git plugin for Jenkins. Example: git-clone task from Tekton Hub apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source 3.4. Extending OpenShift Pipelines capabilities using custom tasks and scripts In OpenShift Pipelines, if you do not find the right task in Tekton Hub, or need greater control over tasks, you can create custom tasks and scripts to extend the capabilities of OpenShift Pipelines. Example: A custom task for running the maven test command apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: ["mvn test"] workingDir: USD(workspaces.source.path) Example: Run a custom shell script by providing its path ... steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh ... Example: Run a custom Python script by writing it in the YAML file ... steps: image: python script: | #!/usr/bin/env python3 print("hello from python!") ... 3.5. Comparison of Jenkins and OpenShift Pipelines execution models Jenkins and OpenShift Pipelines offer similar functions but are different in architecture and execution. Table 3.2. Comparison of execution models in Jenkins and OpenShift Pipelines Jenkins OpenShift Pipelines Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes. 
OpenShift Pipelines is serverless and distributed, and there is no central dependency for execution. Containers are launched by the Jenkins controller node through the pipeline. OpenShift Pipelines adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins). Extensibility is achieved by using plugins. Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts. 3.6. Examples of common use cases Both Jenkins and OpenShift Pipelines offer capabilities for common CI/CD use cases, such as: Compiling, building, and deploying images using Apache Maven Extending the core capabilities by using plugins Reusing shareable libraries and custom scripts 3.6.1. Running a Maven pipeline in Jenkins and OpenShift Pipelines You can use Maven in both Jenkins and OpenShift Pipelines workflows for compiling, building, and deploying images. To map your existing Jenkins workflow to OpenShift Pipelines, consider the following examples: Example: Compile and build an image and deploy it to OpenShift using Maven in Jenkins #!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' } Example: Compile and build an image and deploy it to OpenShift using Maven in OpenShift Pipelines. 
apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: "USD(params.repo-url)" - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["-DskipTests", "clean", "compile"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["test"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["package"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd "USD(params.context-path)" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json 3.6.2. Extending the core capabilities of Jenkins and OpenShift Pipelines by using plugins Jenkins has the advantage of a large ecosystem of numerous plugins developed over the years by its extensive user base. You can search and browse the plugins in the Jenkins Plugin Index . OpenShift Pipelines also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable OpenShift Pipelines tasks are available in the Tekton Hub . In addition, OpenShift Pipelines incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the Role-based Authorization Strategy plugin, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system. 3.6.3. Sharing reusable code in Jenkins and OpenShift Pipelines Jenkins shared libraries provide reusable code for parts of Jenkins pipelines. The libraries are shared between Jenkinsfiles to create highly modular pipelines without code repetition. Although there is no direct equivalent of Jenkins shared libraries in OpenShift Pipelines, you can achieve similar workflows by using tasks from the Tekton Hub in combination with custom tasks and scripts. 3.7. Additional resources Understanding OpenShift Pipelines Role-based Access Control
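The pipeline and task objects shown in this chapter are declarative; nothing runs until a PipelineRun is created. The following shell sketch starts the myproject-pipeline defined earlier in this chapter. It is a minimal, non-authoritative example: the myproject-pvc claim backing the shared-dir workspace, and the namespace you run it in, are assumptions and not values taken from this document.

# Create a PipelineRun for the migrated pipeline, binding the shared
# workspace to an existing PersistentVolumeClaim (assumed name: myproject-pvc)
cat <<'EOF' | oc create -f -
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: myproject-pipeline-run-
spec:
  pipelineRef:
    name: myproject-pipeline
  workspaces:
    - name: shared-dir
      persistentVolumeClaim:
        claimName: myproject-pvc
EOF
# List runs to find the generated run name and watch its status
oc get pipelineruns

Binding the workspace to a PersistentVolumeClaim is what lets the build, test, and deploy tasks share the checked-out sources, which is the closest analogue to the Jenkins workspace described earlier in this chapter.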
[ "pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)", "steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh", "steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")", "#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: 
CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json" ]
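Pipelines such as demo-pipeline above resolve their git-clone taskRef only if the git-clone task already exists in the namespace. The following is a hedged sketch of installing it; the availability of the tkn hub subcommand, the catalog URL, and the task version in the path are assumptions rather than values from this document.

# Option 1: install the Tekton Hub task with the tkn CLI
# (requires a tkn build that includes the hub subcommand)
tkn hub install task git-clone
# Option 2: apply the task manifest directly from the community catalog
# (the 0.9 version in the path is an assumption; choose the version you need)
oc apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml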
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/jenkins/migrating-from-jenkins-to-openshift-pipelines_images-other-jenkins-agent
14.8. Starting, Suspending, Resuming, Saving, and Restoring a Guest Virtual Machine
14.8. Starting, Suspending, Resuming, Saving, and Restoring a Guest Virtual Machine This section provides information on starting, suspending, resuming, saving, and restoring guest virtual machines. 14.8.1. Starting a Defined Domain The virsh start domain --console --paused --autodestroy --bypass-cache --force-boot --pass-fds command starts an inactive domain that was already defined but has remained inactive since its last managed save state or a fresh boot. The command can take the following options: --console - boots the domain and attaches to its console --paused - if supported by the driver, boots the domain and then puts it into a paused state --autodestroy - the guest virtual machine is automatically destroyed when the virsh session closes, the connection to libvirt closes, or virsh otherwise exits --bypass-cache - used if the domain is in the managedsave state. If this option is used, it restores the guest virtual machine while avoiding the system cache. Note that this slows down the restore process. --force-boot - discards any managedsave options and causes a fresh boot to occur --pass-fds - a comma-separated list of additional file descriptors that are passed on to the guest virtual machine.
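As a quick illustration of the lifecycle operations named in this section, the following shell sketch uses a hypothetical domain called demo-guest; the domain name and the save-file path are assumptions and not part of the original text.

# Start a defined but inactive domain and attach to its console
virsh start demo-guest --console
# Suspend (pause) the running domain, then resume it
virsh suspend demo-guest
virsh resume demo-guest
# Save the domain state to a file (this stops the domain), then restore it later
virsh save demo-guest /var/lib/libvirt/save/demo-guest.save
virsh restore /var/lib/libvirt/save/demo-guest.save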
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine
Chapter 1. Preparing your Environment for Installation
Chapter 1. Preparing your Environment for Installation Before you install Satellite, ensure that your environment meets the following requirements. 1.1. System Requirements The following requirements apply to the networked base operating system: x86_64 architecture The latest version of Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux 7 Server 4-core 2.0 GHz CPU at a minimum A minimum of 20 GB RAM is required for Satellite Server to function. In addition, a minimum of 4 GB RAM of swap space is also recommended. Satellite running with less RAM than the minimum value might not operate correctly. A unique host name, which can contain lower-case letters, numbers, dots (.) and hyphens (-) A current Red Hat Satellite subscription Administrative user (root) access A system umask of 0022 Full forward and reverse DNS resolution using a fully-qualified domain name Satellite only supports UTF-8 encoding. If your territory is USA and your language is English, set en_US.utf-8 as the system-wide locale settings. For more information about configuring system locale in Red Hat Enterprise Linux, see Configuring System Locale guide . Your Satellite must have the Red Hat Satellite Infrastructure Subscription manifest in your Customer Portal. Satellite must have satellite-capsule-6.x repository enabled and synced. To create, manage, and export a Red Hat Subscription Manifest in the Customer Portal, see Creating and managing manifests for a connected Satellite Server in Subscription Central . Satellite Server and Capsule Server do not support shortnames in the hostnames. When using custom certificates, the Common Name (CN) of the custom certificate must be a fully qualified domain name (FQDN) instead of a shortname. This does not apply to the clients of a Satellite. Before you install Satellite Server, ensure that your environment meets the requirements for installation. Satellite Server must be installed on a freshly provisioned system that serves no other function except to run Satellite Server. The freshly provisioned system must not have the following users provided by external identity providers to avoid conflicts with the local users that Satellite Server creates: apache foreman foreman-proxy postgres pulp puppet qdrouterd qpidd redis tomcat Certified hypervisors Satellite Server is fully supported on both physical systems and virtual machines that run on hypervisors that are supported to run Red Hat Enterprise Linux. For more information about certified hypervisors, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, Red Hat OpenShift Virtualization and Red Hat Enterprise Linux with KVM . SELinux Mode SELinux must be enabled, either in enforcing or permissive mode. Installation with disabled SELinux is not supported. FIPS Mode You can install Satellite on a Red Hat Enterprise Linux system that is operating in FIPS mode. You cannot enable FIPS mode after the installation of Satellite. For more information, see Installing a RHEL 8 system with FIPS mode enabled in the Red Hat Enterprise Linux Security Hardening Guide . For more information about FIPS on Red Hat Enterprise Linux 7 systems, see Enabling FIPS Mode in the Red Hat Enterprise Linux Security Guide . Note Satellite supports DEFAULT and FIPS crypto-policies. The FUTURE crypto-policy is not supported for Satellite and Capsule installations. 
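Before running the installer, it can help to confirm the requirements above from a shell on the target system. This is an informal pre-flight sketch, not an official check; the commands only report values for you to compare against the requirements listed in this section.

# CPU cores and memory (minimums: 4 cores, 20 GB RAM plus swap)
nproc
free -g
# Fully qualified host name and umask (expected umask: 0022)
hostname -f
umask
# System-wide locale (en_US.utf-8 for USA/English systems)
localectl status
# SELinux mode (must be enforcing or permissive) and, on RHEL 8, the active crypto policy
getenforce
update-crypto-policies --show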
Inter-Satellite Synchronization (ISS) In a scenario with air-gapped Satellite Servers, all your Satellite Servers must be on the same Satellite version for ISS Export Sync to work. ISS Network Sync works across all Satellite versions that support it. For more information, see Synchronizing Content Between Satellite Servers in Managing Content . 1.2. Storage Requirements Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 7 The following table details storage requirements for specific directories. These values are based on expected use case scenarios and can vary according to individual environments. The runtime size was measured with Red Hat Enterprise Linux 6, 7, and 8 repositories synchronized. 1.2.1. Red Hat Enterprise Linux 8 Table 1.1. Storage Requirements for a Satellite Server Installation Directory Installation Size Runtime Size /var/log 10 MB 10 GB /var/lib/pgsql 100 MB 20 GB /usr 5 GB Not Applicable /opt/puppetlabs 500 MB Not Applicable /var/lib/pulp 1 MB 300 GB /var/lib/qpidd 25 MB Refer Storage Guidelines For external database servers: /var/lib/pgsql with installation size of 100 MB and runtime size of 20 GB. For detailed information on partitioning and size, refer to the Red Hat Enterprise Linux 8 partitioning guide . 1.2.2. Red Hat Enterprise Linux 7 Table 1.2. Storage Requirements for a Satellite Server Installation Directory Installation Size Runtime Size /var/log 10 MB 10 GB /var/opt/rh/rh-postgresql12 100 MB 20 GB /usr 3 GB Not Applicable /opt 3 GB Not Applicable /opt/puppetlabs 500 MB Not Applicable /var/lib/pulp 1 MB 300 GB /var/lib/qpidd 25 MB Refer Storage Guidelines For external database servers: /var/lib/pgsql with installation size of 100 MB and runtime size of 20 GB. 1.3. Storage Guidelines Consider the following guidelines when installing Satellite Server to increase efficiency. If you mount the /tmp directory as a separate file system, you must use the exec mount option in the /etc/fstab file. If /tmp is already mounted with the noexec option, you must change the option to exec and re-mount the file system. This is a requirement for the puppetserver service to work. Because most Satellite Server data is stored in the /var directory, mounting /var on LVM storage can help the system to scale. The /var/lib/qpidd/ directory uses slightly more than 2 MB per Content Host managed by the goferd service. For example, 10 000 Content Hosts require 20 GB of disk space in /var/lib/qpidd/ . Use high-bandwidth, low-latency storage for the /var/lib/pulp/ directories. As Red Hat Satellite has many operations that are I/O intensive, using high latency, low-bandwidth storage causes performance degradation. Ensure your installation has a speed in the range 60 - 80 Megabytes per second. You can use the storage-benchmark script to get this data. For more information on using the storage-benchmark script, see Impact of Disk Speed on Satellite Operations . File System Guidelines Do not use the GFS2 file system as the input-output latency is too high. Log File Storage Log files are written to /var/log/messages/, /var/log/httpd/ , and /var/lib/foreman-proxy/openscap/content/ . You can manage the size of these files using logrotate . For more information, see Log Rotation in the Red Hat Enterprise Linux 7 System Administrator's Guide . The exact amount of storage you require for log messages depends on your installation and setup. SELinux Considerations for NFS Mount When the /var/lib/pulp directory is mounted using an NFS share, SELinux blocks the synchronization process. 
To avoid this, specify the SELinux context of the /var/lib/pulp directory in the file system table by adding the following lines to /etc/fstab : If NFS share is already mounted, remount it using the above configuration and enter the following command: Duplicated Packages Packages that are duplicated in different repositories are only stored once on the disk. Additional repositories containing duplicate packages require less additional storage. The bulk of storage resides in the /var/lib/pulp/ directory. These end points are not manually configurable. Ensure that storage is available on the /var file system to prevent storage problems. Software Collections Software collections are installed in the /opt/rh/ and /opt/theforeman/ directories. Write and execute permissions by the root user are required for installation to the /opt directory. Symbolic links You cannot use symbolic links for /var/lib/pulp/ . Synchronized RHEL ISO If you plan to synchronize RHEL content ISOs to Satellite, note that all minor versions of Red Hat Enterprise Linux also synchronize. You must plan to have adequate storage on your Satellite to manage this. 1.4. Supported Operating Systems You can install the operating system from a disc, local ISO image, kickstart, or any other method that Red Hat supports. Red Hat Satellite Server is supported on the latest versions of Red Hat Enterprise Linux 8, and Red Hat Enterprise Linux 7 Server that are available at the time when Satellite Server is installed. versions of Red Hat Enterprise Linux including EUS or z-stream are not supported. The following operating systems are supported by the installer, have packages, and are tested for deploying Satellite: Table 1.3. Operating Systems supported by satellite-installer Operating System Architecture Notes Red Hat Enterprise Linux 8 x86_64 only Red Hat Enterprise Linux 7 x86_64 only Before you install Satellite, apply all operating system updates if possible. Red Hat Satellite Server requires a Red Hat Enterprise Linux installation with the @Base package group with no other package-set modifications, and without third-party configurations or software not directly necessary for the direct operation of the server. This restriction includes hardening and other non-Red Hat security software. If you require such software in your infrastructure, install and verify a complete working Satellite Server first, then create a backup of the system before adding any non-Red Hat software. Install Satellite Server on a freshly provisioned system. Red Hat does not support using the system for anything other than running Satellite Server. 1.5. Supported Browsers Satellite supports recent versions of Firefox and Google Chrome browsers. The Satellite web UI and command-line interface support English, Portuguese, Simplified Chinese Traditional Chinese, Korean, Japanese, Italian, Spanish, Russian, French, and German. 1.6. Ports and Firewalls Requirements For the components of Satellite architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls. Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. 
If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol. Integrated Capsule Satellite Server has an integrated Capsule and any host that is directly connected to Satellite Server is a Client of Satellite in the context of this section. This includes the base operating system on which Capsule Server is running. Clients of Capsule Hosts which are clients of Capsules, other than Satellite's integrated Capsule, do not need access to Satellite Server. For more information on Satellite Topology and an illustration of port connections, see Capsule Networking in Planning for Red Hat Satellite . Required ports can change based on your configuration. The following tables indicate the destination port and the direction of network traffic: Table 1.4. Satellite Server incoming traffic Destination Port Protocol Service Source Required For Description 53 TCP and UDP DNS DNS Servers and clients Name resolution DNS (optional) 67 UDP DHCP Client Dynamic IP DHCP (optional) 69 UDP TFTP Client TFTP Server (optional) 443 TCP HTTPS Capsule Red Hat Satellite API Communication from Capsule 443, 80 TCP HTTPS, HTTP Client Content Retrieval Content 443, 80 TCP HTTPS, HTTP Capsule Content Retrieval Content 443, 80 TCP HTTPS, HTTP Client Content Host Registration Capsule CA RPM installation 443 TCP HTTPS Client Content Host registration Initiation Uploading facts Sending installed packages and traces 443 TCP HTTPS Red Hat Satellite Content Mirroring Management 443 TCP HTTPS Red Hat Satellite Capsule API Smart Proxy functionality 5646 TCP AMQP Capsule Katello agent Forward message to Qpid dispatch router on Satellite (optional) 5910 - 5930 TCP HTTPS Browsers Compute Resource's virtual console 8000 TCP HTTP Client Provisioning templates Template retrieval for client installers, iPXE or UEFI HTTP Boot 8000 TCP HTTPS Client PXE Boot Installation 8140 TCP HTTPS Client Puppet agent Client updates (optional) 9090 TCP HTTPS Client OpenSCAP Configure Client 9090 TCP HTTPS Discovered Node Discovery Host discovery and provisioning 9090 TCP HTTPS Red Hat Satellite Capsule API Capsule functionality Any managed host that is directly connected to Satellite Server is a client in this context because it is a client of the integrated Capsule. This includes the base operating system on which a Capsule Server is running. A DHCP Capsule performs ICMP ping or TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free. This behavior can be turned off using satellite-installer --foreman-proxy-dhcp-ping-free-ip=false . Note Some outgoing traffic returns to Satellite to enable internal communication and security operations. Table 1.5. 
Satellite Server outgoing traffic Destination Port Protocol Service Destination Required For Description ICMP ping Client DHCP Free IP checking (optional) 7 TCP echo Client DHCP Free IP checking (optional) 22 TCP SSH Target host Remote execution Run jobs 22, 16514 TCP SSH SSH/TLS Compute Resource Satellite originated communications, for compute resources in libvirt 53 TCP and UDP DNS DNS Servers on the Internet DNS Server Resolve DNS records (optional) 53 TCP and UDP DNS DNS Server Capsule DNS Validation of DNS conflicts (optional) 53 TCP and UDP DNS DNS Server Orchestration Validation of DNS conflicts 68 UDP DHCP Client Dynamic IP DHCP (optional) 80 TCP HTTP Remote repository Content Sync Remote yum repository 389, 636 TCP LDAP, LDAPS External LDAP Server LDAP LDAP authentication, necessary only if external authentication is enabled. The port can be customized when LDAPAuthSource is defined 443 TCP HTTPS Satellite Capsule Capsule Configuration management Template retrieval OpenSCAP Remote Execution result upload 443 TCP HTTPS Amazon EC2, Azure, Google GCE Compute resources Virtual machine interactions (query/create/destroy) (optional) 443 TCP HTTPS console.redhat.com Red Hat Cloud plugin API calls 443 TCP HTTPS cdn.redhat.com Content Sync Red Hat CDN 443 TCP HTTPS api.access.redhat.com SOS report Assisting support cases filed through the Red Hat Customer Portal (optional) 443 TCP HTTPS cert-api.access.redhat.com Telemetry data upload and report 443 TCP HTTPS Capsule Content mirroring Initiation 443 TCP HTTPS Infoblox DHCP Server DHCP management When using Infoblox for DHCP, management of the DHCP leases (optional) 623 Client Power management BMC On/Off/Cycle/Status 5000 TCP HTTPS OpenStack Compute Resource Compute resources Virtual machine interactions (query/create/destroy) (optional) 5646 TCP AMQP Satellite Server Katello agent Forward message to Qpid dispatch router on Capsule (optional) 5671 Qpid Remote install Send install command to client 5671 Dispatch router (hub) Remote install Forward message to dispatch router on Satellite 5671 Satellite Server Remote install for Katello agent Send install command to client 5671 Satellite Server Remote install for Katello agent Forward message to dispatch router on Satellite 5900 - 5930 TCP SSL/TLS Hypervisor noVNC console Launch noVNC console 7911 TCP DHCP, OMAPI DHCP Server DHCP The DHCP target is configured using --foreman-proxy-dhcp-server and defaults to localhost ISC and remote_isc use a configurable port that defaults to 7911 and uses OMAPI 8443 TCP HTTPS Client Discovery Capsule sends reboot command to the discovered host (optional) 9090 TCP HTTPS Capsule Capsule API Management of Capsules 1.7. Enabling Connections from a Client to Satellite Server Capsules and Content Hosts that are clients of a Satellite Server's internal Capsule require access through Satellite's host-based firewall and any network-based firewalls. Use this procedure to configure the host-based firewall on the system that Satellite is installed on, to enable incoming connections from Clients, and to make the configuration persistent across system reboots. For more information on the ports used, see Ports and Firewalls Requirements . 
Procedure To open the ports for client to Satellite communication, enter the following command on the base operating system that you want to install Satellite on: Make the changes persistent: Verification Enter the following command: For more information, see Using and Configuring firewalld in the Red Hat Enterprise Linux 8 Security Guide , and Getting Started with firewalld in the Red Hat Enterprise Linux 7 Security Guide . 1.8. Verifying DNS resolution Verify the full forward and reverse DNS resolution using a fully-qualified domain name to prevent issues while installing Satellite. Procedure Ensure that the host name and local host resolve correctly: Successful name resolution results in output similar to the following: To avoid discrepancies with static and transient host names, set all the host names on the system by entering the following command: For more information, see the Configuring Host Names Using hostnamectl in the Red Hat Enterprise Linux 7 Networking Guide . Warning Name resolution is critical to the operation of Satellite. If Satellite cannot properly resolve its fully qualified domain name, tasks such as content management, subscription management, and provisioning will fail. 1.9. Tuning Satellite Server with Predefined Profiles If your Satellite deployment includes more than 5000 hosts, you can use predefined tuning profiles to improve performance of Satellite. Note that you cannot use tuning profiles on Capsules. You can choose one of the profiles depending on the number of hosts your Satellite manages and available hardware resources. The tuning profiles are available in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes directory. When you run the satellite-installer command with the --tuning option, deployment configuration settings are applied to Satellite in the following order: The default tuning profile defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml file The tuning profile that you want to apply to your deployment and is defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ directory Optional: If you have configured a /etc/foreman-installer/custom-hiera.yaml file, Satellite applies these configuration settings. Note that the configuration settings that are defined in the /etc/foreman-installer/custom-hiera.yaml file override the configuration settings that are defined in the tuning profiles. Therefore, before applying a tuning profile, you must compare the configuration settings that are defined in the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml , the tuning profile that you want to apply and your /etc/foreman-installer/custom-hiera.yaml file, and remove any duplicated configuration from the /etc/foreman-installer/custom-hiera.yaml file. default Number of managed hosts: 0 - 5000 RAM: 20G Number of CPU cores: 4 medium Number of managed hosts: 5001 - 10000 RAM: 32G Number of CPU cores: 8 large Number of managed hosts: 10001 - 20000 RAM: 64G Number of CPU cores: 16 extra-large Number of managed hosts: 20001 - 60000 RAM: 128G Number of CPU cores: 32 extra-extra-large Number of managed hosts: 60000+ RAM: 256G Number of CPU cores: 48+ Procedure Optional: If you have configured the custom-hiera.yaml file on Satellite Server, back up the /etc/foreman-installer/custom-hiera.yaml file to custom-hiera.original . 
You can use the backup file to restore the /etc/foreman-installer/custom-hiera.yaml file to its original state if it becomes corrupted: Optional: If you have configured the custom-hiera.yaml file on Satellite Server, review the definitions of the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml and the tuning profile that you want to apply in /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ . Compare the configuration entries against the entries in your /etc/foreman-installer/custom-hiera.yaml file and remove any duplicated configuration settings in your /etc/foreman-installer/custom-hiera.yaml file. Enter the satellite-installer command with the --tuning option for the profile that you want to apply. For example, to apply the medium tuning profile settings, enter the following command:
[ "nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2", "restorecon -R /var/lib/pulp", "firewall-cmd --add-port=\"53/udp\" --add-port=\"53/tcp\" --add-port=\"67/udp\" --add-port=\"69/udp\" --add-port=\"80/tcp\" --add-port=\"443/tcp\" --add-port=\"5647/tcp\" --add-port=\"8000/tcp\" --add-port=\"9090/tcp\" --add-port=\"8140/tcp\"", "firewall-cmd --runtime-to-permanent", "firewall-cmd --list-all", "ping -c1 localhost ping -c1 `hostname -f` # my_system.domain.com", "ping -c1 localhost PING localhost (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms ping -c1 `hostname -f` PING hostname.gateway (XX.XX.XX.XX) 56(84) bytes of data. 64 bytes from hostname.gateway (XX.XX.XX.XX): icmp_seq=1 ttl=64 time=0.019 ms --- localhost.gateway ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms", "hostnamectl set-hostname name", "cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original", "satellite-installer --tuning medium" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_connected_network_environment/Preparing_your_Environment_for_Installation_satellite
Validation and troubleshooting
Validation and troubleshooting OpenShift Container Platform 4.18 Validating and troubleshooting an OpenShift Container Platform installation Red Hat OpenShift Documentation Team
[ "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.31.3 control-plane-1.example.com Ready master 41m v1.31.3 
control-plane-2.example.com Ready master 45m v1.31.3 compute-2.example.com Ready worker 38m v1.31.3 compute-3.example.com Ready worker 33m v1.31.3 control-plane-3.example.com Ready master 41m v1.31.3", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%", "./openshift-install gather bootstrap --dir <installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "journalctl -b -f -u bootkube.service", "for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done", "tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log", "journalctl -b -f -u kubelet.service -u crio.service", "sudo tail -f /var/log/containers/*", "oc adm node-logs --role=master -u kubelet", "oc adm node-logs --role=master --path=openshift-apiserver", "cat ~/<installation_directory>/.openshift_install.log 1", "./openshift-install create cluster --dir <installation_directory> --log-level debug 1", "./openshift-install destroy cluster --dir <installation_directory> 1", "rm -rf <installation_directory>" ]
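The verification commands above can be combined into a small wait loop when you want a single signal that the installation has converged. This is a convenience sketch rather than part of the official validation procedure; the 30-minute budget (60 attempts, 30 seconds apart) is an arbitrary assumption.

# Poll until the ClusterVersion object reports Available
for i in $(seq 1 60); do
  status=$(oc get clusterversion version \
    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}')
  if [ "$status" = "True" ]; then
    echo "Cluster version is Available"
    break
  fi
  echo "Waiting for ClusterVersion to become Available (attempt $i)..."
  sleep 30
done
# Confirm that no cluster Operators are still progressing or degraded
oc get clusteroperators.config.openshift.io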
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/validation_and_troubleshooting/index
Chapter 2. The Ceph File System Metadata Server
Chapter 2. The Ceph File System Metadata Server Additional Resources As a storage administrator, you can learn about the different states of the Ceph File System (CephFS) Metadata Server (MDS), along with learning about CephFS MDS ranking mechanics, configuring the MDS standby daemon, and cache size limits. Knowing these concepts can enable you to configure the MDS daemons for a storage environment. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Metadata Server daemons ( ceph-mds ). See the Management of MDS service using the Ceph Orchestrator section in the Red Hat Ceph Storage File System Guide for details on configuring MDS daemons. 2.1. Metadata Server daemon states The Metadata Server (MDS) daemons operate in two states: Active - manages metadata for files and directories stores on the Ceph File System. Standby - serves as a backup, and becomes active when an active MDS daemon becomes unresponsive. By default, a Ceph File System uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons. You can configure the file system to use multiple active MDS daemons so that you can scale metadata performance for larger workloads. The active MDS daemons dynamically share the metadata workload when metadata load patterns change. Note that systems with multiple active MDS daemons still require standby MDS daemons to remain highly available. What Happens When the Active MDS Daemon Fails When the active MDS becomes unresponsive, a Ceph Monitor daemon waits a number of seconds equal to the value specified in the mds_beacon_grace option. If the active MDS is still unresponsive after the specified time period has passed, the Ceph Monitor marks the MDS daemon as laggy . One of the standby daemons becomes active, depending on the configuration. Note To change the value of mds_beacon_grace , add this option to the Ceph configuration file and specify the new value. 2.2. Metadata Server ranks Each Ceph File System (CephFS) has a number of ranks, one by default, which starts at zero. Ranks define how the metadata workload is shared between multiple Metadata Server (MDS) daemons. The number of ranks is the maximum number of MDS daemons that can be active at one time. Each MDS daemon handles a subset of the CephFS metadata that is assigned to that rank. Each MDS daemon initially starts without a rank. The Ceph Monitor assigns a rank to the daemon. The MDS daemon can only hold one rank at a time. Daemons only lose ranks when they are stopped. The max_mds setting controls how many ranks will be created. The actual number of ranks in the CephFS is only increased if a spare daemon is available to accept the new rank. Rank States Ranks can be: Up - A rank that is assigned to the MDS daemon. Failed - A rank that is not associated with any MDS daemon. Damaged - A rank that is damaged; its metadata is corrupted or missing. Damaged ranks are not assigned to any MDS daemons until the operator fixes the problem, and uses the ceph mds repaired command on the damaged rank. 2.3. Metadata Server cache size limits You can limit the size of the Ceph File System (CephFS) Metadata Server (MDS) cache by: A memory limit : Use the mds_cache_memory_limit option. Red Hat recommends a value between 8 GB and 64 GB for mds_cache_memory_limit . Setting more cache can cause issues with recovery. This limit is approximately 66% of the desired maximum memory use of the MDS. Note The default value for mds_cache_memory_limit is 4 GB. 
Since the default value is outside the recommended range, Red Hat recommends setting the value within the mentioned range. Important Red Hat recommends using memory limits instead of inode count limits. Inode count : Use the mds_cache_size option. By default, limiting the MDS cache by inode count is disabled. In addition, you can specify a cache reservation by using the mds_cache_reservation option for MDS operations. The cache reservation is limited as a percentage of the memory or inode limit and is set to 5% by default. The intent of this parameter is to have the MDS maintain an extra reserve of memory for its cache for new metadata operations to use. As a consequence, the MDS should in general operate below its memory limit because it will recall old state from clients to drop unused metadata in its cache. The mds_cache_reservation option replaces the mds_health_cache_threshold option in all situations, except when MDS nodes send a health alert to the Ceph Monitors indicating the cache is too large. By default, mds_health_cache_threshold is 150% of the maximum cache size. Be aware that the cache limit is not a hard limit. Potential bugs in the CephFS client or MDS or misbehaving applications might cause the MDS to exceed its cache size. The mds_health_cache_threshold option configures the storage cluster health warning message, so that operators can investigate why the MDS cannot shrink its cache. Additional Resources See the Metadata Server daemon configuration reference section in the Red Hat Ceph Storage File System Guide for more information. 2.4. File system affinity You can configure a Ceph File System (CephFS) to prefer a particular Ceph Metadata Server (MDS) over another Ceph MDS. For example, you have MDS running on newer, faster hardware that you want to give preference to over a standby MDS running on older, maybe slower hardware. You can specify this preference by setting the mds_join_fs option, which enforces this file system affinity. Ceph Monitors give preference to MDS standby daemons with mds_join_fs equal to the file system name with the failed rank. The standby-replay daemons are selected before choosing another standby daemon. If no standby daemon exists with the mds_join_fs option, then the Ceph Monitors will choose an ordinary standby for replacement or any other available standby as a last resort. The Ceph Monitors will periodically examine Ceph File Systems to see if a standby with a stronger affinity is available to replace the Ceph MDS that has a lower affinity. Additional Resources See the Configuring file system affinity section in the Red Hat Ceph Storage File System Guide for details. 2.5. Management of MDS service using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to deploy the MDS service. By default, a Ceph File System (CephFS) uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons. This section covers the following administrative tasks: Deploying the MDS service using the command line interface . Deploying the MDS service using the service specification . Removing the MDS service using the Ceph Orchestrator . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. 2.5.1. 
Deploying the MDS service using the command line interface Using the Ceph Orchestrator, you can deploy the Metadata Server (MDS) service using the placement specification in the command line interface. Ceph File System (CephFS) requires one or more MDS. Note Ensure you have at least two pools, one for Ceph file system (CephFS) data and one for CephFS metadata. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example There are two ways of deploying MDS daemons using placement specification: Method 1 Use ceph fs volume to create the MDS daemons. This creates the CephFS volume and pools associated with the CephFS, and also starts the MDS service on the hosts. Syntax Note By default, replicated pools are created for this command. Example Method 2 Create the pools, CephFS, and then deploy MDS service using placement specification: Create the pools for CephFS: Syntax Example Typically, the metadata pool can start with a conservative number of Placement Groups (PGs) as it generally has far fewer objects than the data pool. It is possible to increase the number of PGs if needed. The pool sizes range from 64 PGs to 512 PGs. Size the data pool is proportional to the number and sizes of files you expect in the file system. Important For the metadata pool, consider to use: A higher replication level because any data loss to this pool can make the whole file system inaccessible. Storage with lower latency such as Solid-State Drive (SSD) disks because this directly affects the observed latency of file system operations on clients. Create the file system for the data pools and metadata pools: Syntax Example Deploy MDS service using the ceph orch apply command: Syntax Example Verification List the service: Example Check the CephFS status: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph File System (CephFS). For information on setting the pool values, see Setting number of placement groups in a pool . 2.5.2. Deploying the MDS service using the service specification Using the Ceph Orchestrator, you can deploy the MDS service using the service specification. Note Ensure you have at least two pools, one for the Ceph File System (CephFS) data and one for the CephFS metadata. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure Create the mds.yaml file: Example Edit the mds.yaml file to include the following details: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Log into the Cephadm shell: Example Navigate to the following directory: Example Deploy MDS service using service specification: Syntax Example Once the MDS services is deployed and functional, create the CephFS: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph File System (CephFS). 2.5.3. Removing the MDS service using the Ceph Orchestrator You can remove the service using the ceph orch rm command. Alternatively, you can remove the file system and the associated pools. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. 
Hosts are added to the cluster. At least one MDS daemon deployed on the hosts. Procedure There are two ways of removing MDS daemons from the cluster: Method 1 Remove the CephFS volume, associated pools, and the services: Log into the Cephadm shell: Example Set the configuration parameter mon_allow_pool_delete to true : Example Remove the file system: Syntax Example This command will remove the file system, its data, and metadata pools. It also tries to remove the MDS using the enabled ceph-mgr Orchestrator module. Method 2 Use the ceph orch rm command to remove the MDS service from the entire cluster: List the service: Example Remove the service Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the MDS service using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the MDS service using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. 2.6. Configuring file system affinity Set the Ceph File System (CephFS) affinity for a particular Ceph Metadata Server (MDS). Prerequisites A healthy, and running Ceph File System. Root-level access to a Ceph Monitor node. Procedure Check the current state of a Ceph File System: Example Set the file system affinity: Syntax Example After a Ceph MDS failover event, the file system favors the standby daemon for which the affinity is set. Example 1 The mds.b daemon now has the join_fscid=27 in the file system dump output. Important If a file system is in a degraded or undersized state, then no failover will occur to enforce the file system affinity. Additional Resources See the File system affinity section in the Red Hat Ceph Storage File System Guide for more details. 2.7. Configuring multiple active Metadata Server daemons Configure multiple active Metadata Server (MDS) daemons to scale metadata performance for large systems. Important Do not convert all standby MDS daemons to active ones. A Ceph File System (CephFS) requires at least one standby MDS daemon to remain highly available. Prerequisites Ceph administration capabilities on the MDS node. Root-level access to a Ceph Monitor node. Procedure Set the max_mds parameter to the desired number of active MDS daemons: Syntax Example This example increases the number of active MDS daemons to two in the CephFS called cephfs Note Ceph only increases the actual number of ranks in the CephFS if a spare MDS daemon is available to take the new rank. Verify the number of active MDS daemons: Syntax Example Additional Resources See the Metadata Server daemons states section in the Red Hat Ceph Storage File System Guide for more details. See the Decreasing the number of active MDS Daemons section in the Red Hat Ceph Storage File System Guide for more details. See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details. 2.8. Configuring the number of standby daemons Each Ceph File System (CephFS) can specify the required number of standby daemons to be considered healthy. This number also includes the standby-replay daemon waiting for a rank failure. Prerequisites Root-level access to a Ceph Monitor node. Procedure Set the expected number of standby daemons for a particular CephFS: Syntax Note Setting the NUMBER to zero disables the daemon health check. Example This example sets the expected standby daemon count to two. 2.9. 
Configuring the standby-replay Metadata Server Configure each Ceph File System (CephFS) by adding a standby-replay Metadata Server (MDS) daemon. Doing this reduces failover time if the active MDS becomes unavailable. This specific standby-replay daemon follows the active MDS's metadata journal. The standby-replay daemon is only used by the active MDS of the same rank, and is not available to other ranks. Important If using standby-replay, then every active MDS must have a standby-replay daemon. Prerequisites Root-level access to a Ceph Monitor node. Procedure Set the standby-replay for a particular CephFS: Syntax Example In this example, the Boolean value is 1 , which enables the standby-replay daemons to be assigned to the active Ceph MDS daemons. Additional Resources See the Using the ceph mds fail command section in the Red Hat Ceph Storage File System Guide for details. 2.10. Ephemeral pinning policies An ephemeral pin is a static partition of subtrees, and can be set with a policy using extended attributes. A policy can automatically set ephemeral pins to directories. When setting an ephemeral pin to a directory, it is automatically assigned to a particular rank, as to be uniformly distributed across all Ceph MDS ranks. Determining which rank gets assigned is done by a consistent hash and the directory's inode number. Ephemeral pins do not persist when the directory's inode is dropped from file system cache. When failing over a Ceph Metadata Server (MDS), the ephemeral pin is recorded in its journal so the Ceph MDS standby server does not lose this information. There are two types of policies for using ephemeral pins: Note The attr and jq packages must be installed as a prerequisite for the ephemeral pinning policies. Distributed This policy enforces that all of a directory's immediate children must be ephemerally pinned. For example, use a distributed policy to spread a user's home directory across the entire Ceph File System cluster. Enable this policy by setting the ceph.dir.pin.distributed extended attribute. Syntax Example Random This policy enforces a chance that any descendent subdirectory might be ephemerally pinned. You can customize the percent of directories that can be ephemerally pinned. Enable this policy by setting the ceph.dir.pin.random and setting a percentage. Red Hat recommends setting this percentage to a value smaller than 1% ( 0.01 ). Having too many subtree partitions can cause slow performance. You can set the maximum percentage by setting the mds_export_ephemeral_random_max Ceph MDS configuration option. The parameters mds_export_ephemeral_distributed and mds_export_ephemeral_random are already enabled. Syntax Example After enabling pinning, you can verify by running either of the following commands: Syntax Example Example If the directory is pinned, the value of export_pin is 0 if it is pinned to rank 0 , 1 if it is pinned to rank 1 , and so on. If the directory is not pinned, the value is -1 . To remove a partitioning policy, remove the extended attributes or set the value to 0 . Syntax Example You can verify by running either of the following commands .Syntax Example For export pins, remove the extended attribute or set the extended attribute to -1 . Syntax Example Additional Resources See the Manually pinning directory trees to a particular rank section in the Red Hat Ceph Storage File System Guide for details on manually setting pins. 2.11. 
Manually pinning directory trees to a particular rank Sometimes it might be desirable to override the dynamic balancer with explicit mappings of metadata to a particular Ceph Metadata Server (MDS) rank. You can do this manually to evenly spread the load of an application or to limit the impact of users' metadata requests on the Ceph File System cluster. Manually pinning a directory is also known as setting an export pin, which is done with the ceph.dir.pin extended attribute. A directory's export pin is inherited from its closest parent directory, but can be overwritten by setting an export pin on that directory. Setting an export pin on a directory affects all of its sub-directories, for example: 1 Directories a/ and a/b both start without an export pin set. 2 Directories a/ and a/b are now pinned to rank 1 . 3 Directory a/b is now pinned to rank 0 and directory a/ and the rest of its sub-directories are still pinned to rank 1 . Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph File System. Root-level access to the CephFS client. Installation of the attr package. Procedure Set the export pin on a directory: Syntax Example Additional Resources See the Ephemeral pinning policies section in the Red Hat Ceph Storage File System Guide for details on automatically setting pins. 2.12. Decreasing the number of active Metadata Server daemons You can decrease the number of active Ceph File System (CephFS) Metadata Server (MDS) daemons. Prerequisites The rank that you will remove must be active first, meaning that you must have the same number of MDS daemons as specified by the max_mds parameter. Root-level access to a Ceph Monitor node. Procedure Set the same number of MDS daemons as specified by the max_mds parameter: Syntax Example On a node with administration capabilities, change the max_mds parameter to the desired number of active MDS daemons: Syntax Example Wait for the storage cluster to stabilize to the new max_mds value by watching the Ceph File System status. Verify the number of active MDS daemons: Syntax Example Additional Resources See the Metadata Server daemons states section in the Red Hat Ceph Storage File System Guide . See the Configuring multiple active Metadata Server daemons section in the Red Hat Ceph Storage File System Guide . See the Red Hat Ceph Storage Installation Guide for details on installing a Red Hat Ceph Storage cluster. 2.13. Viewing metrics for Ceph metadata server clients You can use the command-line interface to view the metrics for the Ceph metadata server (MDS). CephFS uses Perf Counters to track metrics. You can view the metrics using the counter dump command. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Get the name of the MDS service: Syntax Check the MDS per client metrics: Syntax Example Client metrics description CephFS exports client metrics as Labeled Perf Counters, which you can use to monitor client performance. CephFS exports the following client metrics: NAME TYPE DESCRIPTION cap_hits Gauge Percentage of file capability hits over total number of caps. cap_miss Gauge Percentage of file capability misses over total number of caps. avg_read_latency Gauge Mean value of the read latencies. avg_write_latency Gauge Mean value of the write latencies. avg_metadata_latency Gauge Mean value of the metadata latencies. dentry_lease_hits Gauge Percentage of dentry lease hits handed out over the total dentry lease requests. dentry_lease_miss Gauge Percentage of dentry lease misses handed out over the total dentry lease requests.
opened_files Gauge Number of opened files. opened_inodes Gauge Number of opened inodes. pinned_icaps Gauge Number of pinned inode caps. total_inodes Gauge Total number of inodes. total_read_ops Gauge Total number of read operations generated by all processes. total_read_size Gauge Number of bytes read in input/output operations generated by all processes. total_write_ops Gauge Total number of write operations generated by all processes. total_write_size Gauge Number of bytes written in input/output operations generated by all processes.
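For reference, a minimal command sequence for pulling these per-client metrics might look like the following. This is only a sketch: the file system name ( cephfs ) and the MDS daemon name are placeholders that depend on your deployment, and the jq filter is just one possible way to narrow the output to the client metrics section.
ceph orch ps --daemon_type=mds
ceph tell mds.cephfs.host01.xyzabc counter dump | jq '.[] | select(.key == "mds_client_metrics-cephfs")'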
[ "cephadm shell", "ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph fs volume create test --placement=\"2 host01 host02\"", "ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]", "ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64", "ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL", "ceph fs new test cephfs_metadata cephfs_data", "ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mds test --placement=\"2 host01 host02\"", "ceph orch ls", "ceph fs ls ceph fs status", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "touch mds.yaml", "service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3", "service_type: mds service_id: fs_name placement: hosts: - host01 - host02", "cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml", "cd /var/lib/ceph/mds/", "cephadm shell", "cd /var/lib/ceph/mds/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mds.yaml", "ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL", "ceph fs new test metadata_pool data_pool", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "cephadm shell", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it", "ceph fs volume rm cephfs-new --yes-i-really-mean-it", "ceph orch ls", "ceph orch rm SERVICE_NAME", "ceph orch rm mds.test", "ceph orch ps", "ceph orch ps", "ceph fs dump dumped fsmap epoch 399 Filesystem 'cephfs01' (27) e399 max_mds 1 in 0 up {0=20384} failed damaged stopped [mds.a{0:20384} state up:active seq 239 addr [v2:127.0.0.1:6854/966242805,v1:127.0.0.1:6855/966242805]] Standby daemons: [mds.b{-1:10420} state up:standby seq 2 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]]", "ceph config set STANDBY_DAEMON mds_join_fs FILE_SYSTEM_NAME", "ceph config set mds.b mds_join_fs cephfs01", "ceph fs dump dumped fsmap epoch 405 e405 Filesystem 'cephfs01' (27) max_mds 1 in 0 up {0=10420} failed damaged stopped [mds.b{0:10420} state up:active seq 274 join_fscid=27 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]] 1 Standby daemons: [mds.a{-1:10720} state up:standby seq 2 addr [v2:127.0.0.1:6854/1340357658,v1:127.0.0.1:6855/1340357658]]", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 2", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients ====== +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | STANDBY MDS | +-------------+ | node3 | +-------------+", "ceph fs set FS_NAME standby_count_wanted NUMBER", "ceph fs set cephfs standby_count_wanted 2", "ceph fs set FS_NAME allow_standby_replay 1", "ceph fs set cephfs allow_standby_replay 1", "setfattr -n ceph.dir.pin.distributed -v 1 
DIRECTORY_PATH", "setfattr -n ceph.dir.pin.distributed -v 1 dir1/", "setfattr -n ceph.dir.pin.random -v PERCENTAGE_IN_DECIMAL DIRECTORY_PATH", "setfattr -n ceph.dir.pin.random -v 0.01 dir1/", "getfattr -n ceph.dir.pin.random DIRECTORY_PATH getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH", "getfattr -n ceph.dir.pin.distributed dir1/ file: dir1/ ceph.dir.pin.distributed=\"1\" getfattr -n ceph.dir.pin.random dir1/ file: dir1/ ceph.dir.pin.random=\"0.01\"", "ceph tell mds.a get subtrees | jq '.[] | [.dir.path, .auth_first, .export_pin]'", "setfattr -n ceph.dir.pin.distributed -v 0 DIRECTORY_PATH", "setfattr -n ceph.dir.pin.distributed -v 0 dir1/", "getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH", "getfattr -n ceph.dir.pin.distributed dir1/", "setfattr -n ceph.dir.pin -v -1 DIRECTORY_PATH", "setfattr -n ceph.dir.pin -v -1 dir1/", "mkdir -p a/b 1 setfattr -n ceph.dir.pin -v 1 a/ 2 setfattr -n ceph.dir.pin -v 0 a/b 3", "setfattr -n ceph.dir.pin -v RANK PATH_TO_DIRECTORY", "setfattr -n ceph.dir.pin -v 2 cephfs/home", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | +-------------+", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 1", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------|--------+ +-----------------+----------+-------+-------+ | POOl | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+", "ceph orch ps | grep mds", "ceph tell MDS_SERVICE_NAME counter dump", "ceph tell mds.cephfs.ceph2-hk-n-0mfqao-node4.isztbk counter dump [ { \"key\": \"mds_client_metrics\", \"value\": [ { \"labels\": { \"fs_name\": \"cephfs\", \"id\": \"24379\" }, \"counters\": { \"num_clients\": 4 } } ] }, { \"key\": \"mds_client_metrics-cephfs\", \"value\": [ { \"labels\": { \"client\": \"client.24413\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } }, { \"labels\": { \"client\": \"client.24502\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 921403, \"cap_miss\": 
102382, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17117, \"dentry_lease_miss\": 204710, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24508\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 928694, \"cap_miss\": 103183, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17217, \"dentry_lease_miss\": 206348, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24520\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } } ] } ]" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/the-ceph-file-system-metadata-server
Chapter 71. service
Chapter 71. service This chapter describes the commands under the service command. 71.1. service create Create new service Usage: Table 71.1. Positional arguments Value Summary <type> New service type (compute, image, identity, volume, etc) Table 71.2. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New service name --description <description> New service description --enable Enable service (default) --disable Disable service Table 71.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 71.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.2. service delete Delete service(s) Usage: Table 71.7. Positional arguments Value Summary <service> Service(s) to delete (type, name or id) Table 71.8. Command arguments Value Summary -h, --help Show this help message and exit 71.3. service list List services Usage: Table 71.9. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 71.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 71.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 71.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.4. service provider create Create new service provider Usage: Table 71.14. Positional arguments Value Summary <name> New service provider name (must be unique) Table 71.15. Command arguments Value Summary -h, --help Show this help message and exit --auth-url <auth-url> Authentication url of remote federated service provider (required) --description <description> New service provider description --service-provider-url <sp-url> A service url where saml assertions are being sent (required) --enable Enable the service provider (default) --disable Disable the service provider Table 71.16. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 71.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.5. service provider delete Delete service provider(s) Usage: Table 71.20. Positional arguments Value Summary <service-provider> Service provider(s) to delete Table 71.21. Command arguments Value Summary -h, --help Show this help message and exit 71.6. service provider list List service providers Usage: Table 71.22. Command arguments Value Summary -h, --help Show this help message and exit Table 71.23. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 71.24. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 71.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.7. service provider set Set service provider properties Usage: Table 71.27. Positional arguments Value Summary <service-provider> Service provider to modify Table 71.28. Command arguments Value Summary -h, --help Show this help message and exit --auth-url <auth-url> New authentication url of remote federated service provider --description <description> New service provider description --service-provider-url <sp-url> New service provider url, where saml assertions are sent --enable Enable the service provider --disable Disable the service provider 71.8. service provider show Display service provider details Usage: Table 71.29. Positional arguments Value Summary <service-provider> Service provider to display Table 71.30. Command arguments Value Summary -h, --help Show this help message and exit Table 71.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 71.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.33. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.9. service set Set service properties Usage: Table 71.35. Positional arguments Value Summary <service> Service to modify (type, name or id) Table 71.36. Command arguments Value Summary -h, --help Show this help message and exit --type <type> New service type (compute, image, identity, volume, etc) --name <service-name> New service name --description <description> New service description --enable Enable service --disable Disable service 71.10. service show Display service details Usage: Table 71.37. Positional arguments Value Summary <service> Service to display (type, name or id) Table 71.38. Command arguments Value Summary -h, --help Show this help message and exit Table 71.39. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 71.40. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.41. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.42. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
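As an illustrative sketch of how these subcommands are typically combined when registering a service with the Identity service, the service name and type below (a Compute service named nova ) are common examples rather than required values:
openstack service create --name nova --description "OpenStack Compute" compute
openstack service list --long
openstack service show compute
openstack service set --description "Compute service" compute
openstack service delete compute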
[ "openstack service create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] [--enable | --disable] <type>", "openstack service delete [-h] <service> [<service> ...]", "openstack service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long]", "openstack service provider create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --auth-url <auth-url> [--description <description>] --service-provider-url <sp-url> [--enable | --disable] <name>", "openstack service provider delete [-h] <service-provider> [<service-provider> ...]", "openstack service provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack service provider set [-h] [--auth-url <auth-url>] [--description <description>] [--service-provider-url <sp-url>] [--enable | --disable] <service-provider>", "openstack service provider show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <service-provider>", "openstack service set [-h] [--type <type>] [--name <service-name>] [--description <description>] [--enable | --disable] <service>", "openstack service show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <service>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/service
function::atomic_long_read
function::atomic_long_read Name function::atomic_long_read - Retrieves an atomic long variable from kernel memory Synopsis Arguments addr pointer to atomic long variable Description Safely performs the read of an atomic long variable. This is a NOP on kernels that do not have ATOMIC_LONG_INIT set in the kernel config.
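A minimal SystemTap sketch of how this function might be called from a probe. The probe point and the f_count field are illustrative assumptions (structure layouts and probe availability vary between kernel versions); substitute an atomic long variable that exists in your own probe context:
probe kernel.function("vfs_read") {
  # f_count is assumed to be an atomic_long_t member of struct file on this kernel
  printf("file reference count: %d\n", atomic_long_read(&$file->f_count))
  exit()
}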
[ "atomic_long_read:long(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-atomic-long-read
Appendix A. Mod_proxy connector modules
Appendix A. Mod_proxy connector modules The mod_proxy connector comprises a set of standard Apache HTTP Server modules. These modules enable the Apache HTTP Server to act as a proxy/gateway for sending web traffic between web clients and back-end servers over different types of protocols. This appendix describes the modules that the mod_proxy connector uses. A.1. Mod_proxy.so module The mod_proxy.so module is a standard Apache HTTP Server module that enables the server to act as a proxy for data transferred over the AJP (Apache JServe Protocol), FTP, CONNECT (for SSL), and HTTP protocols. The mod_proxy module does not require additional configuration. The identifier for the mod_proxy module is proxy_module . Additional resources Apache Module mod_proxy A.2. Mod_proxy_ajp.so module The mod_proxy_ajp.so module is a standard Apache HTTP Server module that provides support for Apache JServ Protocol (AJP) proxying. By using the mod_proxy_ajp module, the Apache HTTP Server acts as an intermediary for sending AJP requests and responses between web clients and back-end servers. AJP is a clear-text protocol that does not support data encryption. The mod_proxy module is also required if you want to use mod_proxy_ajp . The identifier for the mod_proxy_ajp module is proxy_ajp_module . Additionally, the secret property is required when using the Tomcat AJP Connector. You can add the secret property to the ProxyPass settings by using the following command: ProxyPass /example/ ajp://localhost:8009/example/ secret=YOUR_AJP_SECRET Note If you set a secret on a load balancer, all of its members inherit this secret . The mod_proxy_ajp module does not provide any configuration directives. Additional resources Apache Module mod_proxy_ajp A.3. Mod_proxy_http.so module The mod_proxy_http.so module is a standard Apache HTTP Server module that provides support for Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol Secure (HTTPS) proxying. By using the mod_proxy_http module, the Apache HTTP Server acts as an intermediary for forwarding HTTP or HTTPS requests between web clients and back-end servers. The mod_proxy_http module supports HTTP/1.1 and earlier versions of the HTTP protocol. The mod_proxy module is also required if you want to use mod_proxy_http . The identifier for the mod_proxy_http module is proxy_http_module . The mod_proxy_http module does not provide any configuration directives. Along with the configuration that controls the behavior of the mod_proxy module, the mod_proxy_http module uses a series of environment variables that control the behavior of the HTTP protocol provider. Additional resources Apache Module mod_proxy_http A.4. Mod_proxy_http2.so module The mod_proxy_http2.so module is a standard Apache HTTP Server module that provides support for Hypertext Transfer Protocol 2.0 (HTTP/2) proxying. By using the mod_proxy_http2 module, the Apache HTTP Server acts as an intermediary for forwarding HTTP/2 requests between web clients and back-end servers. The mod_proxy_http2 module supports client requests that use HTTP/1.1 or HTTP/2 as a communication protocol. However, the mod_proxy_http2 module requires that all communication between the Apache HTTP Server and the back-end server uses HTTP/2 only. For client requests that have the same back-end destination, the Apache HTTP Server reuses the same TCP connection whenever possible. 
However, even if you want to forward multiple client requests to the same back end, the Apache HTTP Server forwards a separate HTTP/2 proxy request for each HTTP/1.1 client request. The mod_proxy module is also required if you want to use mod_proxy_http2 . The identifier for the mod_proxy_http2 module is proxy_http2_module . The mod_proxy_http2 module does not provide any configuration directives. Note The mod_proxy_http2 module is an experimental Apache feature that requires use of the libnghttp2 library for the core HTTP/2 engine. Additional resources Enabling HTTP/2 for the JBCS Apache HTTP Server Apache Module mod_proxy_http2
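As a hedged illustration of how these modules are typically combined in an httpd configuration, the following sketch proxies one path over AJP and another over HTTP. The host names, paths, and secret value are placeholders, and your installation may already load the modules through its packaged configuration files:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
ProxyPass /app/ ajp://backend1.example.com:8009/app/ secret=YOUR_AJP_SECRET
ProxyPass /api/ http://backend2.example.com:8080/api/
ProxyPassReverse /api/ http://backend2.example.com:8080/api/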
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/apache_http_server_connectors_and_load_balancing_guide/assembly_mod-proxy-modules
Chapter 75. KafkaClientAuthenticationPlain schema reference
Chapter 75. KafkaClientAuthenticationPlain schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationPlain schema properties To configure SASL-based PLAIN authentication, set the type property to plain . SASL PLAIN authentication mechanism requires a username and password. Warning The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled. 75.1. username Specify the username in the username property. 75.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for PLAIN client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. An example SASL based PLAIN client authentication configuration authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name 75.3. KafkaClientAuthenticationPlain schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationPlain type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationOAuth . It must have the value plain for the type KafkaClientAuthenticationPlain . Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be plain . string username Username used for the authentication. string
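To show where this authentication block sits in practice, the following is a minimal sketch of a KafkaConnect resource that references the Secret above. The resource name, replica count, bootstrap address, and API version are illustrative assumptions that must match your own cluster, and the TLS configuration required for safe use of PLAIN is omitted here for brevity:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  authentication:
    type: plain
    username: my-connect-username
    passwordSecret:
      secretName: my-connect-secret-name
      password: my-password-field-name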
[ "echo -n PASSWORD > MY-PASSWORD .txt", "create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt", "apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm", "authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaclientauthenticationplain-reference
15.2. Migration Requirements and Limitations
15.2. Migration Requirements and Limitations Before using KVM migration, make sure that your system fulfills the migration's requirements, and that you are aware of its limitations. Migration requirements A guest virtual machine installed on shared storage using one of the following protocols: Fibre Channel-based LUNs iSCSI NFS GFS2 SCSI RDMA protocols (SCSI RCP): the block export protocol used in Infiniband and 10GbE iWARP adapters Make sure that the libvirtd service is enabled and running. The ability to migrate effectively is dependent on the parameter settings in the /etc/libvirt/libvirtd.conf file. To edit this file, use the following procedure: Procedure 15.1. Configuring libvirtd.conf Opening the libvirtd.conf file requires running the command as root: Change the parameters as needed and save the file. Restart the libvirtd service: The migration platforms and versions should be checked against Table 15.1, "Live Migration Compatibility" . Use a separate system exporting the shared storage medium. Storage should not reside on either of the two host physical machines used for the migration. Shared storage must mount at the same location on source and destination systems. The mounted directory names must be identical. Although it is possible to keep the images using different paths, it is not recommended. Note that, if you intend to use virt-manager to perform the migration, the path names must be identical. If you intend to use virsh to perform the migration, different network configurations and mount directories can be used with the help of the --xml option or pre-hooks . For more information on pre-hooks, see the libvirt upstream documentation , and for more information on the XML option, see Chapter 23, Manipulating the Domain XML . When migration is attempted on an existing guest virtual machine in a public bridge+tap network, the source and destination host machines must be located on the same network. Otherwise, the guest virtual machine network will not operate after migration. Migration Limitations Guest virtual machine migration has the following limitations when used on Red Hat Enterprise Linux with virtualization technology based on KVM: Point-to-point migration - must be done manually to designate the destination hypervisor from the originating hypervisor No validation or roll-back is available Determination of target may only be done manually Storage migration cannot be performed live on Red Hat Enterprise Linux 7 , but you can migrate storage while the guest virtual machine is powered down. Live storage migration is available on Red Hat Virtualization . Call your service representative for details. Note If you are migrating a guest machine that has virtio devices on it, make sure to set the number of vectors on any virtio device on either platform to 32 or fewer. For detailed information, see Section 23.17, "Devices" .
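As a brief sketch of what a live migration that meets these requirements might look like with virsh, where the guest name and destination host are placeholders and the shared storage is assumed to be mounted identically on both hosts:
virsh migrate --live --verbose guest1-rhel7-64 qemu+ssh://destination.example.com/system
virsh list --all
Running virsh list --all on the destination host afterwards confirms that the guest is now running there.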
[ "systemctl enable libvirtd.service systemctl restart libvirtd.service", "vim /etc/libvirt/libvirtd.conf", "systemctl restart libvirtd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-KVM_live_migration-Live_migration_requirements
Chapter 11. DNSRecord [ingress.operator.openshift.io/v1]
Chapter 11. DNSRecord [ingress.operator.openshift.io/v1] Description DNSRecord is a DNS record managed in the zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Cluster admin manipulation of this resource is not supported. This resource is only for internal communication of OpenShift operators. If DNSManagementPolicy is "Unmanaged", the operator will not be responsible for managing the DNS records on the cloud provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the dnsRecord. status object status is the most recently observed status of the dnsRecord. 11.1.1. .spec Description spec is the specification of the desired behavior of the dnsRecord. Type object Required dnsManagementPolicy dnsName recordTTL recordType targets Property Type Description dnsManagementPolicy string dnsManagementPolicy denotes the current policy applied on the DNS record. Records that have policy set as "Unmanaged" are ignored by the ingress operator. This means that the DNS record on the cloud provider is not managed by the operator, and the "Published" status condition will be updated to "Unknown" status, since it is externally managed. Any existing record on the cloud provider can be deleted at the discretion of the cluster admin. This field defaults to Managed. Valid values are "Managed" and "Unmanaged". dnsName string dnsName is the hostname of the DNS record recordTTL integer recordTTL is the record TTL in seconds. If zero, the default is 30. RecordTTL will not be used in AWS regions Alias targets, but will be used in CNAME targets, per AWS API contract. recordType string recordType is the DNS record type. For example, "A" or "CNAME". targets array (string) targets are record targets. 11.1.2. .status Description status is the most recently observed status of the dnsRecord. Type object Property Type Description observedGeneration integer observedGeneration is the most recently observed generation of the DNSRecord. When the DNSRecord is updated, the controller updates the corresponding record in each managed zone. If an update for a particular zone fails, that failure is recorded in the status condition for the zone so that the controller can determine that it needs to retry the update for that specific zone. zones array zones are the status of the record in each zone. zones[] object DNSZoneStatus is the status of a record within a specific zone. 11.1.3. .status.zones Description zones are the status of the record in each zone. Type array 11.1.4. 
.status.zones[] Description DNSZoneStatus is the status of a record within a specific zone. Type object Property Type Description conditions array conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. conditions[] object DNSZoneCondition is just the standard condition fields. dnsZone object dnsZone is the zone where the record is published. 11.1.5. .status.zones[].conditions Description conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. Type array 11.1.6. .status.zones[].conditions[] Description DNSZoneCondition is just the standard condition fields. Type object Required status type Property Type Description lastTransitionTime string message string reason string status string type string 11.1.7. .status.zones[].dnsZone Description dnsZone is the zone where the record is published. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 11.2. API endpoints The following API endpoints are available: /apis/ingress.operator.openshift.io/v1/dnsrecords GET : list objects of kind DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords DELETE : delete collection of DNSRecord GET : list objects of kind DNSRecord POST : create a DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} DELETE : delete a DNSRecord GET : read the specified DNSRecord PATCH : partially update the specified DNSRecord PUT : replace the specified DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status GET : read status of the specified DNSRecord PATCH : partially update status of the specified DNSRecord PUT : replace status of the specified DNSRecord 11.2.1. /apis/ingress.operator.openshift.io/v1/dnsrecords Table 11.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind DNSRecord Table 11.2. HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty 11.2.2. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords Table 11.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 11.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of DNSRecord Table 11.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNSRecord Table 11.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.8. HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty HTTP method POST Description create a DNSRecord Table 11.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.10. Body parameters Parameter Type Description body DNSRecord schema Table 11.11. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 202 - Accepted DNSRecord schema 401 - Unauthorized Empty 11.2.3. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} Table 11.12. Global path parameters Parameter Type Description name string name of the DNSRecord namespace string object name and auth scope, such as for teams and projects Table 11.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a DNSRecord Table 11.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 11.15. Body parameters Parameter Type Description body DeleteOptions schema Table 11.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNSRecord Table 11.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.18. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNSRecord Table 11.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.20. Body parameters Parameter Type Description body Patch schema Table 11.21. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNSRecord Table 11.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.23. Body parameters Parameter Type Description body DNSRecord schema Table 11.24. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty 11.2.4. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status Table 11.25. Global path parameters Parameter Type Description name string name of the DNSRecord namespace string object name and auth scope, such as for teams and projects Table 11.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified DNSRecord Table 11.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.28. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNSRecord Table 11.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.30. Body parameters Parameter Type Description body Patch schema Table 11.31. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNSRecord Table 11.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.33. Body parameters Parameter Type Description body DNSRecord schema Table 11.34. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty
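The endpoints above can also be exercised through the oc client instead of raw HTTP. The following is a minimal sketch; the namespace openshift-ingress-operator and the record name default-wildcard are illustrative assumptions, so substitute the values from your own cluster:

# List DNSRecord resources (the GET on the collection endpoint)
oc get dnsrecords.ingress.operator.openshift.io -n openshift-ingress-operator

# Read a single DNSRecord, including its status
oc get dnsrecords.ingress.operator.openshift.io default-wildcard -n openshift-ingress-operator -o yaml

# Apply a merge PATCH to the resource (the PATCH verb documented above)
oc patch dnsrecords.ingress.operator.openshift.io default-wildcard -n openshift-ingress-operator --type=merge -p '{"metadata":{"labels":{"example":"true"}}}'

# Call the documented REST path directly
oc get --raw /apis/ingress.operator.openshift.io/v1/namespaces/openshift-ingress-operator/dnsrecords

Note that DNSRecord resources are normally created and reconciled by the Ingress Operator, so manual modifications may be overwritten.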
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/dnsrecord-ingress-operator-openshift-io-v1
Chapter 2. Installation
Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.5 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 subscriptions listed at https://access.redhat.com/solutions/472793 . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using and Configuring Red Hat Subscription Manager . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is available also in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note that packages that require the Optional repository, which are listed in Section 2.1.2, "Packages from the Optional Repository" , cannot be installed from the ISO image. Note Packages that require the Optional repository cannot be installed from the ISO image. A list of packages that require enabling of the Optional repository is provided in Section 2.1.2, "Packages from the Optional Repository" . Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the previous step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Or alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" . For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using and Configuring Red Hat Subscription Manager .
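Taken together, the steps in this procedure amount to the following shell session, run as root. The pool ID is a placeholder and the repository name assumes the server variant on Red Hat Enterprise Linux 7; substitute the values reported on your own system:

subscription-manager list --available        # note the Pool Id of the subscription that provides Red Hat Software Collections
subscription-manager attach --pool=<pool_id>
subscription-manager list --consumed         # confirm the subscription is attached
subscription-manager repos --list            # find the exact rhscl repository name
subscription-manager repos --enable rhel-server-rhscl-7-rpms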
Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Repository Some of the Red Hat Software Collections packages require the Optional repository to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this repository, see the relevant Knowledgebase article at https://access.redhat.com/solutions/392003 . Packages from Software Collections for Red Hat Enterprise Linux that require the Optional repository to be enabled are listed in the tables below. Note that packages from the Optional repository are unsupported. For details, see the Knowledgebase article at https://access.redhat.com/articles/1150793 . Table 2.1. Packages That Require Enabling of the Optional Repository in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Repository devtoolset-8-build scl-utils-build devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-gcc-plugin-devel libmpc-devel devtoolset-9-build scl-utils-build devtoolset-9-dyninst-testsuite glibc-static devtoolset-9-gcc-plugin-devel libmpc-devel devtoolset-9-gdb source-highlight httpd24-mod_ldap apr-util-ldap httpd24-mod_session apr-util-openssl rh-git218-git-cvs cvsps rh-git218-git-svn subversion-perl rh-git218-perl-Git-SVN subversion-perl rh-maven35-xpp3-javadoc java-11-openjdk-javadoc rh-php72-php-pspell aspell rh-php73-php-devel pcre2-devel rh-php73-php-pspell aspell rh-python36-python-devel scl-utils-build rh-python36-python-sphinx texlive-threeparttable,texlive-wrapfig,texlive-titlesec,texlive-framed rh-python38-python-devel scl-utils-build Table 2.2. Packages That Require Enabling of the Optional Repository in Red Hat Enterprise Linux 6 Package from a Software Collection Required Package from the Optional Repository devtoolset-8-build scl-utils-build devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-elfutils-devel xz-devel devtoolset-8-gcc-plugin-devel gmp-devel,mpfr-devel devtoolset-8-libatomic-devel libatomic devtoolset-8-libgccjit mpfr rh-mariadb102-mariadb-bench perl-GD rh-mongodb34-boost-devel libicu-devel rh-python36-python-devel scl-utils-build 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.5 requires the removal of any earlier pre-release versions. If you have installed any version of Red Hat Software Collections 2.1 component, uninstall it from your system and install the new version as described in the Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" sections. The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. 
If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install php54 and rh-mariadb100 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl526-perl-CPAN and rh-perl526-perl-Archive-Tar , type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby25-ruby package, type: Note that you need to have access to the repository with these packages. If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see https://access.redhat.com/solutions/9907 . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default. If you wish to rebuild a collection and do not want or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. 
For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide .
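As a brief illustration of the rpmbuild route mentioned in Section 2.4 (the collection name mycollection and the spec file name are hypothetical), a Software Collection-enabled spec file can be rebuilt by defining the scl macro on the command line:

rpmbuild -ba --define 'scl mycollection' mypackage.spec

For the complete workflow, including rebuilding the metapackage that provides the <collection>-build package, see the Packaging Guide referenced above.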
[ "rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms>", "~]# yum install rh-php72 rh-mariadb102", "~]# yum install rh-perl526-perl-CPAN rh-perl526-perl-Archive-Tar", "~]# debuginfo-install rh-ruby25-ruby" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.5_release_notes/chap-Installation
Preface
Preface Depending on the type of your deployment, you can choose one of the following procedures to replace a storage device: For dynamically created storage clusters deployed on AWS, see: Section 1.1, "Replacing operational or failed storage devices on AWS user-provisioned infrastructure" . Section 1.2, "Replacing operational or failed storage devices on AWS installer-provisioned infrastructure" . For dynamically created storage clusters deployed on VMware, see Section 2.1, "Replacing operational or failed storage devices on VMware infrastructure" . For dynamically created storage clusters deployed on Red Hat Virtualization, see Section 3.1, "Replacing operational or failed storage devices on Red Hat Virtualization installer-provisioned infrastructure" . For dynamically created storage clusters deployed on Microsoft Azure, see Section 4.1, "Replacing operational or failed storage devices on Azure installer-provisioned infrastructure" . For storage clusters deployed using local storage devices, see: Section 5.1, "Replacing operational or failed storage devices on clusters backed by local storage devices" . Section 5.2, "Replacing operational or failed storage devices on IBM Power" . Section 5.3, "Replacing operational or failed storage devices on IBM Z or IBM LinuxONE infrastructure" . Note OpenShift Data Foundation does not support heterogeneous OSD sizes.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/replacing_devices/preface-replacing-devices
Chapter 17. kubernetes
Chapter 17. kubernetes The namespace for Kubernetes-specific metadata Data type group 17.1. kubernetes.pod_name The name of the pod Data type keyword 17.2. kubernetes.pod_id The Kubernetes ID of the pod Data type keyword 17.3. kubernetes.namespace_name The name of the namespace in Kubernetes Data type keyword 17.4. kubernetes.namespace_id The ID of the namespace in Kubernetes Data type keyword 17.5. kubernetes.host The Kubernetes node name Data type keyword 17.6. kubernetes.container_name The name of the container in Kubernetes Data type keyword 17.7. kubernetes.annotations Annotations associated with the Kubernetes object Data type group 17.8. kubernetes.labels Labels present on the original Kubernetes Pod Data type group 17.9. kubernetes.event The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows type Event in Event v1 core . Data type group 17.9.1. kubernetes.event.verb The type of event, ADDED , MODIFIED , or DELETED Data type keyword Example value ADDED 17.9.2. kubernetes.event.metadata Information related to the location and time of the event creation Data type group 17.9.2.1. kubernetes.event.metadata.name The name of the object that triggered the event creation Data type keyword Example value java-mainclass-1.14d888a4cfc24890 17.9.2.2. kubernetes.event.metadata.namespace The name of the namespace where the event originally occurred. Note that it differs from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 17.9.2.3. kubernetes.event.metadata.selfLink A link to the event Data type keyword Example value /api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890 17.9.2.4. kubernetes.event.metadata.uid The unique ID of the event Data type keyword Example value d828ac69-7b58-11e7-9cf5-5254002f560c 17.9.2.5. kubernetes.event.metadata.resourceVersion A string that identifies the server's internal version of the event. Clients can use this string to determine when objects have changed. Data type integer Example value 311987 17.9.3. kubernetes.event.involvedObject The object that the event is about. Data type group 17.9.3.1. kubernetes.event.involvedObject.kind The type of object Data type keyword Example value ReplicationController 17.9.3.2. kubernetes.event.involvedObject.namespace The namespace name of the involved object. Note that it may differ from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 17.9.3.3. kubernetes.event.involvedObject.name The name of the object that triggered the event Data type keyword Example value java-mainclass-1 17.9.3.4. kubernetes.event.involvedObject.uid The unique ID of the object Data type keyword Example value e6bff941-76a8-11e7-8193-5254002f560c 17.9.3.5. kubernetes.event.involvedObject.apiVersion The version of kubernetes master API Data type keyword Example value v1 17.9.3.6. kubernetes.event.involvedObject.resourceVersion A string that identifies the server's internal version of the pod that triggered the event. Clients can use this string to determine when objects have changed. Data type keyword Example value 308882 17.9.4. kubernetes.event.reason A short machine-understandable string that gives the reason for generating this event Data type keyword Example value SuccessfulCreate 17.9.5. kubernetes.event.source_component The component that reported this event Data type keyword Example value replication-controller 17.9.6. 
kubernetes.event.firstTimestamp The time at which the event was first recorded Data type date Example value 2017-08-07 10:11:57.000000000 Z 17.9.7. kubernetes.event.count The number of times this event has occurred Data type integer Example value 1 17.9.8. kubernetes.event.type The type of event, Normal or Warning . New types could be added in the future. Data type keyword Example value Normal
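For orientation, a single exported record that carries these fields might look like the following sketch. Only the field names are taken from this chapter; all values are hypothetical:

{
  "kubernetes": {
    "pod_name": "eventrouter-1-abcde",
    "namespace_name": "openshift-logging",
    "host": "worker-0.example.com",
    "container_name": "eventrouter",
    "labels": { "component": "eventrouter" },
    "event": {
      "verb": "ADDED",
      "reason": "SuccessfulCreate",
      "type": "Normal",
      "count": 1,
      "involvedObject": {
        "kind": "ReplicationController",
        "namespace": "default",
        "name": "java-mainclass-1"
      }
    }
  }
}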
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_javascript_client/making-open-source-more-inclusive
Chapter 11. Managing wifi connections
Chapter 11. Managing wifi connections RHEL provides multiple utilities and applications to configure and connect to wifi networks, for example: Use the nmcli utility to configure connections by using the command line. Use the nmtui application to configure connections in a text-based user interface. Use the GNOME system menu to quickly connect to wifi networks that do not require any configuration. Use the GNOME Settings application to configure connections by using the GNOME application. Use the nm-connection-editor application to configure connections in a graphical user interface. Use the network RHEL system role to automate the configuration of connections on one or multiple hosts. 11.1. Supported wifi security types Depending on the security type a wifi network supports, you can transmit data more or less securely. Warning Do not connect to wifi networks that do not use encryption or which support only the insecure WEP or WPA standards. RHEL 9 supports the following wifi security types: None : Encryption is disabled, and data is transferred in plain text over the network. Enhanced Open : With opportunistic wireless encryption (OWE), devices negotiate unique pairwise master keys (PMK) to encrypt connections in wireless networks without authentication. LEAP : The Lightweight Extensible Authentication Protocol, which was developed by Cisco, is a proprietary version of the extensible authentication protocol (EAP). WPA & WPA2 Personal : In personal mode, the Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access 2 (WPA2) authentication methods use a pre-shared key. WPA & WPA2 Enterprise : In enterprise mode, WPA and WPA2 use the EAP framework and authenticate users to a remote authentication dial-in user service (RADIUS) server. WPA3 Personal : Wi-Fi Protected Access 3 (WPA3) Personal uses simultaneous authentication of equals (SAE) instead of pre-shared keys (PSK) to prevent dictionary attacks. WPA3 uses perfect forward secrecy (PFS). 11.2. Connecting to a wifi network by using nmcli You can use the nmcli utility to connect to a wifi network. When you attempt to connect to a network for the first time, the utility automatically creates a NetworkManager connection profile for it. If the network requires additional settings, such as static IP addresses, you can then modify the profile after it has been automatically created. Prerequisites A wifi device is installed on the host. The wifi device is enabled. To verify, use the nmcli radio command. Procedure If the wifi radio has been disabled in NetworkManager, enable this feature: Optional: Display the available wifi networks: The service set identifier ( SSID ) column contains the names of the networks. If the column shows -- , the access point of this network does not broadcast an SSID. Connect to the wifi network: If you prefer to set the password in the command instead of entering it interactively, use the password <wifi_password> option in the command instead of --ask : Note that, if the network requires static IP addresses, NetworkManager fails to activate the connection at this point. You can configure the IP addresses in later steps. If the network requires static IP addresses: Configure the IPv4 address settings, for example: Configure the IPv6 address settings, for example: Re-activate the connection: Verification Display the active connections: If the output lists the wifi connection you have created, the connection is active. Ping a hostname or IP address: Additional resources nm-settings-nmcli(5) man page on your system 11.3. 
Connecting to a wifi network by using the GNOME system menu You can use the GNOME system menu to connect to a wifi network. When you connect to a network for the first time, GNOME creates a NetworkManager connection profile for it. If you configure the connection profile to not automatically connect, you can also use the GNOME system menu to manually connect to a wifi network with an existing NetworkManager connection profile. Note Using the GNOME system menu to establish a connection to a wifi network for the first time has certain limitations. For example, you cannot configure IP address settings. In this case, first configure the connections: In the GNOME settings application In the nm-connection-editor application Using nmcli commands Prerequisites A wifi device is installed on the host. The wifi device is enabled. To verify, use the nmcli radio command. Procedure Open the system menu on the right side of the top bar. Expand the Wi-Fi Not Connected entry. Click Select Network : Select the wifi network you want to connect to. Click Connect . If this is the first time you connect to this network, enter the password for the network, and click Connect . Verification Open the system menu on the right side of the top bar, and verify that the wifi network is connected: If the network appears in the list, it is connected. Ping a hostname or IP address: 11.4. Connecting to a wifi network by using the GNOME settings application You can use the GNOME settings application, also named gnome-control-center , to connect to a wifi network and configure the connection. When you connect to the network for the first time, GNOME creates a NetworkManager connection profile for it. In GNOME settings , you can configure wifi connections for all wifi network security types that RHEL supports. Prerequisites A wifi device is installed on the host. The wifi device is enabled. To verify, use the nmcli radio command. Procedure Press the Super key, type Wi-Fi , and press Enter . Click on the name of the wifi network you want to connect to. Enter the password for the network, and click Connect . If the network requires additional settings, such as static IP addresses or a security type other than WPA2 Personal: Click the gear icon next to the network's name. Optional: Configure the network profile on the Details tab to not automatically connect. If you deactivate this feature, you must always manually connect to the network, for example, by using GNOME settings or the GNOME system menu. Configure IPv4 settings on the IPv4 tab, and IPv6 settings on the IPv6 tab. On the Security tab, select the authentication of the network, such as WPA3 Personal , and enter the password. Depending on the selected security, the application shows additional fields. Fill them accordingly. For details, ask the administrator of the wifi network. Click Apply . Verification Open the system menu on the right side of the top bar, and verify that the wifi network is connected: If the network appears in the list, it is connected. Ping a hostname or IP address: 11.5. Configuring a wifi connection by using nmtui The nmtui application provides a text-based user interface for NetworkManager. You can use nmtui to connect to a wifi network. Note In nmtui : Navigate by using the cursor keys. Press a button by selecting it and hitting Enter . Select and clear checkboxes by using Space . To return to the previous screen, use ESC . Procedure Start nmtui : Select Edit a connection , and press Enter . Press the Add button.
Select Wi-Fi from the list of network types, and press Enter . Optional: Enter a name for the NetworkManager profile to be created. On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Enter the name of the Wi-Fi network, the Service Set Identifier (SSID), into the SSID field. Leave the Mode field set to its default, Client . Select the Security field, press Enter , and set the authentication type of the network from the list. Depending on the authentication type you have selected, nmtui displays different fields. Fill the authentication type-related fields. If the Wi-Fi network requires static IP addresses: Press the Automatic button next to the protocol, and select Manual from the displayed list. Press the Show button next to the protocol you want to configure to display additional fields, and fill them. Press the OK button to create and automatically activate the new connection. Press the Back button to return to the main menu. Select Quit , and press Enter to close the nmtui application. Verification Display the active connections: If the output lists the wifi connection you have created, the connection is active. Ping a hostname or IP address: 11.6. Configuring a wifi connection by using nm-connection-editor You can use the nm-connection-editor application to create a connection profile for a wireless network. In this application you can configure all wifi network authentication types that RHEL supports. By default, NetworkManager enables the auto-connect feature for connection profiles and automatically connects to a saved network if it is available. Prerequisites A wifi device is installed on the host. The wifi device is enabled. To verify, use the nmcli radio command. Procedure Open a terminal and enter: Click the + button to add a new connection. Select the Wi-Fi connection type, and click Create . Optional: Set a name for the connection profile. Optional: Configure the network profile on the General tab to not automatically connect. If you deactivate this feature, you must always manually connect to the network, for example, by using GNOME settings or the GNOME system menu. On the Wi-Fi tab, enter the service set identifier (SSID) in the SSID field. On the Wi-Fi Security tab, select the authentication type for the network, such as WPA3 Personal , and enter the password. Depending on the selected security, the application shows additional fields. Fill them accordingly. For details, ask the administrator of the wifi network. Configure IPv4 settings on the IPv4 tab, and IPv6 settings on the IPv6 tab. Click Save . Close the Network Connections window. Verification Open the system menu on the right side of the top bar, and verify that the wifi network is connected: If the network appears in the list, it is connected. Ping a hostname or IP address: 11.7. Configuring a wifi connection with 802.1X network authentication by using the network RHEL system role Network Access Control (NAC) protects a network from unauthorized clients. You can specify the details that are required for the authentication in NetworkManager connection profiles to enable clients to access the network. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You can use an Ansible playbook to copy a private key, a certificate, and the CA certificate to the client, and then use the network RHEL system role to configure a connection profile with 802.1X network authentication. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The network supports 802.1X network authentication. You installed the wpa_supplicant package on the managed node. DHCP is available in the network of the managed node. The following files required for TLS authentication exist on the control node: The client key is stored in the /srv/data/client.key file. The client certificate is stored in the /srv/data/client.crt file. The CA certificate is stored in the /srv/data/ca.crt file. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure a wifi connection with 802.1X authentication hosts: managed-node-01.example.com tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: "/srv/data/client.key" dest: "/etc/pki/tls/private/client.key" mode: 0400 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: "/srv/data/client.crt" dest: "/etc/pki/tls/certs/client.crt" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: "/srv/data/ca.crt" dest: "/etc/pki/ca-trust/source/anchors/ca.crt" - name: Wifi connection profile with dynamic IP address settings and 802.1X ansible.builtin.import_role: name: rhel-system-roles.network vars: network_connections: - name: Wifi connection profile with dynamic IP address settings and 802.1X interface_name: wlp1s0 state: up type: wireless autoconnect: yes ip: dhcp4: true auto6: true wireless: ssid: "Example-wifi" key_mgmt: "wpa-eap" ieee802_1x: identity: <user_name> eap: tls private_key: "/etc/pki/tls/client.key" private_key_password: "{{ pwd }}" private_key_password_flags: none client_cert: "/etc/pki/tls/client.pem" ca_cert: "/etc/pki/tls/cacert.pem" domain_suffix_match: "example.com" The settings specified in the example playbook include the following: ieee802_1x This variable contains the 802.1X-related settings. eap: tls Configures the profile to use the certificate-based TLS authentication method for the Extensible Authentication Protocol (EAP). For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 11.8. Configuring a wifi connection with 802.1X network authentication in an existing profile by using nmcli Using the nmcli utility, you can configure the client to authenticate itself to the network. 
For example, you can configure Protected Extensible Authentication Protocol (PEAP) authentication with the Microsoft Challenge-Handshake Authentication Protocol version 2 (MSCHAPv2) in an existing NetworkManager wifi connection profile named wlp1s0 . Prerequisites The network must have 802.1X network authentication. The wifi connection profile exists in NetworkManager and has a valid IP configuration. If the client is required to verify the certificate of the authenticator, the Certificate Authority (CA) certificate must be stored in the /etc/pki/ca-trust/source/anchors/ directory. The wpa_supplicant package is installed. Procedure Set the wifi security mode to wpa-eap , the Extensible Authentication Protocol (EAP) to peap , the inner authentication protocol to mschapv2 , and the user name: Note that you must set the wireless-security.key-mgmt , 802-1x.eap , 802-1x.phase2-auth , and 802-1x.identity parameters in a single command. Optional: Store the password in the configuration: Important By default, NetworkManager stores the password in plain text in the /etc/sysconfig/network-scripts/keys- connection_name file, which is readable only by the root user. However, plain text passwords in a configuration file can be a security risk. To increase the security, set the 802-1x.password-flags parameter to agent-owned . With this setting, on servers with the GNOME desktop environment or the nm-applet running, NetworkManager retrieves the password from these services, after you unlock the keyring. In other cases, NetworkManager prompts for the password. If the client needs to verify the certificate of the authenticator, set the 802-1x.ca-cert parameter in the connection profile to the path of the CA certificate: Note For security reasons, clients should validate the certificate of the authenticator. Activate the connection profile: Verification Access resources on the network that require network authentication. Additional resources Managing wifi connections nm-settings(5) and nmcli(1) man pages on your system 11.9. Manually setting the wireless regulatory domain On RHEL, a udev rule executes the setregdomain utility to set the wireless regulatory domain. The utility then provides this information to the kernel. By default, setregdomain attempts to determine the country code automatically. If this fails, the wireless regulatory domain setting might be wrong. To work around this problem, you can manually set the country code. Important Manually setting the regulatory domain disables the automatic detection. Therefore, if you later use the computer in a different country, the previously configured setting might no longer be correct. In this case, remove the /etc/sysconfig/regdomain file to switch back to automatic detection or use this procedure to manually update the regulatory domain setting again. Prerequisites The driver of the wifi device supports changing the regulatory domain. Procedure Optional: Display the current regulatory domain settings: Create the /etc/sysconfig/regdomain file with the following content: Set the COUNTRY variable to an ISO 3166-1 alpha2 country code, such as DE for Germany or US for the United States of America. Set the regulatory domain: Verification Display the regulatory domain settings: Additional resources iw(8) , setregdomain(1) , and regulatory.bin(5) man pages on your system ISO 3166 Country Codes
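As a compact recap of the manual regulatory-domain procedure above (DE is only an example country code), the whole sequence can be run as root in one short session:

# Check the current setting, write the override, apply it, and verify
iw reg get
echo 'COUNTRY=DE' > /etc/sysconfig/regdomain
setregdomain
iw reg get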
[ "nmcli radio wifi on", "nmcli device wifi list IN-USE BSSID SSID MODE CHAN RATE SIGNAL BARS SECURITY 00:53:00:2F:3B:08 Office Infra 44 270 Mbit/s 57 ▂▄▆_ WPA2 WPA3 00:53:00:15:03:BF -- Infra 1 130 Mbit/s 48 ▂▄__ WPA2 WPA3", "nmcli device wifi connect Office --ask Password: wifi-password", "nmcli device wifi connect Office password <wifi_password>", "nmcli connection modify Office ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200 ipv4.dns-search example.com", "nmcli connection modify Office ipv6.method manual ipv6.addresses 2001:db8:1::1/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::ffbb ipv6.dns-search example.com", "nmcli connection up Office", "nmcli connection show --active NAME ID TYPE DEVICE Office 2501eb7e-7b16-4dc6-97ef-7cc460139a58 wifi wlp0s20f3", "*ping -c 3 example.com", "ping -c 3 example.com", "ping -c 3 example.com", "nmtui", "nmcli connection show --active NAME ID TYPE DEVICE Office 2501eb7e-7b16-4dc6-97ef-7cc460139a58 wifi wlp0s20f3", "ping -c 3 example.com", "nm-connection-editor", "ping -c 3 example.com", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Configure a wifi connection with 802.1X authentication hosts: managed-node-01.example.com tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.key\" dest: \"/etc/pki/tls/private/client.key\" mode: 0400 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.crt\" dest: \"/etc/pki/tls/certs/client.crt\" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/ca.crt\" dest: \"/etc/pki/ca-trust/source/anchors/ca.crt\" - name: Wifi connection profile with dynamic IP address settings and 802.1X ansible.builtin.import_role: name: rhel-system-roles.network vars: network_connections: - name: Wifi connection profile with dynamic IP address settings and 802.1X interface_name: wlp1s0 state: up type: wireless autoconnect: yes ip: dhcp4: true auto6: true wireless: ssid: \"Example-wifi\" key_mgmt: \"wpa-eap\" ieee802_1x: identity: <user_name> eap: tls private_key: \"/etc/pki/tls/client.key\" private_key_password: \"{{ pwd }}\" private_key_password_flags: none client_cert: \"/etc/pki/tls/client.pem\" ca_cert: \"/etc/pki/tls/cacert.pem\" domain_suffix_match: \"example.com\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "nmcli connection modify wlp1s0 wireless-security.key-mgmt wpa-eap 802-1x.eap peap 802-1x.phase2-auth mschapv2 802-1x.identity user_name", "nmcli connection modify wlp1s0 802-1x.password password", "nmcli connection modify wlp1s0 802-1x.ca-cert /etc/pki/ca-trust/source/anchors/ca.crt", "nmcli connection up wlp1s0", "iw reg get global country US: DFS-FCC", "COUNTRY= <country_code>", "setregdomain", "iw reg get global country DE: DFS-ETSI" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/assembly_managing-wifi-connections_configuring-and-managing-networking
probe::scheduler.signal_send
probe::scheduler.signal_send Name probe::scheduler.signal_send - Sending a signal Synopsis scheduler.signal_send Values pid pid of the process sending signal name name of the probe point signal_number signal number
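A minimal illustrative use of this probe point, assuming the SystemTap runtime and matching kernel debuginfo are installed (the output format is arbitrary):

# Trace signals as they are sent, using the values documented above
stap -e 'probe scheduler.signal_send { printf("%s: pid %d sent signal %d\n", name, pid, signal_number) }'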
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-signal-send
Chapter 6. File Integrity Operator
Chapter 6. File Integrity Operator 6.1. File Integrity Operator overview The File Integrity Operator continually runs file integrity checks on the cluster nodes. It deploys a DaemonSet that initializes and runs privileged Advanced Intrusion Detection Environment (AIDE) containers on each node, providing a log of files that have been modified since the initial run of the DaemonSet pods. For the latest updates, see the File Integrity Operator release notes . Installing the File Integrity Operator Updating the File Integrity Operator Understanding the File Integrity Operator Configuring the Custom File Integrity Operator Performing advanced Custom File Integrity Operator tasks Troubleshooting the File Integrity Operator 6.2. File Integrity Operator release notes The File Integrity Operator for OpenShift Container Platform continually runs file integrity checks on RHCOS nodes. These release notes track the development of the File Integrity Operator in the OpenShift Container Platform. For an overview of the File Integrity Operator, see Understanding the File Integrity Operator . To access the latest release, see Updating the File Integrity Operator . 6.2.1. OpenShift File Integrity Operator 1.3.5 The following advisory is available for the OpenShift File Integrity Operator 1.3.5: RHBA-2024:10366 OpenShift File Integrity Operator Update This update includes upgraded dependencies in underlying base images. 6.2.2. OpenShift File Integrity Operator 1.3.4 The following advisory is available for the OpenShift File Integrity Operator 1.3.4: RHBA-2024:2946 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.2.1. Bug fixes Previously, File Integrity Operator would issue a NodeHasIntegrityFailure alert due to multus certificate rotation. With this release, the alert and failing status are now correctly triggered. ( OCPBUGS-31257 ) 6.2.3. OpenShift File Integrity Operator 1.3.3 The following advisory is available for the OpenShift File Integrity Operator 1.3.3: RHBA-2023:5652 OpenShift File Integrity Operator Bug Fix and Enhancement Update This update addresses a CVE in an underlying dependency. 6.2.3.1. New features and enhancements You can install and use the File Integrity Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see ( Installing the system in FIPS mode ) 6.2.3.2. Bug fixes Previously, some FIO pods with private default mount propagation in combination with hostPath: path: / volume mounts would break the CSI driver relying on multipath. This problem has been fixed and the CSI driver works correctly. ( Some OpenShift Operator pods blocking unmounting of CSI volumes when multipath is in use ) This update resolves CVE-2023-39325. ( CVE-2023-39325 ) 6.2.4. OpenShift File Integrity Operator 1.3.2 The following advisory is available for the OpenShift File Integrity Operator 1.3.2: RHBA-2023:5107 OpenShift File Integrity Operator Bug Fix Update This update addresses a CVE in an underlying dependency. 6.2.5. OpenShift File Integrity Operator 1.3.1 The following advisory is available for the OpenShift File Integrity Operator 1.3.1: RHBA-2023:3600 OpenShift File Integrity Operator Bug Fix Update 6.2.5.1. 
New features and enhancements FIO now includes kubelet certificates as default files, excluding them from issuing warnings when they're managed by OpenShift Container Platform. ( OCPBUGS-14348 ) FIO now correctly directs email to the address for Red Hat Technical Support. ( OCPBUGS-5023 ) 6.2.5.2. Bug fixes Previously, FIO would not clean up FileIntegrityNodeStatus CRDs when nodes are removed from the cluster. FIO has been updated to correctly clean up node status CRDs on node removal. ( OCPBUGS-4321 ) Previously, FIO would also erroneously indicate that new nodes failed integrity checks. FIO has been updated to correctly show node status CRDs when adding new nodes to the cluster. This provides correct node status notifications. ( OCPBUGS-8502 ) Previously, when FIO was reconciling FileIntegrity CRDs, it would pause scanning until the reconciliation was done. This caused an overly aggressive re-initiatization process on nodes not impacted by the reconciliation. This problem also resulted in unnecessary daemonsets for machine config pools which are unrelated to the FileIntegrity being changed. FIO correctly handles these cases and only pauses AIDE scanning for nodes that are affected by file integrity changes. ( CMP-1097 ) 6.2.5.3. Known Issues In FIO 1.3.1, increasing nodes in IBM Z clusters might result in Failed File Integrity node status. For more information, see Adding nodes in IBM Power clusters can result in failed File Integrity node status . 6.2.6. OpenShift File Integrity Operator 1.2.1 The following advisory is available for the OpenShift File Integrity Operator 1.2.1: RHBA-2023:1684 OpenShift File Integrity Operator Bug Fix Update This release includes updated container dependencies. 6.2.7. OpenShift File Integrity Operator 1.2.0 The following advisory is available for the OpenShift File Integrity Operator 1.2.0: RHBA-2023:1273 OpenShift File Integrity Operator Enhancement Update 6.2.7.1. New features and enhancements The File Integrity Operator Custom Resource (CR) now contains an initialDelay feature that specifies the number of seconds to wait before starting the first AIDE integrity check. For more information, see Creating the FileIntegrity custom resource . The File Integrity Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the File Integrity Operator . 6.2.8. OpenShift File Integrity Operator 1.0.0 The following advisory is available for the OpenShift File Integrity Operator 1.0.0: RHBA-2023:0037 OpenShift File Integrity Operator Bug Fix Update 6.2.9. OpenShift File Integrity Operator 0.1.32 The following advisory is available for the OpenShift File Integrity Operator 0.1.32: RHBA-2022:7095 OpenShift File Integrity Operator Bug Fix Update 6.2.9.1. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand from which namespace the alert originated. Now, the Operator sets the appropriate namespace, providing more information about the alert. ( BZ#2112394 ) Previously, The File Integrity Operator did not update the metrics service on Operator startup, causing the metrics targets to be unreachable. With this release, the File Integrity Operator now ensures the metrics service is updated on Operator startup. ( BZ#2115821 ) 6.2.10. 
OpenShift File Integrity Operator 0.1.30 The following advisory is available for the OpenShift File Integrity Operator 0.1.30: RHBA-2022:5538 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.10.1. New features and enhancements The File Integrity Operator is now supported on the following architectures: IBM Power IBM Z and LinuxONE 6.2.10.2. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand where the alert originated. Now, the Operator sets the appropriate namespace, increasing understanding of the alert. ( BZ#2101393 ) 6.2.11. OpenShift File Integrity Operator 0.1.24 The following advisory is available for the OpenShift File Integrity Operator 0.1.24: RHBA-2022:1331 OpenShift File Integrity Operator Bug Fix 6.2.11.1. New features and enhancements You can now configure the maximum number of backups stored in the FileIntegrity Custom Resource (CR) with the config.maxBackups attribute. This attribute specifies the number of AIDE database and log backups left over from the re-init process to keep on the node. Older backups beyond the configured number are automatically pruned. The default is set to five backups. 6.2.11.2. Bug fixes Previously, upgrading the Operator from versions older than 0.1.21 to 0.1.22 could cause the re-init feature to fail. This was a result of the Operator failing to update configMap resource labels. Now, upgrading to the latest version fixes the resource labels. ( BZ#2049206 ) Previously, when enforcing the default configMap script contents, the wrong data keys were compared. This resulted in the aide-reinit script not being updated properly after an Operator upgrade, and caused the re-init process to fail. Now, daemonSets run to completion and the AIDE database re-init process executes successfully. ( BZ#2072058 ) 6.2.12. OpenShift File Integrity Operator 0.1.22 The following advisory is available for the OpenShift File Integrity Operator 0.1.22: RHBA-2022:0142 OpenShift File Integrity Operator Bug Fix 6.2.12.1. Bug fixes Previously, a system with a File Integrity Operator installed might interrupt the OpenShift Container Platform update, due to the /etc/kubernetes/aide.reinit file. This occurred if the /etc/kubernetes/aide.reinit file was present, but later removed prior to the ostree validation. With this update, /etc/kubernetes/aide.reinit is moved to the /run directory so that it does not conflict with the OpenShift Container Platform update. ( BZ#2033311 ) 6.2.13. OpenShift File Integrity Operator 0.1.21 The following advisory is available for the OpenShift File Integrity Operator 0.1.21: RHBA-2021:4631 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.2.13.1. New features and enhancements The metrics related to FileIntegrity scan results and processing metrics are displayed on the monitoring dashboard on the web console. The results are labeled with the prefix of file_integrity_operator_ . If a node has an integrity failure for more than 1 second, the default PrometheusRule provided in the operator namespace alerts with a warning. 
The following dynamic file paths, related to the Machine Config Operator and Cluster Version Operator, are excluded from the default AIDE policy to help prevent false positives during node updates: /etc/machine-config-daemon/currentconfig /etc/pki/ca-trust/extracted/java/cacerts /etc/cvo/updatepayloads /root/.kube The AIDE daemon process has stability improvements over v0.1.16, and is more resilient to errors that might occur when the AIDE database is initialized. 6.2.13.2. Bug fixes Previously, when the Operator automatically upgraded, outdated daemon sets were not removed. With this release, outdated daemon sets are removed during the automatic upgrade. 6.2.14. Additional resources Understanding the File Integrity Operator 6.3. File Integrity Operator support 6.3.1. File Integrity Operator lifecycle The File Integrity Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously from OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 6.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Provide specific details, such as the section name and OpenShift Container Platform version. 6.4. Installing the File Integrity Operator 6.4.1. Installing the File Integrity Operator using the web console Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the File Integrity Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-file-integrity namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-file-integrity namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-file-integrity project that are reporting issues. 6.4.2. Installing the File Integrity Operator using the CLI Prerequisites You must have admin privileges. Procedure Create a Namespace object YAML file and apply it by running: USD oc create -f <file-name>.yaml Example Namespace object apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity 1 In OpenShift Container Platform 4.12, the pod security label must be set to privileged at the namespace level. 
Create the OperatorGroup object YAML file and apply it: USD oc create -f <file-name>.yaml Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity Create the Subscription object YAML file and apply it: USD oc create -f <file-name>.yaml Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: "stable" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace Verification Verify that the installation succeeded by inspecting the cluster service version (CSV): USD oc get csv -n openshift-file-integrity Verify that the File Integrity Operator is up and running: USD oc get deploy -n openshift-file-integrity 6.4.3. Additional resources The File Integrity Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 6.5. Updating the File Integrity Operator As a cluster administrator, you can update the File Integrity Operator on your OpenShift Container Platform cluster. 6.5.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 6.5.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 
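You can also track the same update from the command line. The following is a minimal sketch rather than an official procedure; it assumes the File Integrity Operator subscription created earlier in the openshift-file-integrity namespace:
# Report the subscription state and the CSV it currently points at
oc -n openshift-file-integrity get subscription file-integrity-operator -o jsonpath='{.status.state}{" "}{.status.installedCSV}{"\n"}'
# Watch the cluster service versions in the namespace until the phase reports Succeeded
oc -n openshift-file-integrity get csv -w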
For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 6.5.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 6.6. Understanding the File Integrity Operator The File Integrity Operator is an OpenShift Container Platform Operator that continually runs file integrity checks on the cluster nodes. It deploys a daemon set that initializes and runs privileged advanced intrusion detection environment (AIDE) containers on each node, providing a status object with a log of files that are modified during the initial run of the daemon set pods. Important Currently, only Red Hat Enterprise Linux CoreOS (RHCOS) nodes are supported. 6.6.1. Creating the FileIntegrity custom resource An instance of a FileIntegrity custom resource (CR) represents a set of continuous file integrity scans for one or more nodes. Each FileIntegrity CR is backed by a daemon set running AIDE on the nodes matching the FileIntegrity CR specification. Procedure Create the following example FileIntegrity CR named worker-fileintegrity.yaml to enable scans on worker nodes: Example FileIntegrity CR apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: "" tolerations: 2 - key: "myNode" operator: "Exists" effect: "NoSchedule" config: 3 name: "myconfig" namespace: "openshift-file-integrity" key: "config" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7 1 Defines the selector for scheduling node scans. 2 Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration that allows running on control plane and infra nodes is applied. 3 Define a ConfigMap containing an AIDE configuration to use. 4 The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node might be resource intensive, so it can be useful to specify a longer interval. Default is 900 seconds (15 minutes). 5 The maximum number of AIDE database and log backups (leftover from the re-init process) to keep on a node. Older backups beyond this number are automatically pruned by the daemon. Default is set to 5. 6 The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. 7 The running status of the FileIntegrity instance. Statuses are Initializing , Pending , or Active . 
Initializing The FileIntegrity object is currently initializing or re-initializing the AIDE database. Pending The FileIntegrity deployment is still being created. Active The scans are active and ongoing. Apply the YAML file to the openshift-file-integrity namespace: USD oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity Verification Confirm the FileIntegrity object was created successfully by running the following command: USD oc get fileintegrities -n openshift-file-integrity Example output NAME AGE worker-fileintegrity 14s 6.6.2. Checking the FileIntegrity custom resource status The FileIntegrity custom resource (CR) reports its status through the .status.phase subresource. Procedure To query the FileIntegrity CR status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status.phase }" Example output Active 6.6.3. FileIntegrity custom resource phases Pending - The phase after the custom resource (CR) is created. Active - The phase when the backing daemon set is up and running. Initializing - The phase when the AIDE database is being reinitialized. 6.6.4. Understanding the FileIntegrityNodeStatuses object The scan results of the FileIntegrity CR are reported in another object called FileIntegrityNodeStatuses . USD oc get fileintegritynodestatuses Example output NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s Note It might take some time for the FileIntegrityNodeStatus object results to be available. There is one result object per node. The nodeName attribute of each FileIntegrityNodeStatus object corresponds to the node being scanned. The status of the file integrity scan is represented in the results array, which holds scan conditions. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq The FileIntegrityNodeStatus object reports the latest status of an AIDE run and exposes the status as Failed , Succeeded , or Errored in a status field. USD oc get fileintegritynodestatuses -w Example output NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded 6.6.5. 
FileIntegrityNodeStatus CR status types These conditions are reported in the results array of the corresponding FileIntegrityNodeStatus CR status: Succeeded - The integrity check passed; the files and directories covered by the AIDE check have not been modified since the database was last initialized. Failed - The integrity check failed; some files or directories covered by the AIDE check have been modified since the database was last initialized. Errored - The AIDE scanner encountered an internal error. 6.6.5.1. FileIntegrityNodeStatus CR success example Example output of a condition with a success status [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:57Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:46:03Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:48Z" } ] In this case, all three scans succeeded and so far there are no other conditions. 6.6.5.2. FileIntegrityNodeStatus CR failure status example To simulate a failure condition, modify one of the files AIDE tracks. For example, modify /etc/resolv.conf on one of the worker nodes: USD oc debug node/ip-10-0-130-192.ec2.internal Example output Creating debug namespace/openshift-debug-node-ldfbj ... Starting pod/ip-10-0-130-192ec2internal-debug ... To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo "# integrity test" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod ... Removing debug namespace/openshift-debug-node-ldfbj ... After some time, the Failed condition is reported in the results array of the corresponding FileIntegrityNodeStatus object. The Succeeded condition is retained, which allows you to pinpoint the time the check failed. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r Alternatively, to query all nodes without specifying an object name, run: USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq Example output [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:54:14Z" }, { "condition": "Failed", "filesChanged": 1, "lastProbeTime": "2020-09-15T12:57:20Z", "resultConfigMapName": "aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "resultConfigMapNamespace": "openshift-file-integrity" } ] The Failed condition points to a config map that gives more details about what exactly failed and why: USD oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Example output Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! 
Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none> Due to the config map data size limit, AIDE logs over 1 MB are added to the failure config map as a base64-encoded gzip archive. Use the following command to extract the log: USD oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip Note Compressed logs are indicated by the presence of a file-integrity.openshift.io/compressed annotation key in the config map. 6.6.6. Understanding events Transitions in the status of the FileIntegrity and FileIntegrityNodeStatus objects are logged by events . The creation time of the event reflects the latest transition, such as Initializing to Active , and not necessarily the latest scan result. However, the newest event always reflects the most recent status. USD oc get events --field-selector reason=FileIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active When a node scan fails, an event is created with the added/changed/removed file counts and the config map information. USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed Changes to the number of added, changed, or removed files result in a new event, even if the status of the node has not transitioned. 
USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 6.7. Configuring the Custom File Integrity Operator 6.7.1. Viewing FileIntegrity object attributes As with any Kubernetes custom resource (CR), you can run oc explain fileintegrity , and then look at the individual attributes using: USD oc explain fileintegrity.spec USD oc explain fileintegrity.spec.config 6.7.2. Important attributes Table 6.1. Important spec and spec.config attributes Attribute Description spec.nodeSelector A map of key-value pairs that must match the node's labels in order for the AIDE pods to be schedulable on that node. The typical use is to set only a single key-value pair where node-role.kubernetes.io/worker: "" schedules AIDE on all worker nodes, node.openshift.io/os_id: "rhcos" schedules on all Red Hat Enterprise Linux CoreOS (RHCOS) nodes. spec.debug A boolean attribute. If set to true , the daemon running in the AIDE daemon set's pods outputs extra information. spec.tolerations Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration is applied, which allows the AIDE pods to run on control plane nodes. spec.config.gracePeriod The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node can be resource intensive, so it can be useful to specify a longer interval. Defaults to 900 , or 15 minutes. spec.config.maxBackups The maximum number of AIDE database and log backups leftover from the re-init process to keep on a node. Older backups beyond this number are automatically pruned by the daemon. spec.config.name Name of a configMap that contains custom AIDE configuration. If omitted, a default configuration is created. spec.config.namespace Namespace of a configMap that contains custom AIDE configuration. If unset, the FIO generates a default configuration suitable for RHCOS systems. spec.config.key Key that contains actual AIDE configuration in a config map specified by name and namespace . The default value is aide.conf . spec.config.initialDelay The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. This attribute is optional. 6.7.3. Examining the default configuration The default File Integrity Operator configuration is stored in a config map with the same name as the FileIntegrity CR. 
Procedure To examine the default config, run: USD oc describe cm/worker-fileintegrity 6.7.4. Understanding the default File Integrity Operator configuration Below is an excerpt from the aide.conf key of the config map: @@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\..* PERMS /hostroot/root/ CONTENT_EX The default configuration for a FileIntegrity instance provides coverage for files under the following directories: /root /boot /usr /etc The following directories are not covered: /var /opt Some OpenShift Container Platform-specific excludes under /etc/ 6.7.5. Supplying a custom AIDE configuration Any entries that configure AIDE internal behavior such as DBDIR , LOGDIR , database , and database_out are overwritten by the Operator. The Operator prefixes all paths to be watched for integrity changes with /hostroot/ . This makes it easier to reuse existing AIDE configurations that are often not tailored for a containerized environment and that start from the root directory. Note /hostroot is the directory where the pods running AIDE mount the host's file system. Changing the configuration triggers a reinitialization of the database. 6.7.6. Defining a custom File Integrity Operator configuration This example focuses on defining a custom configuration for a scanner that runs on the control plane nodes based on the default configuration provided for the worker-fileintegrity CR. This workflow might be useful if you plan to deploy custom software running as a daemon set and storing its data under /opt/mydaemon on the control plane nodes. Procedure Make a copy of the default configuration. Edit the default configuration with the files that must be watched or excluded. Store the edited contents in a new config map. Point the FileIntegrity object to the new config map through the attributes in spec.config . Extract the default configuration: USD oc extract cm/worker-fileintegrity --keys=aide.conf This creates a file named aide.conf that you can edit. To illustrate how the Operator post-processes the paths, this example adds an exclude directory without the prefix: USD vim aide.conf Example output /hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db Exclude a path specific to control plane nodes: !/opt/mydaemon/ Store the other content in /etc : /hostroot/etc/ CONTENT_EX Create a config map based on this file: USD oc create cm master-aide-conf --from-file=aide.conf Define a FileIntegrity CR manifest that references the config map: apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: "" config: name: master-aide-conf namespace: openshift-file-integrity The Operator processes the provided config map file and stores the result in a config map with the same name as the FileIntegrity object: USD oc describe cm/master-fileintegrity | grep /opt/mydaemon Example output !/hostroot/opt/mydaemon 6.7.7. 
Changing the custom File Integrity configuration To change the File Integrity configuration, never change the generated config map. Instead, change the config map that is linked to the FileIntegrity object through the spec.config.name , spec.config.namespace , and spec.config.key attributes. 6.8. Performing advanced Custom File Integrity Operator tasks 6.8.1. Reinitializing the database If the File Integrity Operator detects a change that was planned, you might need to reinitialize the database. Procedure Annotate the FileIntegrity custom resource (CR) with file-integrity.openshift.io/re-init : USD oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init= The old database and log files are backed up and a new database is initialized. The old database and logs are retained on the nodes under /etc/kubernetes , as seen in the following output from a pod spawned using oc debug : Example output ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55 To provide some permanence of record, the resulting config maps are not owned by the FileIntegrity object, so manual cleanup is necessary. As a result, any integrity failures would still be visible in the FileIntegrityNodeStatus object. 6.8.2. Machine config integration In OpenShift Container Platform 4, the cluster node configuration is delivered through MachineConfig objects. You can assume that the changes to files that are caused by a MachineConfig object are expected and should not cause the file integrity scan to fail. To suppress changes to files caused by MachineConfig object updates, the File Integrity Operator watches the node objects; when a node is being updated, the AIDE scans are suspended for the duration of the update. When the update finishes, the database is reinitialized and the scans resume. This pause and resume logic only applies to updates through the MachineConfig API, as they are reflected in the node object annotations. 6.8.3. Exploring the daemon sets Each FileIntegrity object represents a scan on a number of nodes. The scan itself is performed by pods managed by a daemon set. To find the daemon set that represents a FileIntegrity object, run: USD oc -n openshift-file-integrity get ds/aide-worker-fileintegrity To list the pods in that daemon set, run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity To view logs of a single AIDE pod, call oc logs on one of the pods. USD oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6 Example output Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check ... The config maps created by the AIDE daemon are not retained and are deleted after the File Integrity Operator processes them. However, on failure and error, the contents of these config maps are copied to the config map that the FileIntegrityNodeStatus object points to. 6.9. Troubleshooting the File Integrity Operator 6.9.1. General troubleshooting Issue You want to generally troubleshoot issues with the File Integrity Operator. 
Resolution Enable the debug flag in the FileIntegrity object. The debug flag increases the verbosity of the daemons that run in the DaemonSet pods and run the AIDE checks. 6.9.2. Checking the AIDE configuration Issue You want to check the AIDE configuration. Resolution The AIDE configuration is stored in a config map with the same name as the FileIntegrity object. All AIDE configuration config maps are labeled with file-integrity.openshift.io/aide-conf . 6.9.3. Determining the FileIntegrity object's phase Issue You want to determine if the FileIntegrity object exists and see its current status. Resolution To see the FileIntegrity object's current status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status }" Once the FileIntegrity object and the backing daemon set are created, the status should switch to Active . If it does not, check the Operator pod logs. 6.9.4. Determining that the daemon set's pods are running on the expected nodes Issue You want to confirm that the daemon set exists and that its pods are running on the nodes you expect them to run on. Resolution Run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity Note Adding -owide includes the IP address of the node that the pod is running on. To check the logs of the daemon pods, run oc logs . Check the return value of the AIDE command to see if the check passed or failed.
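The individual checks in this section can also be run together as a quick health pass. The following is a minimal sketch rather than an official procedure; it assumes the default openshift-file-integrity namespace, the worker-fileintegrity object used throughout this section, and that the jq utility is available:
# AIDE configuration config maps created by the Operator
oc -n openshift-file-integrity get cm -l file-integrity.openshift.io/aide-conf
# FileIntegrity phase; anything other than Active warrants a look at the Operator pod logs
oc -n openshift-file-integrity get fileintegrities/worker-fileintegrity -o jsonpath='{.status.phase}{"\n"}'
# Daemon set pods and the nodes they are scheduled on
oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity -owide
# Nodes that have a Failed condition recorded in their FileIntegrityNodeStatus
oc get fileintegritynodestatuses.fileintegrity.openshift.io -o json | jq -r '.items[] | select(any(.results[]?; .condition == "Failed")) | .nodeName'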
[ "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-file-integrity", "oc get deploy -n openshift-file-integrity", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7", "oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity", "oc get fileintegrities -n openshift-file-integrity", "NAME AGE worker-fileintegrity 14s", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"", "Active", "oc get fileintegritynodestatuses", "NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "oc get fileintegritynodestatuses -w", "NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]", "oc debug node/ip-10-0-130-192.ec2.internal", "Creating debug namespace/openshift-debug-node-ldfbj 
Starting pod/ip-10-0-130-192ec2internal-debug To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj", "oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]", "oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>", "oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip", "oc get events --field-selector reason=FileIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! 
a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc explain fileintegrity.spec", "oc explain fileintegrity.spec.config", "oc describe cm/worker-fileintegrity", "@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX", "oc extract cm/worker-fileintegrity --keys=aide.conf", "vim aide.conf", "/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db", "!/opt/mydaemon/", "/hostroot/etc/ CONTENT_EX", "oc create cm master-aide-conf --from-file=aide.conf", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity", "oc describe cm/master-fileintegrity | grep /opt/mydaemon", "!/hostroot/opt/mydaemon", "oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=", "ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 
1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55", "oc -n openshift-file-integrity get ds/aide-worker-fileintegrity", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6", "Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_and_compliance/file-integrity-operator
5.4. Load Balancing Policy: Evenly_Distributed
5.4. Load Balancing Policy: Evenly_Distributed Figure 5.1. Evenly Distributed Scheduling Policy An evenly distributed load balancing policy selects the host for a new virtual machine according to lowest CPU load or highest available memory. The maximum CPU load and minimum available memory that are allowed for hosts in a cluster for a set amount of time are defined by the evenly distributed scheduling policy's parameters. Beyond these limits, the environment's performance will degrade. The evenly distributed policy allows an administrator to set these levels for running virtual machines. If a host has reached the defined maximum CPU load or minimum available memory and the host stays there for more than the set time, virtual machines on that host are migrated one by one to the host in the cluster that has the lowest CPU load or highest available memory, depending on which parameter is being used. Host resources are checked once per minute, and one virtual machine is migrated at a time until the host CPU load is below the defined limit or the host available memory is above the defined limit.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/load_balancing_policy_even_distribution
B.31.3. RHSA-2011:0169 - Critical: java-1.5.0-ibm security and bug fix update
B.31.3. RHSA-2011:0169 - Critical: java-1.5.0-ibm security and bug fix update Updated java-1.5.0-ibm packages that fix multiple security issues and one bug are now available for Red Hat Enterprise Linux 4 Extras, and Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. The IBM 1.5.0 Java release includes the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. CVE-2010-3553 , CVE-2010-3557 , CVE-2010-3571 This update fixes multiple vulnerabilities in the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. Detailed vulnerability descriptions are linked from the IBM "Security alerts" page. Bug Fix BZ# 659710 An error in the java-1.5.0-ibm RPM spec file caused an incorrect path to be included in HtmlConverter, preventing it from running. All users of java-1.5.0-ibm are advised to upgrade to these updated packages, containing the IBM 1.5.0 SR12-FP3 Java release. All running instances of IBM Java must be restarted for this update to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0169
Chapter 8. Optional: Customizing boot options
Chapter 8. Optional: Customizing boot options When you are installing RHEL on x86_64 or ARM64 architectures, you can edit the boot options to customize the installation process based on your specific environment. 8.1. Boot options You can append multiple options separated by space to the boot command line. Boot options specific to the installation program always start with inst . The following are the available boot options: Options with an equals "=" sign You must specify a value for boot options that use the = symbol. For example, the inst.vncpassword= option must contain a value, in this example, a password. The correct syntax for this example is inst.vncpassword=password . Options without an equals "=" sign This boot option does not accept any values or parameters. For example, the rd.live.check option forces the installation program to verify the installation media before starting the installation. If this boot option is present, the installation program performs the verification and if the boot option is not present, the verification is skipped. You can customize boot options for a particular menu entry in the following ways: On BIOS-based systems: Press the Tab key and add custom boot options to the command line. You can also access the boot: prompt by pressing the Esc key but no required boot options are preset. In this scenario, you must always specify the Linux option before using any other boot options. For more information, see Editing the boot: prompt in BIOS On UEFI-based systems: Press the e key and add custom boot options to the command line. When ready press Ctrl+X to boot the modified option. For more information, see Editing boot options for the UEFI-based systems 8.2. Editing the boot: prompt in BIOS When using the boot: prompt, the first option must always specify the installation program image file that you want to load. In most cases, you can specify the image using the keyword. You can specify additional options according to your requirements. Prerequisites You have created bootable installation media (USB, CD or DVD). You have booted the installation from the media, and the installation boot menu is open. Procedure With the boot menu open, press the Esc key on your keyboard. The boot: prompt is now accessible. Press the Tab key on your keyboard to display the help commands. Press the Enter key on your keyboard to start the installation with your options. To return from the boot: prompt to the boot menu, restart the system and boot from the installation media again. Additional resources dracut.cmdline(7) man page on your system 8.3. Editing predefined boot options using the > prompt On BIOS-based AMD64 and Intel 64 systems, you can use the > prompt to edit predefined boot options. Prerequisites You have created bootable installation media (USB, CD or DVD). You have booted the installation from the media, and the installation boot menu is open. Procedure From the boot menu, select an option and press the Tab key on your keyboard. The > prompt is accessible and displays the available options. Optional: To view a full set of options, select Test this media and install RHEL 9 . Append the options that you require to the > prompt. For example, to enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140, add fips=1 : Press Enter to start the installation. Press Esc to cancel editing and return to the boot menu. 8.4. 
Editing boot options for UEFI-based systems You can edit the GRUB boot menu on UEFI-based systems during a RHEL installation to customize parameters. This allows you to configure specific settings and ensure that the installation meets your requirements. Prerequisites You have created bootable installation media (USB, CD or DVD). You have booted the installation from the media, and the installation boot menu is open. Procedure From the boot menu window, select the required option and press e . On UEFI systems, the kernel command line starts with linuxefi . Move the cursor to the end of the linuxefi kernel command line. Edit the parameters as required. For example, to enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140, add fips=1 : When you finish editing, press Ctrl + X to start the installation using the specified options. 8.5. Updating drivers during installation You can update drivers during the Red Hat Enterprise Linux installation process. Updating drivers is completely optional. Do not perform a driver update unless it is necessary. Ensure you have been notified by Red Hat, your hardware vendor, or a trusted third-party vendor that a driver update is required during Red Hat Enterprise Linux installation. 8.5.1. Overview Red Hat Enterprise Linux supports drivers for many hardware devices, but some newly released drivers might not be supported. A driver update should only be performed if an unsupported driver prevents the installation from completing. Updating drivers during installation is typically only required to support a particular configuration. For example, installing drivers for a storage adapter card that provides access to your system's storage devices. Warning Driver update disks may disable conflicting kernel drivers. In rare cases, unloading a kernel module may cause installation errors. 8.5.2. Types of driver update Red Hat, your hardware vendor, or a trusted third party provides the driver update as an ISO image file. Once you receive the ISO image file, choose the type of driver update. Types of driver update Automatic In this driver update method, a storage device (including a CD, DVD, or USB flash drive) labeled OEMDRV is physically connected to the system. If the OEMDRV storage device is present when the installation starts, it is treated as a driver update disk, and the installation program automatically loads its drivers. Assisted The installation program prompts you to locate a driver update. You can use any local storage device with a label other than OEMDRV . The inst.dd boot option is specified when starting the installation. If you use this option without any parameters, the installation program displays all of the storage devices connected to the system, and prompts you to select a device that contains a driver update. Manual Manually specify a path to a driver update image or an RPM package. You can use any local storage device with a label other than OEMDRV , or a network location accessible from the installation system. The inst.dd=location boot option is specified when starting the installation, where location is the path to a driver update disk or ISO image. When you specify this option, the installation program attempts to load any driver updates found at the specified location. With manual driver updates, you can specify local storage devices, or a network location (HTTP, HTTPS or FTP server). 
You can use both inst.dd=location and inst.dd simultaneously, where location is the path to a driver update disk or ISO image. In this scenario, the installation program attempts to load any available driver updates from the location and also prompts you to select a device that contains the driver update. Limitations On UEFI systems with the Secure Boot technology enabled, all drivers must be signed with a valid certificate. Red Hat drivers are signed by one of Red Hat's private keys and authenticated by its corresponding public key in the kernel. If you load additional, separate drivers, verify that they are signed. 8.5.3. Preparing a driver update This procedure describes how to prepare a driver update on a CD or DVD. Prerequisites You have received the driver update ISO image from Red Hat, your hardware vendor, or a trusted third-party vendor. You have burned the driver update ISO image to a CD or DVD. Warning If only a single ISO image file ending in .iso is available on the CD or DVD, the burn process has not been successful. See your system's burning software documentation for instructions on how to burn ISO images to a CD or DVD. Procedure Insert the driver update CD or DVD into your system's CD/DVD drive, and browse it using the system's file manager tool. Verify that a single file rhdd3 is available. rhdd3 is a signature file that contains the driver description. The medium also contains a directory named rpms , which contains the RPM packages with the actual drivers for various architectures. 8.5.4. Performing an automatic driver update This procedure describes how to perform an automatic driver update during installation. Prerequisites You have placed the driver update image on a standard disk partition with an OEMDRV label or burned the OEMDRV driver update image to a CD or DVD. Advanced storage, such as RAID or LVM volumes, might not be accessible during the driver update process. You have connected a block device with an OEMDRV volume label to your system, or inserted the prepared CD or DVD into your system's CD/DVD drive before starting the installation process. Procedure When you complete the prerequisite steps, the drivers are loaded automatically when the installation program starts and are installed during the system's installation process. 8.5.5. Performing an assisted driver update This procedure describes how to perform an assisted driver update during installation. Prerequisites You have connected a block device without an OEMDRV volume label to your system and copied the driver disk image to this device, or you have prepared a driver update CD or DVD and inserted it into your system's CD or DVD drive before starting the installation process. Note If you burn an ISO image file to a CD or DVD but it does not have the OEMDRV volume label, you can use the inst.dd option with no arguments. The installation program provides an option to scan and select drivers from the CD or DVD. In this scenario, the installation program does not prompt you to select a driver update ISO image. Another scenario is to use the CD or DVD with the inst.dd=location boot option; this allows the installation program to automatically scan the CD or DVD for driver updates. For more information, see Performing a manual driver update . Procedure From the boot menu window, press the Tab key on your keyboard to display the boot command line. Append the inst.dd boot option to the command line and press Enter to execute the boot process. From the menu, select a local disk partition or a CD or DVD device. 
The installation program scans for ISO files, or driver update RPM packages. Optional: Select the driver update ISO file. This step is not required if the selected device or partition contains driver update RPM packages rather than an ISO image file, for example, an optical drive containing a driver update CD or DVD. Select the required drivers. Use the number keys on your keyboard to toggle the driver selection. Press c to install the selected driver. The selected driver is loaded and the installation process starts. 8.5.6. Performing a manual driver update This procedure describes how to perform a manual driver update during installation. Prerequisites You have placed the driver update ISO image file on a USB flash drive or a web server and connected it to your computer. Procedure From the boot menu window, press the Tab key on your keyboard to display the boot command line. Append the inst.dd=location boot option to the command line, where location is a path to the driver update. Typically, the image file is located on a web server, for example, http://server.example.com/dd.iso, or on a USB flash drive, for example, /dev/sdb1 . It is also possible to specify an RPM package containing the driver update, for example http://server.example.com/dd.rpm. Press Enter to execute the boot process. The drivers available at the specified location are automatically loaded and the installation process starts. Additional resources The inst.dd boot option 8.5.7. Disabling a driver This procedure describes how to disable a malfunctioning driver. Prerequisites You have booted the installation program boot menu. Procedure From the boot menu, press the Tab key on your keyboard to display the boot command line. Append the modprobe.blacklist=driver_name boot option to the command line. Replace driver_name with the name of the driver or drivers you want to disable, for example: Drivers disabled using the modprobe.blacklist= boot option remain disabled on the installed system and appear in the /etc/modprobe.d/anaconda-blacklist.conf file. Press Enter to execute the boot process. 8.6. Additional resources For a list of all boot options to customize the installation program's behavior, see Boot options reference .
[ ">vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-5-0-BaseOS-x86_64 rd.live.check quiet fips=1", "linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-4-0-BaseOS-x86_64 rd.live. check quiet fips=1", "modprobe.blacklist=ahci" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/optional-customizing-boot-options_rhel-installer
15.3.4. Installing via NFS
15.3.4. Installing via NFS The NFS dialog applies only if you selected NFS Image in the Installation Method dialog. If you used the repo=nfs boot option, you already specified a server and path. Figure 15.10. NFS Setup Dialog Enter the domain name or IP address of your NFS server in the NFS server name field. For example, if you are installing from a host named eastcoast in the domain example.com , enter eastcoast.example.com . Enter the name of the exported directory in the Red Hat Enterprise Linux 6.9 directory field: If the NFS server is exporting a mirror of the Red Hat Enterprise Linux installation tree, enter the directory which contains the root of the installation tree. If everything was specified properly, a message appears indicating that the installation program for Red Hat Enterprise Linux is running. If the NFS server is exporting the ISO image of the Red Hat Enterprise Linux DVD, enter the directory which contains the ISO image. If you followed the setup described in Section 12.1.2, "Preparing for an NFS Installation" , the exported directory is the one that you specified as publicly_available_directory . Specify any NFS mount options that you require in the NFS mount options field. Refer to the man pages for mount and nfs for a comprehensive list of options. If you do not require any mount options, leave the field empty. Proceed with Chapter 16, Installing Using Anaconda .
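As a sketch only, the same information can be supplied at boot time with the repo= option so that this dialog is pre-filled or skipped; the host name and export path below reuse the examples from this section and are not additional requirements:

linux repo=nfs:eastcoast.example.com:/publicly_available_directory

If the exported directory holds the ISO image of the DVD rather than an installation tree, point the path at the directory that contains the ISO image, exactly as you would in the dialog above.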
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-begininstall-nfs-ppc
Chapter 5. Advisories related to this release
Chapter 5. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2024:4560 RHSA-2024:4561 RHSA-2024:4562 RHSA-2024:4563 Revised on 2024-07-22 15:53:32 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.422/openjdk8-422-advisory_openjdk
Developing Applications with Red Hat build of Apache Camel for Quarkus
Developing Applications with Red Hat build of Apache Camel for Quarkus Red Hat build of Apache Camel 4.8 Developing Applications with Red Hat build of Apache Camel for Quarkus
[ "<project> <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 3.15.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> </dependencies> </project>", "import org.apache.camel.builder.RouteBuilder; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:foo?period=1000\") .log(\"Hello World\"); } }", "import org.apache.camel.builder.RouteBuilder; import static org.apache.camel.builder.endpoint.StaticEndpointBuilders.timer; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(timer(\"foo\").period(1000)) .log(\"Hello World\"); } }", "camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } }", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. 
*/ @Named(\"log\") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } }", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = \"timer.period\", defaultValue = \"1000\") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF(\"timer:foo?period=%s\", period) .setBody(exchange -> \"Incremented the counter: \" + counter.increment()) .to(\"log:cdi-example?showExchangePattern=false&showBodyType=false\"); } }", "import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } }", "import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject(\"direct:myDirect1\") ProducerTemplate producerTemplate; @EndpointInject(\"direct:myDirect2\") FluentProducerTemplate fluentProducerTemplate; @EndpointInject(\"direct:myDirect3\") DirectEndpoint directEndpoint; @Produce(\"direct:myDirect4\") ProducerTemplate produceProducer; @Produce(\"direct:myDirect5\") FluentProducerTemplate produceProducerFluent; }", "import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce(\"direct:myDirect6\") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello(\"Kermit\") } }", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named(\"myNamedBean\") @RegisterForReflection public class NamedBean { public String hello(String name) { return \"Hello \" + name + \" from the NamedBean\"; } }", "import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:named\") .bean(\"myNamedBean\", \"hello\"); /* ... 
which is an equivalent of the following: */ from(\"direct:named\") .to(\"bean:myNamedBean?method=hello\"); } }", "import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier(\"myBeanIdentifier\") @RegisterForReflection public class MyBean { public String hello(String name) { return \"Hello \" + name + \" from MyBean\"; } }", "import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:start\") .bean(\"myBeanIdentifier\", \"Camel\"); } }", "import org.apache.camel.Consume; public class Foo { @Consume(\"activemq:cheese\") public void onCheese(String name) { } }", "from(\"activemq:cheese\").bean(\"foo1234\", \"onCheese\")", "curl -s localhost:9000/q/health/live", "curl -s localhost:9000/q/health/ready", "mvn clean compile quarkus:dev", "<dependencies> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-opentelemetry</artifactId> </dependency> <dependency> <groupId>io.quarkiverse.micrometer.registry</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> </dependencies>", ".to(\"micrometer:counter:org.acme.observability.greeting-provider?tags=type=events,purpose=example\")", "@Inject MeterRegistry registry;", "void countGreeting(Exchange exchange) { registry.counter(\"org.acme.observability.greeting\", \"type\", \"events\", \"purpose\", \"example\").increment(); }", "from(\"platform-http:/greeting\") .removeHeaders(\"*\") .process(this::countGreeting)", "@ApplicationScoped @Named(\"timerCounter\") public class TimerCounter { @Counted(value = \"org.acme.observability.timer-counter\", extraTags = { \"purpose\", \"example\" }) public void count() { } }", ".bean(\"timerCounter\", \"count\")", "curl -s localhost:9000/q/metrics", "curl -s localhost:9000/q/metrics | grep -i 'purpose=\"example\"'", "<dependencies> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-opentelemetry</artifactId> </dependency> <dependency> <groupId>io.quarkiverse.micrometer.registry</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> </dependencies>", "We are using a property placeholder to be able to test this example in convenient way in a cloud environment quarkus.otel.exporter.otlp.traces.endpoint = http://USD{TELEMETRY_COLLECTOR_COLLECTOR_SERVICE_HOST:localhost}:4317", "docker-compose up -d", "mvn clean package java -jar target/quarkus-app/quarkus-run.jar [io.quarkus] (main) camel-quarkus-examples-... started in 1.163s. Listening on: http://0.0.0.0:8080", "mvn clean package -Pnative ./target/*-runner [io.quarkus] (main) camel-quarkus-examples-... started in 0.013s. 
Listening on: http://0.0.0.0:8080", "Charset.defaultCharset(), US-ASCII, ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, UTF-16", "quarkus.native.add-all-charsets = true", "quarkus.native.user-country=US quarkus.native.user-language=en", "quarkus.native.resources.includes = docs/*,images/* quarkus.native.resources.excludes = docs/ignored.adoc,images/ignored.png", "onException(MyException.class).handled(true); from(\"direct:route-that-could-produce-my-exception\").throw(MyException.class);", "import io.quarkus.runtime.annotations.RegisterForReflection; @RegisterForReflection class MyClassAccessedReflectively { } @RegisterForReflection( targets = { org.third-party.Class1.class, org.third-party.Class2.class } ) class ReflectionRegistrations { }", "quarkus.camel.native.reflection.include-patterns = org.apache.commons.lang3.tuple.* quarkus.camel.native.reflection.exclude-patterns = org.apache.commons.lang3.tuple.*Triple", "quarkus.index-dependency.commons-lang3.group-id = org.apache.commons quarkus.index-dependency.commons-lang3.artifact-id = commons-lang3", "Client side SSL quarkus.cxf.client.hello.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/hello quarkus.cxf.client.hello.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService 1 quarkus.cxf.client.hello.trust-store-type = pkcs12 2 quarkus.cxf.client.hello.trust-store = client-truststore.pkcs12 quarkus.cxf.client.hello.trust-store-password = client-truststore-password", "Server side SSL quarkus.tls.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.key-store.p12.password = localhost-keystore-password quarkus.tls.key-store.p12.alias = localhost quarkus.tls.key-store.p12.alias-password = localhost-keystore-password", "Server keystore for Simple TLS quarkus.tls.localhost-pkcs12.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.localhost-pkcs12.key-store.p12.password = localhost-keystore-password quarkus.tls.localhost-pkcs12.key-store.p12.alias = localhost quarkus.tls.localhost-pkcs12.key-store.p12.alias-password = localhost-keystore-password Server truststore for Mutual TLS quarkus.tls.localhost-pkcs12.trust-store.p12.path = localhost-truststore.pkcs12 quarkus.tls.localhost-pkcs12.trust-store.p12.password = localhost-truststore-password Select localhost-pkcs12 as the TLS configuration for the HTTP server quarkus.http.tls-configuration-name = localhost-pkcs12 Do not allow any clients which do not prove their indentity through an SSL certificate quarkus.http.ssl.client-auth = required CXF service quarkus.cxf.endpoint.\"/mTls\".implementor = io.quarkiverse.cxf.it.auth.mtls.MTlsHelloServiceImpl CXF client with a properly set certificate for mTLS quarkus.cxf.client.mTls.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/mTls quarkus.cxf.client.mTls.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService quarkus.cxf.client.mTls.key-store = target/classes/client-keystore.pkcs12 quarkus.cxf.client.mTls.key-store-type = pkcs12 quarkus.cxf.client.mTls.key-store-password = client-keystore-password quarkus.cxf.client.mTls.key-password = client-keystore-password quarkus.cxf.client.mTls.trust-store = target/classes/client-truststore.pkcs12 quarkus.cxf.client.mTls.trust-store-type = pkcs12 quarkus.cxf.client.mTls.trust-store-password = client-truststore-password Include the keystores in the native executable quarkus.native.resources.includes = *.pkcs12,*.jks", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"HttpsSecurityServicePolicy\" 
xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:TransportBinding> <wsp:Policy> <sp:TransportToken> <wsp:Policy> <sp:HttpsToken RequireClientCertificate=\"false\" /> </wsp:Policy> </sp:TransportToken> <sp:IncludeTimestamp /> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic128 /> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:TransportBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", "package io.quarkiverse.cxf.it.security.policy; import jakarta.jws.WebMethod; import jakarta.jws.WebService; import org.apache.cxf.annotations.Policy; /** * A service implementation with a transport policy set */ @WebService(serviceName = \"HttpsPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"https-policy.xml\") public interface HttpsPolicyHelloService extends AbstractHelloService { @WebMethod @Override public String hello(String text); }", "ERROR [org.apa.cxf.ws.pol.PolicyVerificationInInterceptor] Inbound policy verification failed: These policy alternatives can not be satisfied: {http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}TransportBinding: TLS is not enabled", "quarkus.cxf.client.basicAuth.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuth.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth quarkus.cxf.client.basicAuth.username = bob quarkus.cxf.client.basicAuth.password = bob234", "quarkus.cxf.client.basicAuthSecureWsdl.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuthSecureWsdl.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuthSecureWsdl quarkus.cxf.client.basicAuthSecureWsdl.username = bob quarkus.cxf.client.basicAuthSecureWsdl.password = USD{client-server.bob.password} quarkus.cxf.client.basicAuthSecureWsdl.secure-wsdl-access = true", "quarkus.http.auth.basic = true quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.alice = alice123 quarkus.security.users.embedded.roles.alice = admin quarkus.security.users.embedded.users.bob = bob234 quarkus.security.users.embedded.roles.bob = app-user", "package io.quarkiverse.cxf.it.auth.basic; import jakarta.annotation.security.RolesAllowed; import jakarta.jws.WebService; import io.quarkiverse.cxf.it.HelloService; @WebService(serviceName = \"HelloService\", targetNamespace = HelloService.NS) @RolesAllowed(\"app-user\") public class BasicAuthHelloServiceImpl implements HelloService { @Override public String hello(String person) { return \"Hello \" + person + \"!\"; } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"UsernameTokenSecurityServicePolicy\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:sp13=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200802\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <wsp:Policy> <sp:WssUsernameToken11 /> <sp13:Created /> <sp13:Nonce /> </wsp:Policy> </sp:UsernameToken> </wsp:Policy> </sp:SupportingTokens> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", 
"@WebService(serviceName = \"UsernameTokenPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"username-token-policy.xml\") public interface UsernameTokenPolicyHelloService extends AbstractHelloService { }", "A service with a UsernameToken policy assertion quarkus.cxf.endpoint.\"/helloUsernameToken\".implementor = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloServiceImpl quarkus.cxf.endpoint.\"/helloUsernameToken\".security.callback-handler = #usernameTokenPasswordCallback These properties are used in UsernameTokenPasswordCallback and in the configuration of the helloUsernameToken below wss.user = cxf-user wss.password = secret A client with a UsernameToken policy assertion quarkus.cxf.client.helloUsernameToken.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/helloUsernameToken quarkus.cxf.client.helloUsernameToken.service-interface = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloService quarkus.cxf.client.helloUsernameToken.security.username = USD{wss.user} quarkus.cxf.client.helloUsernameToken.security.password = USD{wss.password}", "package io.quarkiverse.cxf.it.security.policy; import java.io.IOException; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.UnsupportedCallbackException; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.wss4j.common.ext.WSPasswordCallback; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped @Named(\"usernameTokenPasswordCallback\") /* We refer to this bean by this name from application.properties */ public class UsernameTokenPasswordCallback implements CallbackHandler { /* These two configuration properties are set in application.properties */ @ConfigProperty(name = \"wss.password\") String password; @ConfigProperty(name = \"wss.user\") String user; @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { if (callbacks.length < 1) { throw new IllegalStateException(\"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. Got array of length \" + callbacks.length); } if (!(callbacks[0] instanceof WSPasswordCallback)) { throw new IllegalStateException( \"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. 
Got an instance of \" + callbacks[0].getClass().getName() + \" at possition 0\"); } final WSPasswordCallback pc = (WSPasswordCallback) callbacks[0]; if (user.equals(pc.getIdentifier())) { pc.setPassword(password); } else { throw new IllegalStateException(\"Unexpected user \" + user); } } }", "package io.quarkiverse.cxf.it.security.policy; import org.assertj.core.api.Assertions; import org.junit.jupiter.api.Test; import io.quarkiverse.cxf.annotation.CXFClient; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class UsernameTokenTest { @CXFClient(\"helloUsernameToken\") UsernameTokenPolicyHelloService helloUsernameToken; @Test void helloUsernameToken() { Assertions.assertThat(helloUsernameToken.hello(\"CXF\")).isEqualTo(\"Hello CXF from UsernameToken!\"); } }", "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Header> <wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" soap:mustUnderstand=\"1\"> <wsse:UsernameToken xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" wsu:Id=\"UsernameToken-bac4f255-147e-42a4-aeec-e0a3f5cd3587\"> <wsse:Username>cxf-user</wsse:Username> <wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">secret</wsse:Password> <wsse:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">3uX15dZT08jRWFWxyWmfhg==</wsse:Nonce> <wsu:Created>2024-10-02T17:32:10.497Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soap:Header> <soap:Body> <ns2:hello xmlns:ns2=\"http://policy.security.it.cxf.quarkiverse.io/\"> <arg0>CXF</arg0> </ns2:hello> </soap:Body> </soap:Envelope>", "export USDCAMEL_VAULT_AWS_ACCESS_KEY=accessKey export USDCAMEL_VAULT_AWS_SECRET_KEY=secretKey export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.accessKey = accessKey camel.vault.aws.secretKey = secretKey camel.vault.aws.region = region", "export USDCAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=true export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.defaultCredentialsProvider = true camel.vault.aws.region = region", "export USDCAMEL_VAULT_AWS_USE_PROFILE_CREDENTIALS_PROVIDER=true export USDCAMEL_VAULT_AWS_PROFILE_NAME=test-account export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.profileCredentialsProvider = true camel.vault.aws.profileName = test-account camel.vault.aws.region = region", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{aws:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{aws:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{aws:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{aws:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_GCP_SERVICE_ACCOUNT_KEY=file:////path/to/service.accountkey export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.serviceAccountKey = accessKey camel.vault.gcp.projectId = secretKey", "export USDCAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.useDefaultInstance = true 
camel.vault.aws.projectId = region", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{gcp:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{gcp:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{gcp:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{gcp:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_AZURE_TENANT_ID=tenantId export USDCAMEL_VAULT_AZURE_CLIENT_ID=clientId export USDCAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.tenantId = accessKey camel.vault.azure.clientId = clientId camel.vault.azure.clientSecret = clientSecret camel.vault.azure.vaultName = vaultName", "export USDCAMEL_VAULT_AZURE_IDENTITY_ENABLED=true export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.azureIdentityEnabled = true camel.vault.azure.vaultName = vaultName", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{azure:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{azure:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{azure:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{azure:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_HASHICORP_TOKEN=token export USDCAMEL_VAULT_HASHICORP_HOST=host export USDCAMEL_VAULT_HASHICORP_PORT=port export USDCAMEL_VAULT_HASHICORP_SCHEME=http/https", "camel.vault.hashicorp.token = token camel.vault.hashicorp.host = host camel.vault.hashicorp.port = port camel.vault.hashicorp.scheme = scheme", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username:admin}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route@2}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:route:default@2}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username:admin@2}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=accessKey export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.useDefaultCredentialProvider = true camel.vault.aws.region = 
region", "camel.vault.aws.refreshEnabled=true camel.vault.aws.refreshPeriod=60000 camel.vault.aws.secrets=Secret camel.main.context-reload-enabled = true", "{ \"source\": [\"aws.secretsmanager\"], \"detail-type\": [\"AWS API Call via CloudTrail\"], \"detail\": { \"eventSource\": [\"secretsmanager.amazonaws.com\"] } }", "{ \"Policy\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Id\\\":\\\"<queue_arn>/SQSDefaultPolicy\\\",\\\"Statement\\\":[{\\\"Sid\\\": \\\"EventsToMyQueue\\\", \\\"Effect\\\": \\\"Allow\\\", \\\"Principal\\\": {\\\"Service\\\": \\\"events.amazonaws.com\\\"}, \\\"Action\\\": \\\"sqs:SendMessage\\\", \\\"Resource\\\": \\\"<queue_arn>\\\", \\\"Condition\\\": {\\\"ArnEquals\\\": {\\\"aws:SourceArn\\\": \\\"<eventbridge_rule_arn>\\\"}}}]}\" }", "aws sqs set-queue-attributes --queue-url <queue_url> --attributes file://policy.json", "camel.vault.aws.refreshEnabled=true camel.vault.aws.refreshPeriod=60000 camel.vault.aws.secrets=Secret camel.main.context-reload-enabled = true camel.vault.aws.useSqsNotification=true camel.vault.aws.sqsQueueUrl=<queue_url>", "export USDCAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.useDefaultInstance = true camel.vault.aws.projectId = projectId", "camel.vault.gcp.projectId= projectId camel.vault.gcp.refreshEnabled=true camel.vault.gcp.refreshPeriod=60000 camel.vault.gcp.secrets=hello* camel.vault.gcp.subscriptionName=subscriptionName camel.main.context-reload-enabled = true", "export USDCAMEL_VAULT_AZURE_TENANT_ID=tenantId export USDCAMEL_VAULT_AZURE_CLIENT_ID=clientId export USDCAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.tenantId = accessKey camel.vault.azure.clientId = clientId camel.vault.azure.clientSecret = clientSecret camel.vault.azure.vaultName = vaultName", "export USDCAMEL_VAULT_AZURE_IDENTITY_ENABLED=true export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.azureIdentityEnabled = true camel.vault.azure.vaultName = vaultName", "camel.vault.azure.refreshEnabled=true camel.vault.azure.refreshPeriod=60000 camel.vault.azure.secrets=Secret camel.vault.azure.eventhubConnectionString=eventhub_conn_string camel.vault.azure.blobAccountName=blob_account_name camel.vault.azure.blobContainerName=blob_container_name camel.vault.azure.blobAccessKey=blob_access_key camel.main.context-reload-enabled = true" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html-single/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/%7BLinkCEQReference%7Dextensions-rest
3.12. Network Labels
3.12. Network Labels You can use network labels to simplify several administrative tasks associated with creating and administering logical networks and associating those logical networks with physical host network interfaces and bonds. A network label is a plain text, human readable label that you can attach to a logical network or a physical host network interface. Follow these rules when creating a label: There is no limit on the length of a label. You must use a combination of lowercase and uppercase letters, underscores and hyphens. You cannot use spaces or special characters. Attaching a label to a logical network or physical host network interface creates an association with other logical networks or physical host network interfaces to which the same label has been attached: Network Label Associations When you attach a label to a logical network, that logical network will be automatically associated with any physical host network interfaces with the given label. When you attach a label to a physical host network interface, any logical networks with the given label will be automatically associated with that physical host network interface. Changing the label attached to a logical network or physical host network interface is the same as removing a label and adding a new label. The association between related logical networks or physical host network interfaces is updated. Network Labels and Clusters When a labeled logical network is added to a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically added to that physical host network interface. When a labeled logical network is detached from a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically detached from that physical host network interface. Network Labels and Logical Networks With Roles When a labeled logical network is assigned to act as a display network or migration network, that logical network is then configured on the physical host network interface using DHCP so that the logical network can be assigned an IP address. Setting a label on a role network (for instance, "a migration network" or "a display network") causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP. This method of mass deployment was chosen over a method of typing in static addresses, because of the unscalable nature of the task of typing in many static IP addresses.
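Labels are normally attached in the Administration Portal, but for completeness, the sketch below shows one way to attach a label to a logical network through the REST API. The Manager host name, network ID, label name, and credentials are placeholder assumptions, and the endpoint should be checked against the REST API Guide for your version before use:

curl -k -u admin@internal:password \
  -H "Content-Type: application/xml" \
  -d "<network_label><id>mylabel</id></network_label>" \
  https://engine.example.com/ovirt-engine/api/networks/<network_id>/networklabels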
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/network_labels
Chapter 4. UserIdentityMapping [user.openshift.io/v1]
Chapter 4. UserIdentityMapping [user.openshift.io/v1] Description UserIdentityMapping maps a user to an identity Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources identity ObjectReference Identity is a reference to an identity kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 user ObjectReference User is a reference to a user 4.2. API endpoints The following API endpoints are available: /apis/user.openshift.io/v1/useridentitymappings POST : create a UserIdentityMapping /apis/user.openshift.io/v1/useridentitymappings/{name} DELETE : delete a UserIdentityMapping GET : read the specified UserIdentityMapping PATCH : partially update the specified UserIdentityMapping PUT : replace the specified UserIdentityMapping 4.2.1. /apis/user.openshift.io/v1/useridentitymappings Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a UserIdentityMapping Table 4.2. Body parameters Parameter Type Description body UserIdentityMapping schema Table 4.3. HTTP responses HTTP code Response body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 202 - Accepted UserIdentityMapping schema 401 - Unauthorized Empty 4.2.2. /apis/user.openshift.io/v1/useridentitymappings/{name} Table 4.4. Global path parameters Parameter Type Description name string name of the UserIdentityMapping Table 4.5. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a UserIdentityMapping Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.7. Body parameters Parameter Type Description body DeleteOptions schema Table 4.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified UserIdentityMapping Table 4.9. HTTP responses HTTP code Response body 200 - OK UserIdentityMapping schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified UserIdentityMapping Table 4.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.11. Body parameters Parameter Type Description body Patch schema Table 4.12. HTTP responses HTTP code Response body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified UserIdentityMapping Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . Table 4.14. Body parameters Parameter Type Description body UserIdentityMapping schema Table 4.15. HTTP responses HTTP code Response body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 401 - Unauthorized Empty
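As a minimal illustration of the read endpoint listed above, the request below is a sketch only; the API server address, bearer token, and mapping name are placeholder assumptions:

# Read a single UserIdentityMapping by name
curl -k -H "Authorization: Bearer <token>" \
  https://api.example.com:6443/apis/user.openshift.io/v1/useridentitymappings/<name>

A successful request returns 200 with a UserIdentityMapping object, and an unauthenticated request returns 401, matching the response tables above.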
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/user_and_group_apis/useridentitymapping-user-openshift-io-v1
Chapter 7. Configuring RBAC policies
Chapter 7. Configuring RBAC policies In Red Hat OpenStack Services on OpenShift (RHOSO) environments, administrators can use role-based access control (RBAC) policies in the Networking service (neutron) to control which projects are granted permission to attach instances to a network, and also access to other resources like QoS policies, security groups, address scopes, subnet pools, and address groups. Important Networking service RBAC is separate from secure role-based access control (SRBAC) that the Identity service (keystone) uses in RHOSO. 7.1. Creating RBAC policies This example procedure demonstrates how to use a Networking service (neutron) role-based access control (RBAC) policy to grant a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. View the list of available networks: USD openstack network list +--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-2cb18fecc923 192.168.200.0/24 | | bcc16b34-e33e-445b-9fde-dd491817a48a | private | 7fe4a05a-4b81-4a59-8c47-82c965b0e050 10.0.0.0/24 | | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public | 2318dc3b-cff0-43fc-9489-7d4cf48aaab9 172.24.4.224/28 | +--------------------------------------+-------------+-------------------------------------------------------+ View the list of projects: USD openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+ Create a RBAC entry for the web-servers network that grants access to the auditors project ( 4b0b98f8c6c040f38ba4f7146e8680f5 ): USD openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers Sample output +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+ As a result, users in the auditors project can connect instances to the web-servers network. 7.2. 
Reviewing RBAC policies This example procedure demonstrates how to obtain information about a Networking service (neutron) role-based access control (RBAC) policy used to grant a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies: USD openstack network rbac list Sample output +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+ Run the openstack network rbac show command to view the details of a specific RBAC entry: USD openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709 Sample output +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+ 7.3. Deleting RBAC policies This example procedure demonstrates how to remove a Networking service (neutron) role-based access control (RBAC) policy that grants a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies: Run the openstack network rbac delete command to delete the RBAC policy, using the ID of the RBAC policy that you want to delete: 7.4. Granting RBAC policy access for external networks In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can use a Networking service (neutron) role-based access control (RBAC) policy to grant a project access to external networks, that is, networks with gateway interfaces attached.
In the following example, a RBAC policy is created for the web-servers network and access is granted to the engineering project, c717f263785d4679b16a122516247deb : Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a new RBAC policy using the --action access_as_external option: USD openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_external web-servers Sample output Created a new rbac_policy: +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_external | | id | ddef112a-c092-4ac1-8914-c714a3d3ba08 | | object_id | 6e437ff0-d20f-4483-b627-c3749399bdca | | object_type | network | | target_project | c717f263785d4679b16a122516247deb | | project_id | c717f263785d4679b16a122516247deb | +----------------+--------------------------------------+ As a result, users in the engineering project are able to view the network or connect instances to it: USD openstack network list +--------------------------------------+-------------+------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+------------------------------------------------------+ | 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-57eaeac9b904 192.168.10.0/24 | +--------------------------------------+-------------+------------------------------------------------------+ Exit the openstackclient pod: USD exit
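The same workflow shown in this chapter applies to the other resource types that support RBAC policies, such as QoS policies and security groups. As an illustrative sketch only, the project ID below reuses the auditors project from the earlier example, while the QoS policy name is a placeholder assumption:

openstack network rbac create --type qos_policy --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared my-qos-policy

Note that the access_as_external action applies only to networks; the other object types are shared with access_as_shared.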
[ "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack network list", "+--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-2cb18fecc923 192.168.200.0/24 | | bcc16b34-e33e-445b-9fde-dd491817a48a | private | 7fe4a05a-4b81-4a59-8c47-82c965b0e050 10.0.0.0/24 | | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public | 2318dc3b-cff0-43fc-9489-7d4cf48aaab9 172.24.4.224/28 | +--------------------------------------+-------------+-------------------------------------------------------+", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers", "+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack network rbac list", "+--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+", "openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709", "+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | 
+--------------------------------------+-------------+--------------------------------------+", "openstack network rbac delete 314004d0-2261-4d5e-bda7-0181fcf40709 Deleted rbac_policy: 314004d0-2261-4d5e-bda7-0181fcf40709", "oc rsh -n openstack openstackclient", "openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_external web-servers", "+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_external | | id | ddef112a-c092-4ac1-8914-c714a3d3ba08 | | object_id | 6e437ff0-d20f-4483-b627-c3749399bdca | | object_type | network | | target_project | c717f263785d4679b16a122516247deb | | project_id | c717f263785d4679b16a122516247deb | +----------------+--------------------------------------+", "openstack network list", "+--------------------------------------+-------------+------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+------------------------------------------------------+ | 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-57eaeac9b904 192.168.10.0/24 | +--------------------------------------+-------------+------------------------------------------------------+", "exit" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_networking_services/config-rbac-policies_rhoso-cfgnet
Chapter 8. The rbd kernel module
Chapter 8. The rbd kernel module As a storage administrator, you can access Ceph block devices through the rbd kernel module. You can map and unmap a block device, and display those mappings. Also, you can get a list of images through the rbd kernel module. Important Kernel clients on Linux distributions other than Red Hat Enterprise Linux (RHEL) are permitted but not supported. If issues are found in the storage cluster when using these kernel clients, Red Hat will address them, but if the root cause is found to be on the kernel client side, the issue will have to be addressed by the software vendor. Prerequisites A running Red Hat Ceph Storage cluster. 8.1. Create a Ceph Block Device and use it from a Linux kernel module client As a storage administrator, you can create a Ceph Block Device for a Linux kernel module client in the Red Hat Ceph Storage Dashboard. As a system administrator, you can map that block device on a Linux client, and partition, format, and mount it, using the command line. After this, you can read and write files to it. Prerequisites A running Red Hat Ceph Storage cluster. A Red Hat Enterprise Linux client. 8.1.1. Creating a Ceph block device for a Linux kernel module client using dashboard You can create a Ceph block device specifically for a Linux kernel module client using the dashboard web interface by enabling only the features it supports. The kernel module client supports features like Deep flatten, Layering, Exclusive lock, Object map, and Fast diff. Object map, Fast diff, and Deep flatten features require Red Hat Enterprise Linux 8.2 and later. Prerequisites A running Red Hat Ceph Storage cluster. A replicated RBD pool created and enabled. Procedure From the Block drop-down menu, select Images . Click Create . In the Create RBD window, enter an image name, select the RBD enabled pool, select the supported features: Click Create RBD . Verification You will get a notification that the image is created successfully. Additional Resources For more information, see Map and mount a Ceph Block Device on Linux using the command line in the Red Hat Ceph Storage Block Device Guide . For more information, see the Red Hat Ceph Storage Dashboard Guide . 8.1.2. Map and mount a Ceph Block Device on Linux using the command line You can map a Ceph Block Device from a Red Hat Enterprise Linux client using the Linux rbd kernel module. After mapping it, you can partition, format, and mount it, so you can write files to it. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph block device for a Linux kernel module client using the dashboard is created. A Red Hat Enterprise Linux client. Procedure On the Red Hat Enterprise Linux client node, enable the Red Hat Ceph Storage 6 Tools repository: Install the ceph-common RPM package: Copy the Ceph configuration file from a Monitor node to the Client node: Syntax Example Copy the key file from a Monitor node to the Client node: Syntax Example Map the image: Syntax Example Create a partition table on the block device: Syntax Example Create a partition for an XFS file system: Syntax Example Format the partition: Syntax Example Create a directory to mount the new file system on: Syntax Example Mount the file system: Syntax Example Verify that the file system is mounted and showing the correct size: Syntax Example Additional Resources For more information, see Creating a Ceph Block Device for a Linux kernel module client using Dashboard . For more information, see Managing file systems for Red Hat Enterprise Linux 8.
For more information, see Storage Administration Guide for Red Hat Enterprise Linux 7. 8.2. Mapping a block device Use rbd to map an image name to a kernel module. You must specify the image name, the pool name, and the user name. rbd will load the RBD kernel module if it is not already loaded. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Return a list of the images: Example The following are the two options to map the image: Map an image name to a kernel module: Syntax Example Specify a secret when using cephx authentication by either the keyring or a file containing the secret: Syntax or 8.3. Displaying mapped block devices You can display which block device images are mapped to the kernel module with the rbd command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Display the mapped block devices: 8.4. Unmapping a block device You can unmap a block device image with the rbd command, by using the unmap option and providing the device name. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. An image that is mapped. Procedure Get the specification of the device. Example Unmap the block device image: Syntax Example 8.5. Segregating images within isolated namespaces within the same pool When using Ceph Block Devices directly without a higher-level system, such as OpenStack or OpenShift Container Storage, it was not possible to restrict user access to specific block device images. When combined with CephX capabilities, users can be restricted to specific pool namespaces to restrict access to the images. You can use RADOS namespaces, a new level of identity to identify an object, to provide isolation between rados clients within a pool. For example, a client can only have full permissions on a namespace specific to them. This makes using a different RADOS client for each tenant feasible, which is particularly useful for a block device where many different tenants are accessing their own block device images. You can segregate block device images within isolated namespaces within the same pool. Prerequisites A running Red Hat Ceph Storage cluster. The kernel upgraded to version 4.x, and librbd and librados upgraded, on all clients. Root-level access to the monitor and client nodes. Procedure Create an rbd pool: Syntax Example Associate the rbd pool with the RBD application: Syntax Example Initialize the pool with the RBD application: Syntax Example Create two namespaces: Syntax Example Provide access to the namespaces for two users: Syntax Example Get the key of the clients: Syntax Example Create the block device images and use the pre-defined namespace within a pool: Syntax Example Optional: Get the details of the namespace and the associated image: Syntax Example Copy the Ceph configuration file from the Ceph Monitor node to the client node: Example Copy the admin keyring from the Ceph Monitor node to the client node: Syntax Example Copy the keyrings of the users from the Ceph Monitor node to the client node: Syntax Example Map the block device image: Syntax Example This does not allow access to users in the other namespaces in the same pool. Example Verify the device: Example
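To make the namespace workflow easier to follow, the commands from this procedure can be condensed into one sequence, as in the following sketch. It uses the default rbd pool and the namespace, user, and image names from the examples in this section; run the final map command on the client node after copying the Ceph configuration file and the user keyring there.
rbd namespace create --namespace namespace1
ceph auth get-or-create client.testuser mon 'profile rbd' osd 'profile rbd pool=rbd namespace=namespace1' -o /etc/ceph/client.testuser.keyring
rbd create --namespace namespace1 image01 --size 1G
rbd --namespace namespace1 ls --long
rbd map --namespace namespace1 image01 -n client.testuser --keyring=/etc/ceph/client.testuser.keyring
Because the client.testuser capabilities name only namespace1, attempts by that user to map images in other namespaces of the same pool fail with Operation not permitted, as shown in the example output.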
[ "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-common", "scp root@ MONITOR_NODE :/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "scp root@cluster1-node2:/etc/ceph/ceph.conf /etc/ceph/ceph.conf [email protected]'s password: ceph.conf 100% 497 724.9KB/s 00:00", "scp root@ MONITOR_NODE :/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring", "scp root@cluster1-node2:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring [email protected]'s password: ceph.client.admin.keyring 100% 151 265.0KB/s 00:00", "rbd map --pool POOL_NAME IMAGE_NAME --id admin", "rbd map --pool block-device-pool image1 --id admin /dev/rbd0", "parted /dev/ MAPPED_BLOCK_DEVICE mklabel msdos", "parted /dev/rbd0 mklabel msdos Information: You may need to update /etc/fstab.", "parted /dev/ MAPPED_BLOCK_DEVICE mkpart primary xfs 0% 100%", "parted /dev/rbd0 mkpart primary xfs 0% 100% Information: You may need to update /etc/fstab.", "mkfs.xfs /dev/ MAPPED_BLOCK_DEVICE_WITH_PARTITION_NUMBER", "mkfs.xfs /dev/rbd0p1 meta-data=/dev/rbd0p1 isize=512 agcount=16, agsize=163824 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=0 = reflink=1 data = bsize=4096 blocks=2621184, imaxpct=25 = sunit=16 swidth=16 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=16 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0", "mkdir PATH_TO_DIRECTORY", "mkdir /mnt/ceph", "mount /dev/ MAPPED_BLOCK_DEVICE_WITH_PARTITION_NUMBER PATH_TO_DIRECTORY", "mount /dev/rbd0p1 /mnt/ceph/", "df -h PATH_TO_DIRECTORY", "df -h /mnt/ceph/ Filesystem Size Used Avail Use% Mounted on /dev/rbd0p1 10G 105M 9.9G 2% /mnt/ceph", "rbd list", "rbd device map POOL_NAME / IMAGE_NAME --id USER_NAME", "rbd device map rbd/myimage --id admin", "rbd device map POOL_NAME / IMAGE_NAME --id USER_NAME --keyring PATH_TO_KEYRING", "rbd device map POOL_NAME / IMAGE_NAME --id USER_NAME --keyfile PATH_TO_FILE", "rbd device list", "rbd device list", "rbd device unmap /dev/rbd/ POOL_NAME / IMAGE_NAME", "rbd device unmap /dev/rbd/pool1/image1", "ceph osd pool create POOL_NAME PG_NUM", "ceph osd pool create mypool 100 pool 'mypool' created", "ceph osd pool application enable POOL_NAME rbd", "ceph osd pool application enable mypool rbd enabled application 'rbd' on pool 'mypool'", "rbd pool init -p POOL_NAME", "rbd pool init -p mypool", "rbd namespace create --namespace NAMESPACE", "rbd namespace create --namespace namespace1 rbd namespace create --namespace namespace2 rbd namespace ls --format=json [{\"name\":\"namespace2\"},{\"name\":\"namespace1\"}]", "ceph auth get-or-create client. USER_NAME mon 'profile rbd' osd 'profile rbd pool=rbd namespace= NAMESPACE ' -o /etc/ceph/client. USER_NAME .keyring", "ceph auth get-or-create client.testuser mon 'profile rbd' osd 'profile rbd pool=rbd namespace=namespace1' -o /etc/ceph/client.testuser.keyring ceph auth get-or-create client.newuser mon 'profile rbd' osd 'profile rbd pool=rbd namespace=namespace2' -o /etc/ceph/client.newuser.keyring", "ceph auth get client. 
USER_NAME", "ceph auth get client.testuser [client.testuser] key = AQDMp61hBf5UKRAAgjQ2In0Z3uwAase7mrlKnQ== caps mon = \"profile rbd\" caps osd = \"profile rbd pool=rbd namespace=namespace1\" exported keyring for client.testuser ceph auth get client.newuser [client.newuser] key = AQDfp61hVfLFHRAA7D80ogmZl80ROY+AUG4A+Q== caps mon = \"profile rbd\" caps osd = \"profile rbd pool=rbd namespace=namespace2\" exported keyring for client.newuser", "rbd create --namespace NAMESPACE IMAGE_NAME --size SIZE_IN_GB", "rbd create --namespace namespace1 image01 --size 1G rbd create --namespace namespace2 image02 --size 1G", "rbd --namespace NAMESPACE ls --long", "rbd --namespace namespace1 ls --long NAME SIZE PARENT FMT PROT LOCK image01 1 GiB 2 rbd --namespace namespace2 ls --long NAME SIZE PARENT FMT PROT LOCK image02 1 GiB 2", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE :/etc/ceph/", "scp /etc/ceph/ceph.conf root@host02:/etc/ceph/ root@host02's password: ceph.conf 100% 497 724.9KB/s 00:00", "scp /etc/ceph/ceph.client.admin.keyring root@ CLIENT_NODE :/etc/ceph", "scp /etc/ceph/ceph.client.admin.keyring root@host02:/etc/ceph/ root@host02's password: ceph.client.admin.keyring 100% 151 265.0KB/s 00:00", "scp /etc/ceph/ceph.client. USER_NAME .keyring root@ CLIENT_NODE :/etc/ceph/", "scp /etc/ceph/client.newuser.keyring root@host02:/etc/ceph/ scp /etc/ceph/client.testuser.keyring root@host02:/etc/ceph/", "rbd map --name NAMESPACE IMAGE_NAME -n client. USER_NAME --keyring /etc/ceph/client. USER_NAME .keyring", "rbd map --namespace namespace1 image01 -n client.testuser --keyring=/etc/ceph/client.testuser.keyring /dev/rbd0 rbd map --namespace namespace2 image02 -n client.newuser --keyring=/etc/ceph/client.newuser.keyring /dev/rbd1", "rbd map --namespace namespace2 image02 -n client.testuser --keyring=/etc/ceph/client.testuser.keyring rbd: warning: image already mapped as /dev/rbd1 rbd: sysfs write failed rbd: error asserting namespace: (1) Operation not permitted In some cases useful info is found in syslog - try \"dmesg | tail\". 2021-12-06 02:49:08.106 7f8d4fde2500 -1 librbd::api::Namespace: exists: error asserting namespace: (1) Operation not permitted rbd: map failed: (1) Operation not permitted rbd map --namespace namespace1 image01 -n client.newuser --keyring=/etc/ceph/client.newuser.keyring rbd: warning: image already mapped as /dev/rbd0 rbd: sysfs write failed rbd: error asserting namespace: (1) Operation not permitted In some cases useful info is found in syslog - try \"dmesg | tail\". 2021-12-03 12:16:24.011 7fcad776a040 -1 librbd::api::Namespace: exists: error asserting namespace: (1) Operation not permitted rbd: map failed: (1) Operation not permitted", "rbd showmapped id pool namespace image snap device 0 rbd namespace1 image01 - /dev/rbd0 1 rbd namespace2 image02 - /dev/rbd1" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/block_device_guide/the-rbd-kernel-module
Chapter 8. Fuse Credential Store
Chapter 8. Fuse Credential Store 8.1. Overview The Fuse Credential Store feature allows you to include passwords and other sensitive strings as masked strings. These strings are resolved from a JBoss EAP Elytron Credential store . The Credential store has built-in support for the OSGi environment, specifically for Apache Karaf, and for Java system properties. You might have specified passwords, for example javax.net.ssl.keyStorePassword , as system properties in clear text; this project allows you to specify these values as references to a credential store. Fuse Credential Store allows you to specify the sensitive strings as references to a value stored in a Credential Store. The clear text value is replaced with an alias reference, for example CS:alias referencing the value stored under the alias in a configured Credential Store. The convention CS:alias should be followed. The CS: in the Java system property value is a prefix, and the alias following it is used for looking up the value. 8.2. Prerequisites The Karaf container is running. 8.3. Setup Fuse Credential Store on Karaf Create a credential store using the credential-store:create command: This should create the file credential.store , which is a JCEKS KeyStore for storing the secrets. Exit the Karaf container: Set the environment variables presented when creating the credential store: Important You are required to set the CREDENTIAL_STORE_* environment variables before starting the Karaf container. Start the Karaf container: Add your secrets to the credential store by using credential-store:store : Exit the Karaf container again: Run the Karaf container again, specifying the reference to your secret instead of the value: The value of javax.net.ssl.keyStorePassword when accessed using System::getProperty should contain the string "alias is set" . Note The EXTRA_JAVA_OPTS is one of the many ways to specify system properties. These system properties are defined at the start of the Karaf container. Important If the environment variables are leaked outside of your environment or intended use, along with the content of the credential store file, your secrets are compromised. The value of the property when accessed through JMX gets replaced with the string "<sensitive>" , but there are many code paths that lead to System::getProperty ; for instance, diagnostics or monitoring tools might access it, along with any third-party software used for debugging purposes.
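Condensed, the end-to-end flow from the steps above looks roughly like this. The store location, algorithm, alias, and placeholder secret are the ones used in this chapter's examples; the two protection values are whatever your own credential-store:create run prints.
# In the Karaf shell: create the store, note the environment variables it prints, then log out
credential-store:create -a location=credential.store -k password="my password" -k algorithm=masked-MD5-DES
logout
# In the host shell: export the printed variables before starting Karaf again
export CREDENTIAL_STORE_PROTECTION_ALGORITHM=masked-MD5-DES
export CREDENTIAL_STORE_PROTECTION_PARAMS=<value printed by credential-store:create>
export CREDENTIAL_STORE_PROTECTION=<value printed by credential-store:create>
export CREDENTIAL_STORE_ATTR_location=credential.store
bin/karaf
# In the Karaf shell again: store the secret under an alias, log out, then restart with a CS: reference
credential-store:store -a javax.net.ssl.keyStorePassword -s "alias is set"
logout
EXTRA_JAVA_OPTS="-Djavax.net.ssl.keyStorePassword=CS:javax.net.ssl.keyStorePassword" bin/karaf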
[ "karaf@root()> credential-store:create -a location=credential.store -k password=\"my password\" -k algorithm=masked-MD5-DES In order to use this credential store set the following environment variables Variable | Value ------------------------------------------------------------------------------------------------------------------------ CREDENTIAL_STORE_PROTECTION_ALGORITHM | masked-MD5-DES CREDENTIAL_STORE_PROTECTION_PARAMS | MDkEKXNvbWVhcmJpdHJhcnljcmF6eXN0cmluZ3RoYXRkb2Vzbm90bWF0dGVyAgID6AQIsUOEqvog6XI= CREDENTIAL_STORE_PROTECTION | Sf6sYy7gNpygs311zcQh8Q== CREDENTIAL_STORE_ATTR_location | credential.store Or simply use this: export CREDENTIAL_STORE_PROTECTION_ALGORITHM=masked-MD5-DES export CREDENTIAL_STORE_PROTECTION_PARAMS=MDkEKXNvbWVhcmJpdHJhcnljcmF6eXN0cmluZ3RoYXRkb2Vzbm90bWF0dGVyAgID6AQIsUOEqvog6XI= export CREDENTIAL_STORE_PROTECTION=Sf6sYy7gNpygs311zcQh8Q== export CREDENTIAL_STORE_ATTR_location=credential.store", "karaf@root()> logout", "export CREDENTIAL_STORE_PROTECTION_ALGORITHM=masked-MD5-DES export CREDENTIAL_STORE_PROTECTION_PARAMS=MDkEKXNvbWVhcmJpdHJhcnljcmF6eXN0cmluZ3RoYXRkb2Vzbm90bWF0dGVyAgID6AQIsUOEqvog6XI= export CREDENTIAL_STORE_PROTECTION=Sf6sYy7gNpygs311zcQh8Q== export CREDENTIAL_STORE_ATTR_location=credential.store", "bin/karaf", "karaf@root()> credential-store:store -a javax.net.ssl.keyStorePassword -s \"alias is set\" Value stored in the credential store to reference it use: CS:javax.net.ssl.keyStorePassword", "karaf@root()> logout", "EXTRA_JAVA_OPTS=\"-Djavax.net.ssl.keyStorePassword=CS:javax.net.ssl.keyStorePassword\" bin/karaf" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/credentialstore
3.6. Image Builder blueprint format
3.6. Image Builder blueprint format Image Builder blueprints are stored as plain text in the Tom's Obvious, Minimal Language (TOML) format. The elements of a typical blueprint file include: The blueprint metadata Replace BLUEPRINT-NAME and LONG FORM DESCRIPTION TEXT with a name and description for your blueprint. Replace VERSION with a version number according to the Semantic Versioning scheme. This part is present only once for the whole blueprint file. The modules entry describes the package names and matching version globs to be installed into the image, and the groups entry describes a group of packages to be installed into the image. If you do not add these items, the blueprint identifies them as empty lists. Packages included in the image Replace package-name with the name of the package, such as httpd , gdb-doc , or coreutils . Replace package-version with a version to use. This field supports dnf version specifications: For a specific version, use the exact version number such as 7.30 . For the latest available version, use the asterisk * . For the latest minor version, use a format such as 7.* . Repeat this block for every package to be included.
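For illustration, the metadata block and a packages block combine into one complete blueprint such as the following. The blueprint name and the httpd package pin here are placeholders for illustration, not part of the original example.
name = "example-httpd-server"
description = "An example blueprint that pins httpd to the latest 2.4 minor version"
version = "0.0.1"
modules = []
groups = []

[[packages]]
name = "httpd"
version = "2.4.*"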
[ "name = \" BLUEPRINT-NAME \" description = \" LONGER BLUEPRINT DESCRIPTION \" version = \" VERSION \" modules = [] groups = []", "[[packages]] name = \" package-name \" version = \" package-version \"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/sect-Documentation-Image_Builder-Test_Chapter3-Test_Section_6
Chapter 5. Override Ceph behavior
Chapter 5. Override Ceph behavior As a storage administrator, you need to understand how to use overrides for the Red Hat Ceph Storage cluster to change Ceph options during runtime. 5.1. Prerequisites A running Red Hat Ceph Storage cluster. 5.2. Setting and unsetting Ceph override options You can set and unset Ceph options to override Ceph's default behavior. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To override Ceph's default behavior, use the ceph osd set command and the behavior you wish to override: Syntax Once you set the behavior, ceph health will reflect the override(s) that you have set for the cluster. Example To cease overriding Ceph's default behavior, use the ceph osd unset command and the override you wish to cease. Syntax Example Flag Description noin Prevents OSDs from being treated as in the cluster. noout Prevents OSDs from being treated as out of the cluster. noup Prevents OSDs from being treated as up and running. nodown Prevents OSDs from being treated as down . full Makes a cluster appear to have reached its full_ratio , and thereby prevents write operations. pause Ceph will stop processing read and write operations, but will not affect OSD in , out , up or down statuses. nobackfill Ceph will prevent new backfill operations. norebalance Ceph will prevent new rebalancing operations. norecover Ceph will prevent new recovery operations. noscrub Ceph will prevent new scrubbing operations. nodeep-scrub Ceph will prevent new deep scrubbing operations. notieragent Ceph will disable the process that is looking for cold/dirty objects to flush and evict. 5.3. Ceph override use cases noin : Commonly used with noout to address flapping OSDs. noout : If the mon osd report timeout is exceeded and an OSD has not reported to the monitor, the OSD will get marked out . If this happens erroneously, you can set noout to prevent the OSD(s) from getting marked out while you troubleshoot the issue. noup : Commonly used with nodown to address flapping OSDs. nodown : Networking issues may interrupt Ceph 'heartbeat' processes, and an OSD may be up but still get marked down. You can set nodown to prevent OSDs from getting marked down while troubleshooting the issue. full : If a cluster is reaching its full_ratio , you can pre-emptively set the cluster to full and expand capacity. Note Setting the cluster to full will prevent write operations. pause : If you need to troubleshoot a running Ceph cluster without clients reading and writing data, you can set the cluster to pause to prevent client operations. nobackfill : If you need to take an OSD or node down temporarily, for example, when upgrading daemons, you can set nobackfill so that Ceph will not backfill while the OSDs are down . norecover : If you need to replace an OSD disk and don't want the PGs to recover to another OSD while you are hotswapping disks, you can set norecover to prevent the other OSDs from copying a new set of PGs to other OSDs. noscrub and nodeep-scrub : If you want to prevent scrubbing, for example, to reduce overhead during high loads, recovery, backfilling, and rebalancing, you can set noscrub and/or nodeep-scrub to prevent the cluster from scrubbing OSDs. notieragent : If you want to stop the tier agent process from finding cold objects to flush to the backing storage tier, you may set notieragent .
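For example, a common maintenance pattern based on the flags described above is to prevent OSDs from being marked out and to suppress rebalancing before taking OSDs down, and then to remove the overrides afterward. This is only a sketch; choose the flags that match your situation.
# Before maintenance: keep OSDs from being marked out and stop data movement
ceph osd set noout
ceph osd set norebalance
ceph health    # the overrides you set are reflected in the health output
# ... perform the maintenance, for example, upgrade daemons or reboot a node ...
# After maintenance: remove the overrides so normal operation resumes
ceph osd unset norebalance
ceph osd unset noout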
[ "ceph osd set FLAG", "ceph osd set noout", "ceph osd unset FLAG", "ceph osd unset noout" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/administration_guide/override-ceph-behavior
14.2.2. Starting an OpenSSH Server
14.2.2. Starting an OpenSSH Server In order to run an OpenSSH server, you must have the openssh-server package installed (see Section 8.2.4, "Installing Packages" for more information on how to install new packages in Red Hat Enterprise Linux 6). To start the sshd daemon, type the following at a shell prompt: To stop the running sshd daemon, use the following command: If you want the daemon to start automatically at boot time, type: This will enable the service for runlevels 2, 3, 4, and 5. For more configuration options, see Chapter 12, Services and Daemons for detailed information on how to manage services. Note that if you reinstall the system, a new set of identification keys will be created. As a result, clients who had connected to the system with any of the OpenSSH tools before the reinstall will see the following message: To prevent this, you can back up the relevant files from the /etc/ssh/ directory (see Table 14.1, "System-wide configuration files" for a complete list), and restore them whenever you reinstall the system.
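As a minimal sketch of that backup, the host key files alone can be preserved as follows; the backup directory is arbitrary, and Table 14.1 lists the other files you may also want to keep.
~]# mkdir /root/ssh-backup
~]# cp -a /etc/ssh/ssh_host_*key* /root/ssh-backup/
# ... reinstall the system and the openssh-server package, then restore the keys ...
~]# cp -a /root/ssh-backup/ssh_host_*key* /etc/ssh/
~]# service sshd restart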
[ "~]# service sshd start", "~]# service sshd stop", "~]# chkconfig sshd on", "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that the RSA host key has just been changed." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-ssh-configuration-sshd
Chapter 1. Finding, Running, and Building Containers with podman, skopeo, and buildah
Chapter 1. Finding, Running, and Building Containers with podman, skopeo, and buildah 1.1. Overview Red Hat Enterprise Linux offers a set of container tools to work directly with Linux containers and container images that require no container engine or docker commands or services. These tools include: podman : The podman command can run and manage containers and container images. It supports the same features and command options you find in the docker command, with the main differences being that podman doesn't require the docker service or any other active container engine for the command to work. Also, podman stores its data in the same directory structure used by Buildah, Skopeo, and CRI-O, which will allow podman to eventually work with containers being actively managed by CRI-O in OpenShift. Podman has a lot of advanced features, such as the support for running containers in Pods. It fully integrates with systemd, including the ability to generate unit files from containers and run systemd within a container. Podman also offers User Namespace support, including running containers without requiring root. skopeo : The skopeo command is a tool for copying containers and images between different types of container storage. It can copy containers from one container registry to another. It can copy images to and from a host, as well as to other container environments and registries. Skopeo can inspect images from container image registries, get images and image layers, and use signatures to create and verify images. buildah : The buildah command allows you to build container images either from the command line or using Dockerfiles. These images can then be pushed to any container registry and can be used by any container engine, including Podman, CRI-O, and Docker. The buildah command can be used as a separate command, but is incorporated into other tools as well. For example, the podman build command uses buildah code to build container images. Buildah is also often used to securely build containers while running inside of a locked-down container by a tool like Podman, OpenShift/Kubernetes, or Docker. OCI Runtimes : runc : The runc command can be used to start up OCI containers. The following sections describe how to set up and use podman , runc , skopeo , and buildah . 1.2. Running containers as root or rootless Running the container tools described in this chapter as a user with superuser privilege (root user) is the best way to ensure that your containers have full access to any feature available on your system. However, with a new feature called "Rootless Containers," available now as a Technology Preview, you can work with containers as a regular user. Although container engines, such as Docker, let you run docker commands as a regular (non-root) user, the docker daemon that carries out those requests runs as root. So, effectively, regular users can make requests through their containers that harm the system, without there being clarity about who made those requests. By setting up rootless container users, system administrators limit potentially damaging container activities from regular users, while still allowing those users to safely run many container features under their own accounts. This section describes how to set up your system to use container tools (Podman, Skopeo, and Buildah) to work with containers as a non-root user (rootless).
It also describes some of the limitations you will encounter because regular user accounts don't have full access to all operating system features that their containers might need to run. 1.2.1. Set up for rootless containers You need to become the root user to set up your RHEL system to allow non-root user accounts to use container tools such as podman, skopeo, and buildah, as follows: Install RHEL 7.7 : Install or upgrade to RHEL 7.7. Earlier RHEL 7 versions are missing features needed for this procedure. If you are upgrading to RHEL 7.7, continue to "Upgrade to rootless containers" after this procedure is done. Install slirp4netns : Install the slirp4netns package (and also podman, just to get you started): Increase user namespaces : To increase the number of user namespaces in the kernel, type the following: Create the new user account : To create a new user account and add a password for that account (for example, joe), type the following: The user is automatically configured to be able to use rootless podman. Try a podman command : Log in directly as the user you just configured (don't use su or su - to become that user because that doesn't set the correct environment variables) and try to pull and run an image: Check rootless configuration : To check that your rootless configuration is set up properly, you can run commands inside the modified user namespace with the podman unshare command. As the rootless user, the following command lets you see how the UIDs are assigned to the user namespace: 1.2.2. Upgrade to rootless containers If you have upgraded from RHEL 7.6, you must configure subuid and subgid values manually for any existing user you want to be able to use rootless podman. Using an existing user name and group name (for example, jill), set the range of accessible user and group IDs that can be used for their containers. Here are a couple of warnings: Don't include the rootless user's UID and GID in these ranges If you set multiple rootless container users, use unique ranges for each user We recommend 65536 UIDs and GIDs for maximum compatibility with existing container images, but the number can be reduced Never use UIDs or GIDs under 1000 or reuse UIDs or GIDs from existing user accounts (which, by default, start at 1000) Here is an example: The user/group jill is now allocated 65536 user and group IDs, ranging from 200000 to 265535. That user should be able to begin running commands to work with containers now. 1.2.3. Special considerations for rootless Here are some things to consider when running containers as a non-root user: As a non-root container user, container images are stored under your home directory ( USDHOME/.local/share/containers/storage/ ), instead of /var/lib/containers . Users running rootless containers are given special permission to run as a range of user and group IDs on the host system. However, they otherwise have no root privileges to the operating system on the host. If you need to configure your rootless container environment, edit configuration files in your home directory ( USDHOME/.config/containers ). Configuration files include storage.conf (for configuring storage) and libpod.conf (for a variety of container settings). You could also create a registries.conf file to identify container registries available when you run podman pull or podman run . For RHEL 7, rootless containers are limited to VFS storage. VFS storage does not support deduplication.
So, for example, if you have a 1GB image, then starting a container will result in copying that 1GB again for the container. Starting another container from that image will result in another 1GB of space being used. This limitation is planned to be addressed in future releases by backporting fuse-overlay to the RHEL 7 kernel. A container running as root in a rootless account can turn on privileged features within its own namespace. But that doesn't provide any special privileges to access protected features on the host (beyond having extra UIDs and GIDs). Here are examples of container actions you might expect to work from a rootless account that will not work: Anything you want to access from a mounted directory from the host must be accessible by the UID running your container or your request to access that component will fail. There are some system features you won't be able to change without privilege. For example, you cannot change the system clock by simply setting a SYS_TIME capability inside a container and running the network time service (ntpd). You would have to run that container as root, bypassing your rootless container environment and using the root user's environment, for that capability to work, such as: Note that this example allows ntpd to adjust time for the entire system, and not just within the container. A rootless container has no ability to access a port less than 1024. Inside the rootless container's namespace it can, for example, start a service that exposes port 80 from an httpd service from the container: However, a container would need root privilege, again using the root user's container environment, to expose that port to the host system: An on-going list of shortcomings of running podman and related tools without root privilege is contained in Shortcomings of Rootless Podman . 1.3. Using podman to work with containers The podman command lets you run containers as standalone entities, without requiring that Kubernetes, the Docker runtime, or any other container runtime be involved. It is a tool that can act as a replacement for the docker command, implementing the same command-line syntax, while it adds even more container management features. The podman features include: Based on docker interface : Because podman syntax mirrors the docker command, transitioning to podman should be easy for those familiar with docker . Managing containers and images : Both Docker- and OCI-compatible container images can be used with podman to: Run, stop and restart containers Create and manage container images (push, commit, configure, build, and so on) Working with no runtime : No runtime environment is used by podman to work with containers. Here are a few implementation features of podman you should know about: Podman uses the CRI-O back-end store directory, /var/lib/containers , instead of using the Docker storage location ( /var/lib/docker ), by default. Although podman and CRI-O share the same storage directory, they cannot interact with each other's containers. (Eventually the two features will be able to share containers.) The podman command, like the docker command, can build container images from a Dockerfile. The podman command can be a useful troubleshooting tool when the docker service is unavailable. 
Options to the docker command that are not supported by podman include container, events, image, network, node, plugin ( podman does not support plugins), port, rename (use rm and create to rename container with podman ), secret, service, stack, swarm ( podman does not support Docker Swarm), system, and volume (for podman , create volumes on the host, then mount in a container). The container and image options are used to run subcommands that are used directly in podman . The following features are currently in development for podman : To interact programmatically with podman , a remote API for Podman is being developed using a technology called varlink . This will let podman listen for API requests from remote tools (such as Cockpit or the atomic command) and respond to them. A feature in development will allow podman to run and manage a Pod (which may consist of multiple containers and some metadata) without Kubernetes or OpenShift being active. (However, podman is not expected to do some of Kubernetes' more advanced features, such as scheduling pods across clusters). Note The podman command is considered to be technology preview for RHEL and RHEL Atomic 7.5.1. 1.3.1. Installing podman To start using podman to work with containers, you can simply install it on a Red Hat Enterprise Linux server system or try it on a RHEL Atomic Host ( podman is preinstalled on RHEL Atomic Host 7.5.1 or later). No container runtime is needed to use podman . To install podman on a RHEL server system, do the following: 1.3.2. Running containers with podman If you are used to using the docker command to work with containers, you will find most of the features and options match those of podman . Table 1 shows a list of commands you can use with podman (type podman -h to see this list): Table 1.1. Commands supported by podman podman command Description podman command Description attach Attach to a running container commit Create new image from changed container build Build an image using Dockerfile instructions create Create, but do not start, a container diff Inspect changes on container's filesystems exec Run a process in a running container export Export container's filesystem contents as a tar archive help, h Shows a list of commands or help for one command history Show history of a specified image images List images in local storage import Import a tarball to create a filesystem image info Display system information inspect Display the configuration of a container or image kill Send a specific signal to one or more running containers load Load an image from an archive login Login to a container registry logout Logout of a container registry logs Fetch the logs of a container mount Mount a working container's root filesystem pause Pauses all the processes in one or more containers ps List containers port List port mappings or a specific mapping for the container pull Pull an image from a registry push Push an image to a specified destination restart Restart one or more containers rm Remove one or more containers from host. Add -f if running. 
rmi Remove one or more images from local storage run Run a command in a new container save Save image to an archive search Search registry for image start Start one or more containers stats Display percentage of CPU, memory, network I/O, block I/O and PIDs for one or more containers stop Stop one or more containers tag Add an additional name to a local image top Display the running processes of a container umount, unmount Unmount a working container's root filesystem unpause Unpause the processes in one or more containers version Display podman version information 1.3.3. Trying basic podman commands Because the use of podman mirrors the features and syntax of the docker command, you can refer to Working with Docker Formatted Container Images for examples of how to use those options to work with containers. Simply replace docker with podman in most cases. Here are some examples of using podman . 1.3.3.1. Pull a container image to the local system 1.3.3.2. List local container images 1.3.3.3. Run a container image This runs a container image and opens a shell inside the container: 1.3.3.4. List containers that are running or have exited 1.3.3.5. Remove a container or image Remove a container by its container ID: 1.3.3.6. Remove a container image by its image ID or name (use -f to force): 1.3.3.7. Build a container 1.4. Running containers with runc "runC" is a lightweight, portable implementation of the Open Container Initiative (OCI) container runtime specification. runC unites a lot of the low-level features that make running containers possible. It shares a lot of low-level code with Docker but it is not dependent on any of the components of the Docker platform. It supports Linux namespaces, live migration, and has portable performance profiles. It also provides full support for Linux security features such as SELinux, control groups (cgroups), seccomp, and others. You can build and run images with runc, or you can run docker-formatted images with runc. 1.4.1. Installing and running containers The runc package is available for Red Hat Enterprise Linux in the Extras channel. You need to have the Extras channel enabled to install it with yum. If you are using Red Hat Enterprise Linux Atomic Host, the runc package is already included. For a regular RHEL system, to enable the extras repository and install the package, run: With runc, containers are configured using bundles. A bundle for a container is a directory that includes a specification file named "config.json" and a root filesystem. The root filesystem contains the contents of the container. To create a bundle: This command creates a config.json file that only contains a bare-bones structure that you will need to edit. Most importantly, you will need to change the "args" parameter to identify the executable to run. By default, "args" is set to "sh". As an example, you can download the docker-formatted Red Hat Enterprise Linux base image (rhel7/rhel) using docker, then export it, create a new bundle for it with runc, and edit the "config.json" file to point to that image. You can then create the container image and run an instance of that image with runc. Use the following commands: In this example, the name of the container instance is "rhel-container". Running that container, by default, starts a shell, so you can begin looking around and running commands from inside that container. Type exit when you are done. The name of a container instance must be unique on the host.
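Condensed into one sequence, the runc example above looks roughly like this; the bundle directory and container name follow the chapter's example, and config.json should be edited before the container is created.
sudo docker pull registry.access.redhat.com/rhel7/rhel
sudo docker export $(docker create registry.access.redhat.com/rhel7/rhel) > rhel.tar
mkdir -p rhel-runc/rootfs
tar -C rhel-runc/rootfs -xf rhel.tar
runc spec -b rhel-runc        # generates rhel-runc/config.json; edit it, for example set "terminal": true
sudo runc create -b rhel-runc/ rhel-container
sudo runc start rhel-container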
To start a new instance of a container: You can provide the bundle directory using the "-b" option. By default, the value for the bundle is the current directory. You will need root privileges to start containers with runc. To see all commands available to runc and their usage, run "runc --help". 1.5. Using skopeo to work with container registries With the skopeo command, you can work with container images from registries without using the docker daemon or the docker command. Registries can include the Docker Registry, your own local registries, or Atomic registries. Activities you can do with skopeo include: inspect : The output of a skopeo inspect command is similar to what you see from a docker inspect command: low-level information about the container image. That output can be in json format (default) or raw format (using the --raw option). copy : With skopeo copy you can copy a container image from a registry to another registry or to a local directory. layers : The skopeo layers command lets you download the layers associated with images so that they are stored as tarballs and associated manifest files in a local directory. Like the buildah command and other tools that rely on the containers/image library, the skopeo command can work with images from container storage areas other than those associated with Docker. Available transports to other types of container storage include: containers-storage (for images stored by buildah and CRI-O), ostree (for atomic and system containers), oci (for content stored in an OCI-compliant directory), and others. See the skopeo man page for details. To try out skopeo, you could set up a local registry, then run the commands that follow to inspect, copy, and download image layers. If you want to follow along with the examples, start by doing the following: Install a local registry as described in Working with Docker Registries . Pull the latest RHEL 7 image to your local system ( docker pull rhel7/rhel ). Retag the RHEL 7 image and push it to your local registry as follows: The rest of this section describes how to inspect, copy, and get layers from the RHEL 7 image. Note The skopeo tool by default requires a TLS connection. It fails when trying to use an unencrypted connection. To override the default and use an http registry, prepend http: to the <registry>/<image> string. 1.5.1. Inspecting container images with skopeo When you inspect a container image from a registry, you need to identify the container format (such as docker), the location of the registry (such as docker.io or localhost:5000), and the repository/image (such as rhel7/rhel). The following example inspects the mariadb container image from the Docker Registry: Assuming you pushed a container image tagged localhost:5000/myrhel7 to a docker registry running on your local system, the following command inspects that image: 1.5.2. Copying container images with skopeo This command copies the myrhel7 container image from a local registry into a directory on the local system: The result of the skopeo copy command is a tarball (16d*.tar) and a manifest.json file representing the image being copied to the directory you identified. If there were multiple layers, there would be multiple tarballs. The skopeo copy command can also copy images to another registry. If you need to provide a signature to write to the destination registry, you can do that by adding a --sign-by= option to the command line, followed by the required key-id. 1.5.3.
Getting image layers with skopeo The skopeo layers command is similar to skopeo copy , with the difference being that the copy option can copy an image to another registry or to a local directory, while the layers option just drops the layers (tarballs and a manifest.json file) in the current directory. For example, as you can see from the skopeo layers example for the myrhel7 image, a new directory is created (layers-myrhel7-latest-698503105) and, in this case, a single layer tarball and a manifest.json file are copied to that directory. 1.6. Building container images with Buildah The buildah command lets you create container images from a working container, a Dockerfile, or from scratch. The resulting images are OCI compliant, so they will work on any runtimes that meet the OCI Runtime Specification (such as Docker and CRI-O). This section describes how to use the buildah command to create and otherwise work with containers and container images. 1.6.1. Understanding Buildah Using Buildah is different from building images with the docker command in the following ways: No Daemon! : Buildah bypasses the Docker daemon! So no container runtime (Docker, CRI-O, or other) is needed to use Buildah. Base image or scratch : Lets you not only build an image based on another container, but also lets you start with an empty image (scratch). Build tools external : Doesn't include build tools within the image itself. As a result, Buildah: Reduces the size of images you build Makes the image more secure by not having the software used to build the container (like gcc, make, and dnf) within the resulting image. Creates images that require fewer resources to transport the images (because they are smaller). Buildah is able to operate without Docker or other container runtimes by storing data separately and by including features that let you not only build images, but run those images as containers as well. By default, Buildah stores images in an area identified as containers-storage (/var/lib/containers). When you go to commit a container to an image, you can export that container as a local Docker image by indicating docker-daemon (stored in /var/lib/docker). Note The containers-storage location that the buildah command uses by default is the same place that the CRI-O container runtime uses for storing local copies of images. So images pulled from a registry by either CRI-O or Buildah, or committed by the buildah command, should be visible to both. There are more than a dozen options to use with the buildah command. Some of the main activities you can do with the buildah command include: Build a container from a Dockerfile : Use a Dockerfile to build a new container image ( buildah bud ). Build a container from another image or scratch : Build a new container, starting with an existing base image ( buildah from <imagename> ) or from scratch ( buildah from scratch ). Inspecting a container or image : View metadata associated with the container or image ( buildah inspect ). Mount a container : Mount a container's root filesystem to add or change content ( buildah mount ). Create a new container layer : Use the updated contents of a container's root filesystem as a filesystem layer to commit content to a new image ( buildah commit ). Unmount a container : Unmount a mounted container ( buildah umount ). Delete a container or an image : Remove a container ( buildah rm ) or a container image ( buildah rmi ). The buildah package is a technology preview for Red Hat Enterprise Linux version 7.4.4. For more details on Buildah, see the GitHub Buildah page .
The GitHub Buildah site includes man pages and software that might be more recent than is available with the RHEL version. Here are some other articles on Buildah that might interest you: Buildah Tutorial 1: Building OCI container images Buildah Tutorial 2: Using Buildah with container registries Buildah Blocks - Getting Fit 1.6.2. Installing Buildah The buildah package is available from the Red Hat Enterprise Linux Server Extras repository. From a RHEL Server system with a valid subscription, install the buildah package as follows: With the buildah package installed, you can refer to the man pages included with the buildah package for details on how to use it. To see the available man pages and other documentation, type: The following sections describe how to use buildah to get containers, build a container from a Dockerfile, build one from scratch, and manage containers in various ways. 1.6.3. Getting Images with buildah To get a container image to use with buildah , use the buildah from command. Here's how to get a RHEL 7 image from the Red Hat Registry as a working container to use with the buildah command: Notice that the result of the buildah from command is an image (registry.access.redhat.com/rhel7/rhel-minimal:latest) and a working container that is ready to run from that image (rhel-minimal-working-container). Here's an example of how to execute a command from that container: The image and container are now ready for use with Buildah. 1.6.4. Building an Image from a Dockerfile with Buildah With the buildah command, you can create a new image from a Dockerfile. The following steps show how to build an image that includes a simple script that is executed when the image is run. This simple example starts with two files in the current directory: Dockerfile (which holds the instructions for building the container image) and myecho (a script that echoes a few words to the screen): With the Dockerfile in the current directory, build the new container as follows: The buildah bud command creates a new image named myecho, but doesn't create a working container, as demonstrated when you run buildah containers below. Next, you can make the image into a container and run it, to make sure it is working. 1.6.5. Running a Container with Buildah To check that the image you built previously works, you need to create a working container from the image, then use buildah run to run the working container. The steps just shown used the image (myecho) to create a container (myecho-working-container). After that, buildah containers showed that the container exists and buildah run ran the container, producing the output: This container works! 1.6.6. Inspecting a Container with buildah With buildah inspect , you can show information about a container or image. For example, to inspect the myecho image you created earlier, type: To inspect a container from that same image, type the following: Note that the container output has added information, such as the container name, container ID, process label, and mount label to what was in the image. 1.6.7. Modifying a Container to Create a new Image with Buildah There are several ways you can modify an existing container with the buildah command and commit those changes to a new container image: Mount a container and copy files to it Use buildah copy and buildah config to modify a container Once you have modified the container, use buildah commit to commit the changes to a new image. 1.6.7.1.
Using buildah mount to Modify a Container After getting an image with buildah from , you can use that image as the basis for a new image. The following text shows how to create a new image by mounting a working container, adding files to that container, then committing the changes to a new image. Type the following to view the working container you used earlier: Mount the container image and set the mount point to a variable (USDmymount) to make it easier to deal with: Add content to the script created earlier in the mounted container: To commit the content you added to create a new image (named myecho2), type the following: To check that the new image includes your changes, create a working container and run it: You can see that the new echo command added to the script displays the additional text. When you are done, you can unmount the container: 1.6.7.2. Using buildah copy and buildah config to Modify a Container With buildah copy , you can copy files to a container without mounting it first. Here's an example, using the myecho-working-container created (and unmounted) in the previous section, to copy a new script to the container and change the container's configuration to run that script by default. Create a script called newecho and make it executable: Create a new working container: Copy newecho to /usr/local/bin inside the container: Change the configuration to use the newecho script as the new entrypoint: Run the new container, which should result in the newecho command being executed: If the container behaved as you expected it would, you could then commit it to a new image (mynewecho): 1.6.8. Creating images from scratch with Buildah Instead of starting with a base image, you can create a new container that holds no content and only a small amount of container metadata. This is referred to as a scratch container. Here are a few issues to consider when choosing to create an image starting from a scratch container with the buildah command: With a scratch container, you can simply copy executables that have no dependencies to the scratch image and make a few configuration settings to get a minimal container to work. To use tools like yum or rpm packages to populate the scratch container, you need to at least initialize an RPM database in the container and add a release package. The example below shows how to do that. If you end up adding a lot of RPM packages, consider using the rhel or rhel-minimal base images instead of a scratch image. Those base images have had documentation, language packs, and other components trimmed out, which can ultimately result in your image being smaller. This example adds a Web service (httpd) to a container and configures it to run. In the example, instead of committing the image to Buildah (containers-storage which stores locally in /var/lib/containers), we illustrate how to commit the image so it can be managed by the local Docker service (docker-daemon which stores locally in /var/lib/docker). You could just as easily have committed it to Buildah, which would let you then push it to a Docker service (docker), a local OSTree repository (ostree), or other OCI-compliant storage (oci). (Type man buildah push for details.)
To begin, create a scratch container: This creates just an empty container (no image) that you can mount as follows: Initialize an RPM database within the scratch image and add the redhat-release package (which includes other files needed for RPMs to work): Install the httpd service to the scratch directory: Add some text to an index.html file in the container, so you will be able to test it later: Instead of running httpd as an init service, set a few buildah config options to run the httpd daemon directly from the container: By default, the buildah commit command adds the docker.io repository name to the image name and copies the image to the storage area for your local Docker service (/var/lib/docker). For now, you can use the Image ID to run the new image as a container with the docker command: 1.6.9. Removing Images or Containers with Buildah When you are done with particular containers or images, you can remove them with buildah rm or buildah rmi , respectively. Here are some examples. To remove the container created in the section, you could type the following to see the mounted container, unmount it and remove it: To remove the image you created previously, you could type the following: 1.6.10. Using container registries with Buildah With Buildah, you can push and pull container images between your local system and public or private container registries. The following examples show how to: Push containers to and pull them from a private registry with buildah. Push and pull container between your local system and the Docker Registry. Use credentials to associated you containers with a registry account when you push them. Use the skopeo command, in tandem with the buildah command, to query registries for information about container images. 1.6.10.1. Pushing containers to a private registry Pushing containers to a private container registry with the buildah command works much the same as pushing containers with the docker command. You need to: Set up a private registry (OpenShift provides a container registry or you can set up a simple registry with the docker-distribution package, as shown below). Create or acquire the container image you want to push. Use buildah push to push the image to the registry. To install a registry on your local system, start it up, and enable it to start on boot, type: By default, the docker-distribution service listens on TCP port 5000 on your localhost. To push an image from your local Buildah container storage, check the image name, then push it it using the buildah push command. Remember to identify both the local image name and a new name that includes the location (localhost:5000, in this case): Use the curl command to list the images in the registry and skopeo to inspect metadata about the image: At this point, any tool that can pull container images from a container registry can get a copy of your pushed image. For example, you could start the docker daemon and try to pull the image so it can be used by the docker command as follows: 1.6.10.2. Pushing containers to the Docker Hub You can use your Docker Hub credentials to push and pull images from the Docker Hub with the buildah command. For this example, replace the username and password (testaccountXX:My00P@sswd) with your own Docker Hub credentials: As with the private registry, you can then get and run the container from the Docker Hub with either the buildah or docker command:
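Pulling the private-registry steps above together, the flow looks roughly like the following. This is a consolidated sketch of the commands shown in this section (not the Docker Hub example itself), using the chapter's myecho2 image name and the localhost:5000 registry.
yum install -y docker-distribution
systemctl start docker-distribution
systemctl enable docker-distribution
buildah images                                              # confirm the local image name (myecho2)
buildah push --tls-verify=false myecho2:latest localhost:5000/myecho2:latest
curl http://localhost:5000/v2/_catalog                      # list repositories in the registry
skopeo inspect --tls-verify=false docker://localhost:5000/myecho2:latest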
[ "yum install slirp4netns podman -y", "echo \"user.max_user_namespaces=28633\" > /etc/sysctl.d/userns.conf sysctl -p /etc/sysctl.d/userns.conf", "useradd -c \"Joe Jones\" joe passwd joe", "podman pull ubi7/ubi podman run ubi7/ubi cat /etc/os-release NAME=\"Red Hat Enterprise Linux Server\" VERSION=\"7.7 (Maipo)\"", "podman unshare cat /proc/self/uid_map 0 1001 1 1 100000 65536 65537 165536 65536", "echo \"jill:200000:65536\" >> /etc/subuid echo \"jill:200000:65536\" >> /etc/subgid", "sudo podman run -d --cap-add SYS_TIME ntpd", "podman run -d httpd", "sudo podman run -d -p 80:80 httpd", "subscription-manager repos --disable='*' subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-7-server-extras-rpms subscription-manager repos --enable=rhel-7-server-optional-rpms yum install podman -y", "podman pull registry.access.redhat.com/rhel7/rhel Trying to pull registry.access.redhat...Getting image source signatures Copying blob sha256:d1fe25896eb5cbcee Writing manifest to image destination Storing signatures fd1ba0b398a82d56900bb798c", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/rhel7/rhel-minimal latest de9c26f23799 5 weeks ago 80.1MB registry.access.redhat.com/rhel7/rhel latest fd1ba0b398a8 5 weeks ago 211MB", "podman run -it registry.access.redhat.com/rhel7/rhel /bin/bash ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 13:48 pts/0 00:00:00 /bin/bash root 21 1 0 13:49 pts/0 00:00:00 ps -ef exit #", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED AT STATUS PORTS NAMES 440becd26893 registry.access.redhat.com/rhel7/rhel-minimal:latest /bin/bash 2018-05-10 09:02:52 -0400 EDT Exited (0) About an hour ago happy_hodgkin 8414218c04f9 registry.access.redhat.com/rhel7/rhel:latest /bin/bash 2018-05-10 09:48:07 -0400 EDT Exited (0) 14 minutes ago nostalgic_boyd", "podman rm 440becd26893", "podman rmi registry.access.redhat.com/rhel7/rhel-minimal podman rmi de9c26f23799 podman rmi -f registry.access.redhat.com/rhel7/rhel:latest", "cat Dockerfile FROM registry.access.redhat.com/rhel7/rhel-minimal ENTRYPOINT \"echo \"Podman build this container.\" podman build -t podbuilt . 
STEP 1: FROM registry.access Writing manifest to image destination Storing signatures 91e043c11617c08d4f8 podman run podbuilt Podman build this container.", "sudo subscription-manager repos --enable=rhel-7-server-extras-rpms sudo yum install runc", "runc spec", "\"args\": [ \"sh\" ],", "sudo docker pull registry.access.redhat.com/rhel7/rhel sudo docker export USD(docker create registry.access.redhat.com/rhel7/rhel) > rhel.tar mkdir -p rhel-runc/rootfs tar -C rhel-runc/rootfs -xf rhel.tar runc spec -b rhel-runc vi rhel-runc/config.json Change the value of terminal from *false* to *true* sudo runc create -b rhel-runc/ rhel-container sudo runc start rhel-container sh-4.2#", "runc start <container_name>", "sudo docker tag rhel7/rhel localhost:5000/myrhel7 sudo docker push localhost:5000/myrhel7", "sudo skopeo inspect docker://docker.io/library/mariadb { \"Name\": \"docker.io/library/mariadb\", \"Tag\": \"latest\", \"Digest\": \"sha256:d3f56b143b62690b400ef42e876e628eb5e488d2d0d2a35d6438a4aa841d89c4\", \"RepoTags\": [ \"10.0.15\", \"10.0.16\", \"10.0.17\", \"10.0.19\", \"Created\": \"2016-06-10T01:53:48.812217692Z\", \"DockerVersion\": \"1.10.3\", \"Labels\": {}, \"Architecture\": \"amd64\", \"Os\": \"linux\", \"Layers\": [", "sudo skopeo inspect docker://localhost:5000/myrhel7 { \"Name\": \"localhost:5000/myrhel7\", \"Tag\": \"latest\", \"Digest\": \"sha256:4e09c308a9ddf56c0ff6e321d135136eb04152456f73786a16166ce7cba7c904\", \"RepoTags\": [ \"latest\" ], \"Created\": \"2016-06-16T17:27:13Z\", \"DockerVersion\": \"1.7.0\", \"Labels\": { \"Architecture\": \"x86_64\", \"Authoritative_Registry\": \"registry.access.redhat.com\", \"BZComponent\": \"rhel-server-docker\", \"Build_Host\": \"rcm-img01.build.eng.bos.redhat.com\", \"Name\": \"rhel7/rhel\", \"Release\": \"75\", \"Vendor\": \"Red Hat, Inc.\", \"Version\": \"7.2\" }, \"Architecture\": \"amd64\", \"Os\": \"linux\", \"Layers\": [ \"sha256:16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf\" ] }", "skopeo copy docker://localhost:5000/myrhel7 dir:/root/test/ INFO[0000] Downloading myrhel7/blobs/sha256:16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf ls /root/test 16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf.tar manifest.json", "skopeo layers docker://localhost:5000/myrhel7 INFO[0000] Downloading myrhel7/blobs/sha256:16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf find . ./layers-myrhel7-latest-698503105 ./layers-myrhel7-latest-698503105/manifest.json ./layers-myrhel7-latest-698503105/16dc1f96e3a1bb628be2e00518fec2bb97bd5933859de592a00e2eb7774b6ecf.tar", "subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-7-server-extras-rpms yum -y install buildah", "rpm -qd buildah", "buildah from docker://registry.access.redhat.com/rhel7/rhel-minimal Getting image source signatures Copying blob... 
Writing manifest to image destination Storing signatures rhel-minimal-working-container buildah images IMAGE ID IMAGE NAME CREATED AT SIZE 1456eedf8101 registry.access.redhat.com/rhel7/rhel-atomic:latest Oct 12, 2017 15:15 74.77 MB buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME dc8f21Ag4a47 * 1456eedf8101 registry.access.redhat.com/rhel7/rhel-atomic:latest rhel-atomic-working-container 1456eedf8101 registry.access.redhat.com/rhel7/rhel-minimal:latest Oct 12, 2017 15:15 74.77 MB buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME dc8f21Ag4a47 * 1456eedf8101 registry.access.redhat.com/rhel7/rhel-minimal:latest rhel-minimal-working-container", "buildah run rhel-minimal-working-container cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.4 (Maipo)", "ls Dockerfile myecho cat Dockerfile FROM registry.access.redhat.com/rhel7/rhel-minimal ADD myecho /usr/local/bin ENTRYPOINT \"/usr/local/bin/myecho\" cat myecho echo \"This container works!\" chmod 755 myecho", "buildah bud -t myecho . STEP 1: FROM registry.access.redhat.com/rhel7/rhel-minimal STEP 2: ADD myecho /usr/local/bin STEP 3: ENTRYPOINT \"/usr/local/bin/myecho\" STEP 4: COMMIT containers-storage:[devicemapper@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/myecho:latest", "buildah images IMAGE ID IMAGE NAME CREATED AT SIZE 1456eedf8101 registry.access.redhat.com/rhel7/rhel-minimal:latest Oct 12, 2017 15:15 74.77 MB ab230ac5aba3 docker.io/library/myecho:latest Oct 12, 2017 15:15 2.854 KB buildah containers", "buildah from myecho myecho-working-container buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME dc8f21af4a47 * 1456eedf8101 registry.access.redhat.com/rhel7/rhel-minimal:latest rhel-minimal-working-container 6d1ffccb557d * ab230ac5aba3 docker.io/library/myecho:latest myecho-working-container buildah run myecho-working-container This container works!", "buildah inspect myecho | less { \"type\": \"buildah 0.0.1\", \"image\": \"docker.io/library/myecho:latest\", \"image-id\": \"e2b190ac8a37737ec03cfa4c9bfd989845b9bec3aa81ff48d8350d7418d748f6\", \"config\": \"eyJjcmVh \"ociv1\": { \"created\": \"2017-10-12T15:15:00.207103Z\", \"author\": \"Red Hat, Inc.\", \"architecture\": \"amd64\", \"os\": \"linux\", \"config\": { \"Entrypoint\": [ \"/bin/sh\", \"-c\", \"\\\"/usr/local/bin/myecho\\\"\" ], \"WorkingDir\": \"/\", \"Labels\": { \"architecture\": \"x86_64\", \"authoritative-source-url\": \"registry.access.redhat.com\",", "buildah inspect myecho-working-container | less { \"type\": \"buildah 0.0.1\", \"image\": \"docker.io/library/myecho:latest\", \"image-id\": \"e2b190ac8a37737ec03cfa4c9bfd989845b9bec3aa81ff48d8350d7418d748f6\", \"config\": \"eyJjcmV \"container-name\": \"myecho-working-container\", \"container-id\": \"70f22e886310bba26bb57ca7afa39fd19af2791c4c66067cb6206b7c3ebdcd20\", \"process-label\": \"system_u:system_r:svirt_lxc_net_t:s0:c225,c716\", \"mount-label\": \"system_u:object_r:svirt_sandbox_file_t:s0:c225,c716\", \"ociv1\": { \"created\": \"2017-10-12T15:15:00.207103Z\", \"author\": \"Red Hat, Inc.\", \"architecture\": \"amd64\",", "buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME dc8f21af4a47 * 1456eedf8101 registry.access.redhat.com/rhel7/rhel-minimal:latest rhel-minimal-working-container 6d1ffccb557d * ab230ac5aba3 docker.io/library/myecho:latest myecho-working-container", "mymount=USD(buildah mount myecho-working-container) echo USDmymount 
/var/lib/containers/storage/devicemapper/mnt/176c273fe28c23e5319805a2c48559305a57a706cc7ae7bec7da4cd79edd3c02/rootfs", "echo 'echo \"We even modified it.\"' >> USDmymount/usr/local/bin/myecho", "buildah commit myecho-working-container containers-storage:myecho2", "buildah images IMAGE ID IMAGE NAME CREATED AT SIZE a7e06d3cd0e2 docker.io/library/myecho2:latest Oct 12, 2017 15:15 3.144 KB buildah from docker.io/library/myecho2:latest myecho2-working-container buildah run myecho2-working-container This container works! We even modified it.", "buildah umount myecho-working-container", "cat newecho echo \"I changed this container\" chmod 755 newecho", "buildah from myecho:latest myecho-working-container-2", "buildah copy myecho-working-container-2 newecho /usr/local/bin", "buildah config myecho-working-container-2 --entrypoint \"/bin/sh -c /usr/local/bin/newecho\"", "buildah run myecho-working-container-2 I changed this container", "buildah commit myecho-working-container-2 containers-storage:mynewecho", "buildah from scratch working-container", "scratchmnt=USD(buildah mount working-container) echo USDscratchmnt /var/lib/containers/storage/devicemapper/mnt/cc92011e9a2b077d03a97c0809f1f3e7fef0f29bdc6ab5e86b85430ec77b2bf6/rootfs", "rpm --root USDscratchmnt --initdb yum install yum-utils (if not already installed) yumdownloader --destdir=/tmp redhat-release-server rpm --root USDscratchmnt -ihv /tmp/redhat-release-server*.rpm", "yum install -y --installroot=USDscratchmnt httpd", "echo \"Your httpd container from scratch worked.\" > USDscratchmnt/var/www/html/index.html", "buildah config --cmd \"/usr/sbin/httpd -DFOREGROUND\" working-container buildah config --port 80/tcp working-container buildah commit working-container docker-daemon:myhttpd:latest", "docker images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/myhttpd latest 47c0795d7b0e 9 minutes ago 665.6 MB docker run -p 8080:80 -d --name httpd-server 47c0795d7b0e curl localhost:8080 Your httpd container from scratch worked.", "buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 05387e29ab93 * c37e14066ac7 docker.io/library/myecho:latest myecho-working-container buildah mount 05387e29ab93 /var/lib/containers/storage/devicemapper/mnt/9274181773a.../rootfs buildah umount 05387e29ab93 buildah rm 05387e29ab93 05387e29ab93151cf52e9c85c573f3e8ab64af1592b1ff9315db8a10a77d7c22", "buildah rmi docker.io/library/myecho:latest untagged: docker.io/library/myecho:latest ab230ac5aba3b5a0a7c3d2c5e0793280c1a1b4d2457a75a01b70a4b7a9ed415a", "yum install -y docker-distribution systemctl start docker-distribution systemctl enable docker-distribution", "buildah images IMAGE ID IMAGE NAME CREATED AT SIZE cb702d492ee9 docker.io/library/myecho2:latest Nov 21, 2017 16:50 3.143 KB buildah push --tls-verify=false myecho2:latest localhost:5000/myecho2:latest Getting image source signatures Copying blob sha256:e4efd0 Writing manifest to image destination Storing signatures", "curl http://localhost:5000/v2/_catalog {\"repositories\":[\"myatomic\",\"myecho2\"]} curl http://localhost:5000/v2/myecho2/tags/list {\"name\":\"myecho2\",\"tags\":[\"latest\"]} skopeo inspect --tls-verify=false docker://localhost:5000/myecho2:latest | less { \"Name\": \"localhost:5000/myecho2\", \"Digest\": \"sha256:8999ff6050...\", \"RepoTags\": [ \"latest\" ], \"Created\": \"2017-11-21T16:50:25.830343Z\", \"DockerVersion\": \"\", \"Labels\": { \"architecture\": \"x86_64\", \"authoritative-source-url\": \"registry.access.redhat.com\",", "systemctl start docker docker pull 
localhost:5000/myecho2 docker run localhost:5000/myecho2 This container works!", "buildah push --creds testaccountXX:My00P@sswd docker.io/library/myecho2:latest docker://testaccountXX/myecho2:latest", "docker run docker.io/textaccountXX/myecho2:latest This container works! buildah from docker.io/textaccountXX/myecho2:latest myecho2-working-container-2 buildah run myecho2-working-container-2 This container works!" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/finding_running_and_building_containers_with_podman_skopeo_and_buildah
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/getting_started_with_camel_extensions_for_quarkus/pr01
22.2.2. Command Line Configuration
22.2.2. Command Line Configuration Samba uses /etc/samba/smb.conf as its configuration file. If you change this configuration file, the changes do not take effect until you restart the Samba daemon with the command service smb restart . To specify the Windows workgroup and a brief description of the Samba server, edit the following lines in your smb.conf file: Replace WORKGROUPNAME with the name of the Windows workgroup to which this machine should belong. The BRIEF COMMENT ABOUT SERVER is optional and is used as the Windows comment about the Samba system. To create a Samba share directory on your Linux system, add the following section to your smb.conf file (after modifying it to reflect your needs and your system): The above example allows the users tfox and carole to read and write to the directory /home/share , on the Samba server, from a Samba client.
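After saving your changes, it can be useful to verify the configuration before restarting the service. The following is a minimal sketch; it assumes the tfox and carole accounts already exist as Linux users:
# Check /etc/samba/smb.conf for syntax errors and print the resulting settings
testparm -s
# Restart the Samba daemon so the new configuration takes effect
service smb restart
# Give the listed valid users Samba passwords so they can connect to the share
smbpasswd -a tfox
smbpasswd -a carole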
[ "workgroup = WORKGROUPNAME server string = BRIEF COMMENT ABOUT SERVER", "[ sharename ] comment = Insert a comment here path = /home/share/ valid users = tfox carole public = no writable = yes printable = no create mask = 0765" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/configuring_a_samba_server-command_line_configuration
Chapter 35. Case files
Chapter 35. Case files A case instance is a single instance of a case definition and encapsulates the business context. All case instance data is stored in the case file, which is accessible to all process instances that might participate in the particular case instance. Each case instance and its case file are completely isolated from the other cases. Only users assigned to a required case role can access the case file. A case file is used in case management as a repository of data for the entire case instance. It contains all roles, data objects, the data map, and any other data. The case can be closed and reopened at a later date with the same case file attached. A case instance can be closed at any time and does not require a specific resolution to be completed. The case file can also include embedded documentation, references, PDF attachments, web links, and other options. 35.1. Configuring case ID prefixes The caseId parameter is a string value that is the identifier of the case instance. You can configure the Case ID Prefix in Red Hat Process Automation Manager designer to distinguish different types of cases. The following procedures uses the IT_Orders sample project to demonstrate how to create unique case ID prefixes for specific business needs. Prerequisites The IT_Orders sample project is open in Business Central. Procedure In Business Central, go to Menu Design Projects . If there are existing projects, you can access the samples by clicking the MySpace default space and selecting Try Samples from the Add Project drop-down menu. If there are no existing projects, click Try samples . Select IT_Orders and click Ok . In the Assets window, click the orderhardware business process to open the designer. Click on an empty space on the canvas and in the upper-right corner, click the Properties icon. Scroll down and expand Case Management . In the Case ID Prefix field, enter an ID value. The ID format is internally defined as ID-XXXXXXXXXX , where XXXXXXXXXX is a generated number that provides a unique ID for the case instance. If a prefix is not provided, the default prefix is CASE with the following identifiers: CASE-0000000001 CASE-0000000002 CASE-0000000003 You can specify any prefix. For example, if you specify the prefix IT , the following identifiers are generated: IT-0000000001 IT-0000000002 IT-0000000003 Figure 35.1. Case ID Prefix field 35.2. Configuring case ID expressions The following procedures uses the IT_Orders sample project to demonstrate how set metadata attribute keys to customize expressions for generating the caseId . Prerequisites The IT_Orders sample project is open in Business Central. Procedure In Business Central, go to Menu Design Projects . If there are existing projects, you can access the samples by clicking the MySpace default space and selecting Try Samples from the Add Project drop-down menu. If there are no existing projects, click Try samples . Select IT_Orders and click Ok . In the Assets window, click the orderhardware business process to open the designer. Click on an empty space on the canvas and in the upper-right corner, click the Properties icon. Expand the Advanced menu to access the Metadata Attributes fields. Specify one of the following functions for the customCaseIdPrefix metadata attribute: LPAD : Left padding RPAD : Right padding TRUNCATE : Truncate UPPER : Upper case Figure 35.2. 
Setting the UPPER function for the customCaseIdPrefix metadata attribute In this example, type is a variable set in the Case File Variables field, which during runtime a user may define to it the value type1 . UPPER is a pre-built function to uppercase a variable, and IT- is a static prefix. The results are dynamic case IDs such as IT-TYPE1-0000000001 , IT-TYPE1-0000000002 , and IT-TYPE1-0000000003 . Figure 35.3. Case File Variables If the customCaseIdPrefixIsSequence case metadata attribute is set to false (default value is true ), the case instance will not create any sequence and the caseIdPrefix expression is the case ID. For example, if generating case IDs based on social security numbers, no specific sequence or instance identifiers are required. The customCaseIdPrefixIsSequence metadata attribute is optionally added and set to false (default value is true ) to disable the numeric sequences for the case IDs. This is useful if an expression used for custom case IDs already contains a case file variable to express unique business identifiers instead of the generic sequence values. For example, if generating case IDs based on social security numbers, no specific sequence or instance identifiers are required. For the example below, SOCIAL_SECURITY_NUMBER is also a variable declared as a case file variable. Figure 35.4. customCaseIdPrefixIsSequence metadata attribute The IS_PREFIX_SEQUENCE case file variable is optionally added as a flag during runtime to disable or enable the sequence generation for case IDs. For example, there is no need to create a sequence suffix for medical insurance coverage for an individual. For a multi-family insurance policy, the company might set the IS_PREFIX_SEQUENCE case variable to true to aggregate a sequence number for each member of the family. The result of using the customCaseIdPrefixIsSequence metadata attribute statically as false or using the IS_PREFIX_SEQUENCE case file variable and setting during runtime for it the value false , is the same. Figure 35.5. IS_PREFIX_SEQUENCE case variable
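To see the generated case IDs outside of Business Central, you can start a case instance through the KIE Server REST API and inspect the identifier it returns. The following curl sketch is illustrative only: the container ID ( itorders ), case definition ID ( itorders.orderhardware ), credentials, and the case-data payload key are assumptions, so check the KIE Server REST documentation for your environment before relying on it:
# Start a new case instance and pass the 'type' case file variable used by the
# UPPER expression example above; the response body is the generated case ID,
# for example IT-TYPE1-0000000001
curl -u 'wbadmin:wbadmin' -X POST \
  -H 'content-type: application/json' -H 'accept: application/json' \
  -d '{ "case-data": { "type": "type1" } }' \
  http://localhost:8080/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances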
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/case-management-case-file-con-case-management-design
Part III. Get started
Part III. Get started
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/get_started
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using a local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS) follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your vault servers. After you have addressed the above, perform the following steps: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation cluster on bare metal . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for deployment in the OpenShift Container Platform on-premises and in the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. 
To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. Compact mode requirements You can install OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. To configure OpenShift Container Platform in compact mode, see the Configuring a three-node cluster section of the Installing guide in OpenShift Container Platform documentation, and Delivering a Three-node Architecture for Edge Deployments . Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide .
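Before you start the deployment, it can help to confirm that the selected nodes and disks meet these requirements. The following is a minimal sketch; the node name ( worker-0 ) and device name ( /dev/sdb ) are examples:
# Confirm that at least three worker nodes are available
oc get nodes -l node-role.kubernetes.io/worker
# On a candidate node, check that the local disk is a raw block device with no
# partitions, physical volumes, volume groups, or logical volumes left on it
oc debug node/worker-0 -- chroot /host lsblk /dev/sdb
oc debug node/worker-0 -- chroot /host pvs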
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/preparing_to_deploy_openshift_data_foundation
Chapter 35. Installation and Booting
Chapter 35. Installation and Booting Installation fails with a traceback when specifying %packages --nobase --nocore in a Kickstart file Using a Kickstart file which contains the %packages section and specifies the --nobase and --nocore options at the same time causes the installation to fail with a traceback message because the yum-langpacks package is missing. To work around this problem, add the yum-langpacks package within the %packages section when using %packages --nobase --nocore in your Kickstart file. Installation cannot proceed if a root password specified in Kickstart does not pass policy requirements If you use a Kickstart file that defines a root password and the password does not fulfill requirements for the security policy selected in the Security Policy spoke, you will be unable to complete the installation. The Begin Installation button will be grayed out, and it is not possible to change the root password manually before pressing this button. To work around this problem, make sure that your Kickstart file uses a sufficiently strong password that passes requirements defined by the selected security policy. Rescue mode fails to detect and mount root volume on Btrfs The installer rescue mode (accessed from the installation media boot menu or using the inst.rescue boot option) cannot detect an existing system with the / (root) directory placed on a Btrfs subvolume. Instead, an error message saying 'You don't have any linux partitions.' is displayed. To work around this problem, enter the shell and mount the root volume manually. Wrong window title in Initial Setup The Initial Setup tool, which is automatically displayed after the first post-installation reboot and which allows you to configure settings like network connections and to register your system, displays the string __main__.py in the window title. This is a cosmetic problem and has no negative impact on usability. Reinstalling on an FBA DASD on IBM System z causes the installer to crash When reinstalling Red Hat Enterprise Linux 7 on IBM System z with a Fixed Block Architecture (FBA) DASD, the installer will crash due to incomplete support for these devices. To work around this problem, ensure that any FBA DASDs are not present during the installation by placing them on the device ignore list. This should be done before launching the installer. From a root shell, use the chccwdev command followed by the cio_ignore command to manually switch devices offline and then add them to the device ignore list. Alternatively, you can remove all FBA DASD device IDs from the CMS configuration file or the parameter file instead of using these commands before beginning the installation. HyperPAV aliases are not available after installation on IBM System z A known issue prevents DASDs configured as HyperPAV aliases from being automatically attached to the system after the installation finishes. These storage devices are available at the Installation Destination screen during installation, but they are not immediately accessible after you finish installing and reboot. To fix this problem temporarily (until the next reboot), remove these devices from the device blacklist using the chccwdev command: # chccwdev -e <devnumber> To make the HyperPAV aliases available persistently across reboots, add their device numbers into the /etc/dasd.conf configuration file. You can use the lsdasd command to verify that these devices are available.
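For example, a minimal sequence for one alias (the device number 0.0.4b2e is only an example):
# Bring the HyperPAV alias online for the running system
chccwdev -e 0.0.4b2e
# Add its device number to /etc/dasd.conf so it is attached on every boot
echo "0.0.4b2e" >> /etc/dasd.conf
# Verify that the device is now listed
lsdasd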
Generated anaconda-ks.cfg file on IBM System z can not be used to reinstall the system The anaconda-ks.cfg file, which is a Kickstart file generated during system installation and which contains all selections made during the install process, represents disk sizes as decimal numbers on IBM System z DASDs. This is because DASDs report a 4KiB alignment, which makes the calculated disk sizes incorrect as they are recorded in the Kickstart file, since only integer values are accepted. Therefore, it is not possible to re-use the generated Kickstart file to reproduce the installation. Using the anaconda-ks.cfg file on IBM System z to reinstall the system requires you to manually change all decimal values within to integers. Possible NetworkManager error message during installation When installing the system, the following error message can be displayed and logged: ERR NetworkManager: <error> [devices/nm-device.c:2590] activation_source_schedule(): (eth0): activation stage already scheduled The error message should not prevent the installation from completing. Package libocrdma is missing from the InfiniBand Support package group The libocrdma package is not included in the default package set of the InfiniBand Support group. Consequently, when users select the InfiniBand Support group and are expecting RDMA over Converged Ethernet (RoCE) to work on Emulex OneConnect adapters, the necessary driver, libocrdma , is not installed by default. On first boot, the user can manually install the missing package by issuing this command: # yum install libocrdma Alternatively, add the libocrdma package to the %packages section of your Kickstart file. As a result, the user will now be able to use the Emulex OneConnect devices in RoCE mode. Insufficient size of the /boot partition may prevent the system from upgrading The /boot partition, which contains installed kernels and initial ram disks, may become full if multiple kernels and additional packages such as kernel-debug are installed. This is caused by the default size of this partition being set to 500 MB during installation, and prevents the system from being upgraded. As a workaround, use yum to remove older kernels if you do not need them. If you are installing a new system, you should also consider this possibility, and set the /boot partition to a larger size (for example 1 GB) instead of the default (500 MB). Installation on multipath devices fails if one or more disks are missing a label When installing on multipath devices, the installer may display an error dialog if it fails to read one or more disks which are a member of the multipath. This problem is caused by one or more disks missing a disk label, and the installation can not proceed if it occurs. To work around this problem, create disk labels on all disks which are part of the multipath device you are using during the installation. Static IPv4 configuration in Kickstart is overwritten if a host name is defined in %pre script When defining a host name in the %pre section of a Kickstart file, a network command that only sets host name ("network --hostname=hn") is considered as a device configuration with default --bootproto value ("dhcp") and default --device value ("link", which means the first device with link found). The Kickstart then behaves as if network --hostname=hn --device=link was used. 
If the device considered as default for the --device option (the first device with link found) has already been configured to use a static IPv4 configuration (for example with the preceding network command), the configuration is overridden by the IPv4 DHCP implied by the --hostname option. To work around this problem, make sure that the network command which defines the host name is used first, and the second network command which would normally be overridden is used afterwards. In cases where the network command defining a host name is the only such command in the Kickstart file, add a --device option to it with a non-existing interface (for example, network --hostname=hn --device=x ). Using the realm command in Kickstart causes the installer to crash A known issue prevents the realm command from being used in Kickstart files. Attempting to join an Active Directory or Identity Management domain during the installation using this command causes the installer to crash. To work around this problem, you can either wait until the installation finishes and join a domain manually afterwards, or you can add the realm join <realm name> command to the Kickstart file's %post section. See the realm(8) man page for information about joining a domain using the command line. Installer built-in help is not updated during system upgrade When upgrading from Red Hat Enterprise Linux 7.1 to version 7.2, the built-in help for the Anaconda installer (the anaconda-user-help package) is not upgraded due to a significant change in packaging. To work around this problem, use yum to remove the anaconda-user-help package before performing the upgrade, and install it again after you finish upgrading to Red Hat Enterprise Linux 7.2. Incorrect ordering of boot menu entries generated by grubby The grubby tool, which is used to modify and update the GRUB2 boot loader configuration files, may add debug boot menu entries at the top of the list when generating the boot menu configuration file. These debug menu entries then cause normal entries to be pushed down, although they are still highlighted and selected by default. Using multiple driver update images at the same time only applies the last one specified When attempting to perform a driver update during the installation using the inst.dd=/dd.img boot option and specifying it more than once to load multiple driver update images, Anaconda will ignore all instances of the parameter except the last one. To work around this problem, you can: * Install additional drivers after the installation if possible * Use alternate means to specify a driver update image, such as the driverdisk Kickstart command * Combine multiple driver update images into a single one Installer crashes when it detects LDL-formatted DASDs The installer crashes whenever it detects the LDL (Linux Disk Layout) format on one or more DASDs on IBM System z. The crash is caused by a race condition in the libparted library and happens even if these DASDs are not selected as installation targets. Other architectures are not affected by this issue. If LDL DASDs are to be used during installation, users should manually reformat each LDL DASD as CDL (Compatible Disk Layout) using the dasdfmt command in a root shell before launching the installer. If LDL DASDs are present on a system and a user does not wish to utilize them during installation, they should be placed on the device ignore list for the duration of the installation process. This should be done before launching the installer.
From a root shell, users should use the chccwdev command followed by the cio_ignore command to manually switch devices offline and then add them to the device ignore list. Alternatively, you can remove all LDL DASD device IDs from the CMS configuration file or the parameter file instead of using these commands before beginning the installation. Kernel panic on reboot after upgrading kernel and redhat-release packages Installing redhat-release-server-7.2-9.el7 and a kernel package in the same Yum transaction leads to a missing initrd line in the new kernel's menu entry in the GRUB2 configuration. Attempting to boot using the latest installed kernel then causes a kernel panic due to the missing initrd. This issue usually appears while upgrading your system from an earlier minor release to Red Hat Enterprise Linux 7.2 using yum update . To work around this problem, make sure to upgrade the redhat-release-server and kernel packages in separate Yum transactions. Alternatively, you can locate the new kernel's menu entry in the GRUB2 configuration file ( /boot/grub2/grub.cfg on BIOS systems and /boot/efi/EFI/redhat/grub.cfg on UEFI systems) and add the initrd manually. The initrd configuration line will look similar to initrd /initramfs-3.10.0-327.el7.x86_64.img . Make sure the file name matches the kernel (vmlinuz) configured within the same menu entry and that the file exists in the /boot directory. Use older menu entries for reference. Initial Setup may start in text mode even if a graphical environment is installed The Initial Setup utility, which starts after installation finishes and the installed system is booted for the first time, may in some cases start in text mode on systems where a graphical environment is available and the graphical version of Initial Setup should start. This is caused by both the graphical and text mode services for Initial Setup being enabled at the same time. To work around this problem, you can use a Kickstart file during the installation and include a %post section to disable the version of Initial Setup which you do not want to run. To make sure that the graphical variant of Initial Setup runs after installation, use the following %post section: If you want to enable the text mode variant of Initial Setup, switch the enable and disable commands in order to disable the graphical service and enable text mode. Links to non-root file systems in /lib/ and /lib64/ are removed by ldconfig.service Red Hat Enterprise Linux 7.2 introduced ldconfig.service , which is run at an early stage of the boot process, before non-root file systems are mounted. When ldconfig.service is run, links in the /lib/ and /lib64/ directories are removed if they point to file systems which are not yet mounted. To work around this problem, disable ldconfig.service with the command systemctl mask ldconfig , so these symbolic links are no longer removed, and the system boots as expected. Daemons using IPC terminate unexpectedly after update to Red Hat Enterprise Linux 7.2 A new systemd feature was introduced in Red Hat Enterprise Linux 7.2: cleanup of all allocated inter-process communication (IPC) resources with the last session a user finishes. A session can be an administrative cron job or an interactive session. This behavior can cause daemons running under the same user, and using the same resources, to terminate unexpectedly.
To work around this problem, edit the file /etc/systemd/logind.conf and add the following line: Then, execute the following command, so that the change is put into effect: After performing these steps, daemons no longer crash in the described situation.
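The following minimal sketches illustrate three of the workarounds described earlier in this chapter. The package names are real, but the kernel count, domain, administrator account, and device name are assumptions that you should adapt to your system:
# Free space on /boot by keeping only the two most recent kernels
# (package-cleanup is provided by the yum-utils package)
yum install -y yum-utils
package-cleanup --oldkernels --count=2

# Join a domain from a Kickstart %post section instead of using the realm command
# directly in the Kickstart file (example.com and the admin account are hypothetical;
# supply the password in whatever non-interactive way your environment supports)
%post --log=/root/ks-post.log
realm join --user=admin example.com
%end

# Reformat an LDL-formatted DASD as CDL before launching the installer
# (this destroys all data on the device; /dev/dasdb is an example name)
dasdfmt -b 4096 -d cdl -y /dev/dasdb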
[ "%post systemctl disable initial-setup-text.service systemctl enable initial-setup-graphical.service %end", "RemoveIPC=no", "systemctl restart systemd-logind.service" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/known-issues-installation_and_booting
Chapter 95. ExternalConfigurationEnvVarSource schema reference
Chapter 95. ExternalConfigurationEnvVarSource schema reference Used in: ExternalConfigurationEnv Property Property type Description secretKeyRef SecretKeySelector Reference to a key in a Secret. configMapKeyRef ConfigMapKeySelector Reference to a key in a ConfigMap.
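For context, this type typically appears inside the externalConfiguration section of a KafkaConnect resource, under the env entries. The following fragment is a sketch; the Secret, ConfigMap, key, and environment variable names are assumptions:
externalConfiguration:
  env:
    # Environment variable populated from a key in a Secret
    - name: MY_SECRET_VALUE
      valueFrom:
        secretKeyRef:
          name: my-connect-secret
          key: password
    # Environment variable populated from a key in a ConfigMap
    - name: MY_LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: my-connect-config
          key: log-level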
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-externalconfigurationenvvarsource-reference
Chapter 13. Kafka Bridge
Chapter 13. Kafka Bridge This chapter provides an overview of the AMQ Streams Kafka Bridge on Red Hat Enterprise Linux and helps you get started using its REST API to interact with AMQ Streams. To try out the Kafka Bridge in your local environment, see the Section 13.2, "Kafka Bridge quickstart" later in this chapter. Additional resources To view the API documentation, including example requests and responses, see the Kafka Bridge API reference . To configure the Kafka Bridge for distributed tracing, see Section 16.4, "Enabling tracing for the Kafka Bridge" . 13.1. Kafka Bridge overview The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a web API connection to AMQ Streams, without the need for client applications to interpret the Kafka protocol. The API has two main resources-- consumers and topics --that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka. HTTP requests The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to: Send messages to a topic. Retrieve messages from topics. Retrieve a list of partitions for a topic. Create and delete consumers. Subscribe consumers to topics, so that they start receiving messages from those topics. Retrieve a list of topics that a consumer is subscribed to. Unsubscribe consumers from topics. Assign partitions to consumers. Commit a list of consumer offsets. Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position. The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats. Clients can produce and consume messages without the requirement to use the native Kafka protocol. Similar to an AMQ Streams installation, you can download the Kafka Bridge files for installation on Red Hat Enterprise Linux. See Section 13.1.5, "Downloading a Kafka Bridge archive" . For more information on configuring the host and port for the KafkaBridge resource, see Section 13.1.6, "Configuring Kafka Bridge properties" . 13.1.1. Authentication and encryption Authentication and encryption between HTTP clients and the Kafka Bridge is not yet supported. This means that requests sent from clients to the Kafka Bridge are: Not encrypted, and must use HTTP rather than HTTPS Sent without authentication You can configure TLS or SASL-based authentication between the Kafka Bridge and your Kafka cluster. You configure the Kafka Bridge for authentication through its properties file . 13.1.2. Requests to the Kafka Bridge Specify data formats and HTTP headers to ensure valid requests are submitted to the Kafka Bridge. API request and response bodies are always encoded as JSON. 13.1.2.1. Content Type headers A Content-Type header must be submitted for all requests. The only exception is when the POST request body is empty, where adding a Content-Type header will cause the request to fail. Consumer operations ( /consumers endpoints) and producer operations ( /topics endpoints) require different Content-Type headers. 
Content-Type headers for consumer operations Regardless of the embedded data format , POST requests for consumer operations must provide the following Content-Type header if the request body contains data: Content-Type: application/vnd.kafka.v2+json Content-Type headers for producer operations When performing producer operations, POST requests must provide Content-Type headers specifying the embedded data format of the messages produced. This can be either json or binary . Table 13.1. Content-Type headers for data formats Embedded data format Content-Type header JSON Content-Type: application/vnd.kafka.json.v2+json Binary Content-Type: application/vnd.kafka.binary.v2+json The embedded data format is set per consumer, as described in the section. The Content-Type must not be set if the POST request has an empty body. An empty body can be used to create a consumer with the default values. 13.1.2.2. Embedded data format The embedded data format is the format of the Kafka messages that are transmitted, over HTTP, from a producer to a consumer using the Kafka Bridge. Two embedded data formats are supported: JSON or binary. When creating a consumer using the /consumers/groupid endpoint, the POST request body must specify an embedded data format of either JSON or binary. This is specified in the format field in the request body, for example: { "name": "my-consumer", "format": "binary", 1 ... } 1 A binary embedded data format. If an embedded data format for the consumer is not specified, then a binary format is set. The embedded data format specified when creating a consumer must match the data format of the Kafka messages it will consume. If you choose to specify a binary embedded data format, subsequent producer requests must provide the binary data in the request body as Base64-encoded strings. For example, when sending messages by making POST requests to the /topics/ topicname endpoint, the value must be encoded in Base64: { "records": [ { "key": "my-key", "value": "ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ=" }, ] } Producer requests must also provide a Content-Type header that corresponds to the embedded data format, for example, Content-Type: application/vnd.kafka.binary.v2+json . 13.1.2.3. Message format When sending messages using the /topics endpoint, you enter the message payload in the request body, in the records parameter. The records parameter can contain any of these optional fields: Message key Message value Destination partition Message headers Example POST request to /topics curl -X POST \ http://localhost:8080/topics/my-topic \ -H 'content-type: application/vnd.kafka.json.v2+json' \ -d '{ "records": [ { "key": "my-key", "value": "sales-lead-0001" "partition": 2 "headers": [ { "key": "key1", "value": "QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==" 1 } ] }, ] }' 1 The header value in binary format and encoded as Base64. 13.1.2.4. Accept headers After creating a consumer, all subsequent GET requests must provide an Accept header in the following format: Accept: application/vnd.kafka. embedded-data-format .v2+json The embedded-data-format is either json or binary . For example, when retrieving records for a subscribed consumer using an embedded data format of JSON, include this Accept header: Accept: application/vnd.kafka.json.v2+json 13.1.3. Configuring loggers for the Kafka Bridge The AMQ Streams Kafka bridge allows you to set a different log level for each operation that is defined by the related OpenAPI specification. 
Each operation has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to produce more or less fine-grained logging information about the incoming and outgoing HTTP requests. Loggers are defined in the log4j.properties file, which has the following default configuration for healthy and ready endpoints: The log level of all other operations is set to INFO by default. Loggers are formatted as follows: Where <operation-id> is the identifier of the specific operation. Following is the list of operations defined by the OpenAPI specification: createConsumer deleteConsumer subscribe unsubscribe poll assign commit send sendToPartition seekToBeginning seekToEnd seek healthy ready openapi 13.1.4. Kafka Bridge API resources For the full list of REST API endpoints and descriptions, including example requests and responses, see the Kafka Bridge API reference . 13.1.5. Downloading a Kafka Bridge archive A zipped distribution of the AMQ Streams Kafka Bridge is available for download from the Red Hat website. Procedure Download the latest version of the Red Hat AMQ Streams Kafka Bridge archive from the Customer Portal . 13.1.6. Configuring Kafka Bridge properties This procedure describes how to configure the Kafka and HTTP connection properties used by the AMQ Streams Kafka Bridge. You configure the Kafka Bridge, as any other Kafka client, using appropriate prefixes for Kafka-related properties. kafka. for general configuration that applies to producers and consumers, such as server connection and security. kafka.consumer. for consumer-specific configuration passed only to the consumer. kafka.producer. for producer-specific configuration passed only to the producer. As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is a HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP methods to access them. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster . Prerequisites AMQ Streams is installed on the host The Kafka Bridge installation archive is downloaded Procedure Edit the application.properties file provided with the AMQ Streams Kafka Bridge installation archive. Use the properties file to specify Kafka and HTTP-related properties, and to enable distributed tracing. Configure standard Kafka-related properties, including properties specific to the Kafka consumers and producers. Use: kafka.bootstrap.servers to define the host/port connections to the Kafka cluster kafka.producer.acks to provide acknowledgments to the HTTP client kafka.consumer.auto.offset.reset to determine how to manage reset of the offset in Kafka For more information on configuration of Kafka properties, see the Apache Kafka website Configure HTTP-related properties to enable HTTP access to the Kafka cluster. For example: http.enabled=true http.host=0.0.0.0 http.port=8080 1 http.cors.enabled=true 2 http.cors.allowedOrigins=https://strimzi.io 3 http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH 4 1 The default HTTP configuration for the Kafka Bridge to listen on port 8080. 2 Set to true to enable CORS. 3 Comma-separated list of allowed CORS origins. You can use a URL or a Java regular expression. 
4 Comma-separated list of allowed HTTP methods for CORS. Enable or disable distributed tracing. bridge.tracing=jaeger Remove code comments from the property to enable distributed tracing Additional resources Chapter 16, Distributed tracing Section 16.4, "Enabling tracing for the Kafka Bridge" 13.1.7. Installing the Kafka Bridge Follow this procedure to install the AMQ Streams Kafka Bridge on Red Hat Enterprise Linux. Prerequisites AMQ Streams is installed on the host The Kafka Bridge installation archive is downloaded The Kafka Bridge configuration properties are set Procedure If you have not already done so, unzip the AMQ Streams Kafka Bridge installation archive to any directory. Run the Kafka Bridge script using the configuration properties as a parameter: For example: ./bin/kafka_bridge_run.sh --config-file=<path>/configfile.properties Check to see that the installation was successful in the log. HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092 13.2. Kafka Bridge quickstart Use this quickstart to try out the AMQ Streams Kafka Bridge on Red Hat Enterprise Linux. You will learn how to: Install the Kafka Bridge Produce messages to topics and partitions in your Kafka cluster Create a Kafka Bridge consumer Perform basic consumer operations, such as subscribing the consumer to topics and retrieving the messages that you produced In this quickstart, HTTP requests are formatted as curl commands that you can copy and paste to your terminal. Ensure you have the prerequisites and then follow the tasks in the order provided in this chapter. About data formats In this quickstart, you will produce and consume messages in JSON format, not binary. For more information on the data formats and HTTP headers used in the example requests, see Section 13.1.2, "Requests to the Kafka Bridge" . Prerequisites for the quickstart AMQ Streams is installed on the host A single node AMQ Streams cluster is running The Kafka Bridge installation archive is downloaded 13.2.1. Deploying the Kafka Bridge locally Deploy an instance of the AMQ Streams Kafka Bridge to the host. Use the application.properties file provided with the installation archive to apply the default configuration settings. Procedure Open the application.properties file and check that the default HTTP-related settings are defined: http.enabled=true http.host=0.0.0.0 http.port=8080 This configures the Kafka Bridge to listen for requests on port 8080. Run the Kafka Bridge script using the configuration properties as a parameter: ./bin/kafka_bridge_run.sh --config-file=<path>/application.properties What to do Produce messages to topics and partitions . 13.2.2. Producing messages to topics and partitions Produce messages to a topic in JSON format by using the topics endpoint. You can specify destination partitions for messages in the request body, as shown below. The partitions endpoint provides an alternative method for specifying a single destination partition for all messages as a path parameter. Procedure Create a Kafka topic using the kafka-topics.sh utility: bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic bridge-quickstart-topic --partitions 3 --replication-factor 1 --config retention.ms=7200000 --config segment.bytes=1073741824 Specify three partitions.
Verify that the topic was created: bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic bridge-quickstart-topic Using the Kafka Bridge, produce three messages to the topic you created: curl -X POST \ http://localhost:8080/topics/bridge-quickstart-topic \ -H 'content-type: application/vnd.kafka.json.v2+json' \ -d '{ "records": [ { "key": "my-key", "value": "sales-lead-0001" }, { "value": "sales-lead-0002", "partition": 2 }, { "value": "sales-lead-0003" } ] }' sales-lead-0001 is sent to a partition based on the hash of the key. sales-lead-0002 is sent directly to partition 2. sales-lead-0003 is sent to a partition in the bridge-quickstart-topic topic using a round-robin method. If the request is successful, the Kafka Bridge returns an offsets array, along with a 200 (OK) code and a content-type header of application/vnd.kafka.v2+json . For each message, the offsets array describes: The partition that the message was sent to The current message offset of the partition Example response #... { "offsets":[ { "partition":0, "offset":0 }, { "partition":2, "offset":0 }, { "partition":0, "offset":1 } ] } What to do After producing messages to topics and partitions, create a Kafka Bridge consumer . Additional resources POST /topics/{topicname} in the API reference documentation. POST /topics/{topicname}/partitions/{partitionid} in the API reference documentation. 13.2.3. Creating a Kafka Bridge consumer Before you can perform any consumer operations on the Kafka cluster, you must first create a consumer by using the consumers endpoint. The consumer is referred to as a Kafka Bridge consumer . Procedure Create a Kafka Bridge consumer in a new consumer group named bridge-quickstart-consumer-group : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "name": "bridge-quickstart-consumer", "auto.offset.reset": "earliest", "format": "json", "enable.auto.commit": false, "fetch.min.bytes": 512, "consumer.request.timeout.ms": 30000 }' The consumer is named bridge-quickstart-consumer and the embedded data format is set as json . The consumer will not commit offsets to the log automatically because the enable.auto.commit setting is false . You will commit the offsets manually later in this quickstart. Note The Kafka Bridge generates a random consumer name if you do not specify a consumer name in the request body. If the request is successful, the Kafka Bridge returns the consumer ID ( instance_id ) and base URL ( base_uri ) in the response body, along with a 200 (OK) code. Example response #... { "instance_id": "bridge-quickstart-consumer", "base_uri":"http://<bridge-name>-bridge-service:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer" } Copy the base URL ( base_uri ) to use in the other consumer operations in this quickstart. What to do Now that you have created a Kafka Bridge consumer, you can subscribe it to topics . Additional resources POST /consumers/{groupid} in the API reference documentation. 13.2.4. Subscribing a Kafka Bridge consumer to topics Subscribe the Kafka Bridge consumer to one or more topics by using the subscription endpoint. Once subscribed, the consumer starts receiving all messages that are produced to the topic. 
Procedure Subscribe the consumer to the bridge-quickstart-topic topic that you created earlier, in Producing messages to topics and partitions : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "topics": [ "bridge-quickstart-topic" ] }' The topics array can contain a single topic (as shown above) or multiple topics. If you want to subscribe the consumer to multiple topics that match a regular expression, you can use the topic_pattern string instead of the topics array. If the request is successful, the Kafka Bridge returns a 204 No Content code only. What to do After subscribing a Kafka Bridge consumer to topics, you can retrieve messages from the consumer . Additional resources POST /consumers/{groupid}/instances/{name}/subscription in the API reference documentation. 13.2.5. Retrieving the latest messages from a Kafka Bridge consumer Retrieve the latest messages from the Kafka Bridge consumer by requesting data from the records endpoint. In production, HTTP clients can call this endpoint repeatedly (in a loop). Procedure Produce additional messages to the Kafka Bridge consumer, as described in Producing messages to topics and partitions . Submit a GET request to the records endpoint: curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records \ -H 'accept: application/vnd.kafka.json.v2+json' After creating and subscribing to a Kafka Bridge consumer, a first GET request will return an empty response because the poll operation triggers a rebalancing process to assign partitions. Repeat step two to retrieve messages from the Kafka Bridge consumer. The Kafka Bridge returns an array of messages - describing the topic name, key, value, partition, and offset - in the response body, along with a 200 (OK) code. Messages are retrieved from the latest offset by default. HTTP/1.1 200 OK content-type: application/vnd.kafka.json.v2+json #... [ { "topic":"bridge-quickstart-topic", "key":"my-key", "value":"sales-lead-0001", "partition":0, "offset":0 }, { "topic":"bridge-quickstart-topic", "key":null, "value":"sales-lead-0003", "partition":0, "offset":1 }, #... Note If an empty response is returned, produce more records to the consumer as described in Producing messages to topics and partitions , and then try retrieving messages again. What to do After retrieving messages from a Kafka Bridge consumer, try committing offsets to the log . Additional resources GET /consumers/{groupid}/instances/{name}/records in the API reference documentation. 13.2.6. Committing offsets to the log Use the offsets endpoint to manually commit offsets to the log for all messages received by the Kafka Bridge consumer. This is required because the Kafka Bridge consumer that you created earlier, in Creating a Kafka Bridge consumer , was configured with the enable.auto.commit setting as false . Procedure Commit offsets to the log for the bridge-quickstart-consumer : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets Because no request body is submitted, offsets are committed for all the records that have been received by the consumer. Alternatively, the request body can contain an array ( OffsetCommitSeekList ) that specifies the topics and partitions that you want to commit offsets for.
If the request is successful, the Kafka Bridge returns a 204 No Content code only. What to do After committing offsets to the log, try out the endpoints for seeking to offsets . Additional resources POST /consumers/{groupid}/instances/{name}/offsets in the API reference documentation. 13.2.7. Seeking to offsets for a partition Use the positions endpoints to configure the Kafka Bridge consumer to retrieve messages for a partition from a specific offset, and then from the latest offset. This is referred to in Apache Kafka as a seek operation. Procedure Seek to a specific offset for partition 0 of the bridge-quickstart-topic topic: curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "offsets": [ { "topic": "bridge-quickstart-topic", "partition": 0, "offset": 2 } ] }' If the request is successful, the Kafka Bridge returns a 204 No Content code only. Submit a GET request to the records endpoint: curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records \ -H 'accept: application/vnd.kafka.json.v2+json' The Kafka Bridge returns messages from the offset that you seeked to. Restore the default message retrieval behavior by seeking to the last offset for the same partition. This time, use the positions/end endpoint. curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions/end \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "partitions": [ { "topic": "bridge-quickstart-topic", "partition": 0 } ] }' If the request is successful, the Kafka Bridge returns another 204 No Content code. Note You can also use the positions/beginning endpoint to seek to the first offset for one or more partitions. What to do In this quickstart, you have used the AMQ Streams Kafka Bridge to perform several common operations on a Kafka cluster. You can now delete the Kafka Bridge consumer that you created earlier. Additional resources POST /consumers/{groupid}/instances/{name}/positions in the API reference documentation. POST /consumers/{groupid}/instances/{name}/positions/beginning in the API reference documentation. POST /consumers/{groupid}/instances/{name}/positions/end in the API reference documentation. 13.2.8. Deleting a Kafka Bridge consumer Finally, delete the Kafka Bridge consumer that you used throughout this quickstart. Procedure Delete the Kafka Bridge consumer by sending a DELETE request to the instances endpoint. curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer If the request is successful, the Kafka Bridge returns a 204 No Content code only. Additional resources DELETE /consumers/{groupid}/instances/{name} in the API reference documentation.
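As a final check that the bridge itself is still running after you delete the consumer, you can call the health endpoints that correspond to the healthy and ready operations listed in the logger configuration. This is a sketch and assumes the bridge is still listening on port 8080:
# Both requests return a successful status code while the bridge is up and ready
curl -X GET http://localhost:8080/healthy
curl -X GET http://localhost:8080/ready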
[ "Content-Type: application/vnd.kafka.v2+json", "{ \"name\": \"my-consumer\", \"format\": \"binary\", 1 }", "{ \"records\": [ { \"key\": \"my-key\", \"value\": \"ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ=\" }, ] }", "curl -X POST http://localhost:8080/topics/my-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\" \"partition\": 2 \"headers\": [ { \"key\": \"key1\", \"value\": \"QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==\" 1 } ] }, ] }'", "Accept: application/vnd.kafka. embedded-data-format .v2+json", "Accept: application/vnd.kafka.json.v2+json", "log4j.logger.http.openapi.operation.healthy=WARN, out log4j.additivity.http.openapi.operation.healthy=false log4j.logger.http.openapi.operation.ready=WARN, out log4j.additivity.http.openapi.operation.ready=false", "log4j.logger.http.openapi.operation.<operation-id>", "http.enabled=true http.host=0.0.0.0 http.port=8080 1 http.cors.enabled=true 2 http.cors.allowedOrigins=https://strimzi.io 3 http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH 4", "bridge.tracing=jaeger", "./bin/kafka_bridge_run.sh --config-file=_path_/configfile.properties", "HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092", "http.enabled=true http.host=0.0.0.0 http.port=8080", "./bin/kafka_bridge_run.sh --config-file=<path>/application.properties", "bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic bridge-quickstart-topic --partitions 3 --replication-factor 1 --config retention.ms=7200000 --config segment.bytes=1073741824", "bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic bridge-quickstart-topic", "curl -X POST http://localhost:8080/topics/bridge-quickstart-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\" }, { \"value\": \"sales-lead-0002\", \"partition\": 2 }, { \"value\": \"sales-lead-0003\" } ] }'", "# { \"offsets\":[ { \"partition\":0, \"offset\":0 }, { \"partition\":2, \"offset\":0 }, { \"partition\":0, \"offset\":1 } ] }", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"name\": \"bridge-quickstart-consumer\", \"auto.offset.reset\": \"earliest\", \"format\": \"json\", \"enable.auto.commit\": false, \"fetch.min.bytes\": 512, \"consumer.request.timeout.ms\": 30000 }'", "# { \"instance_id\": \"bridge-quickstart-consumer\", \"base_uri\":\"http://<bridge-name>-bridge-service:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer\" }", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"topics\": [ \"bridge-quickstart-topic\" ] }'", "curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records -H 'accept: application/vnd.kafka.json.v2+json'", "HTTP/1.1 200 OK content-type: application/vnd.kafka.json.v2+json # [ { \"topic\":\"bridge-quickstart-topic\", \"key\":\"my-key\", \"value\":\"sales-lead-0001\", \"partition\":0, \"offset\":0 }, { \"topic\":\"bridge-quickstart-topic\", \"key\":null, \"value\":\"sales-lead-0003\", \"partition\":0, \"offset\":1 }, #", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets", "curl -X POST 
http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"offsets\": [ { \"topic\": \"bridge-quickstart-topic\", \"partition\": 0, \"offset\": 2 } ] }'", "curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records -H 'accept: application/vnd.kafka.json.v2+json'", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions/end -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"partitions\": [ { \"topic\": \"bridge-quickstart-topic\", \"partition\": 0 } ] }'", "curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/kafka-bridge-concepts-str
17.4. DNS and DHCP
17.4. DNS and DHCP IP information can be assigned to guests via DHCP. A pool of addresses can be assigned to a virtual network switch for this purpose. Libvirt uses the dnsmasq program for this. An instance of dnsmasq is automatically configured and started by libvirt for each virtual network switch that needs it. Figure 17.4. Virtual network switch running dnsmasq
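To see this behavior on a host, you can list the virtual networks that libvirt manages, dump a network's XML definition to view its DHCP address range, and confirm that a dnsmasq process is running for it. The commands below are a sketch that assumes the default libvirt network name, default ; substitute your own network name as needed.

```
# List the virtual network switches that libvirt manages.
virsh net-list --all

# Show the XML definition of the "default" network, including its
# <dhcp><range .../></dhcp> pool of addresses (assumed network name).
virsh net-dumpxml default

# Confirm that libvirt started a dnsmasq instance for the network.
ps -ef | grep [d]nsmasq
```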
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-networking_protocols-dns_and_dhcp
Chapter 2. Requirements
Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see https://access.redhat.com/solutions/725243 . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see https://access.redhat.com/ecosystem/#certifiedHardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core CPU. A quad core CPU or multiple dual core CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. Virtual machine consoles are accessed through the SPICE, VNC, or RDP (Windows only) protocols. The QXL graphical driver can be installed in the guest operating system for improved/enhanced SPICE functionalities. SPICE currently supports a maximum resolution of 2560x1600 pixels. Supported QXL drivers are available on Red Hat Enterprise Linux, Windows XP, and Windows 7. SPICE support is divided into tiers: Tier 1: Operating systems on which Remote Viewer has been fully tested and is supported. Tier 2: Operating systems on which Remote Viewer is partially tested and is likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with remote-viewer on this tier. Table 2.3. 
Client Operating System SPICE Support Support Tier Operating System Tier 1 Red Hat Enterprise Linux 7.2 and later Microsoft Windows 7 Tier 2 Microsoft Windows 8 Microsoft Windows 10 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 7 that has been updated to the latest minor release. Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see https://access.redhat.com/solutions/725243 . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see https://access.redhat.com/ecosystem/#certifiedHardware . For more information on the requirements and limitations that apply to guests see https://access.redhat.com/articles/rhel-limits and https://access.redhat.com/articles/906543 . 2.2.1. CPU Requirements All CPUs must have support for the Intel(R) 64 or AMD64 CPU extensions, and the AMD-VTM or Intel VT(R) hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere Sandybridge Haswell Haswell-noTSX Broadwell Broadwell-noTSX Skylake (client) Skylake (server) IBM POWER8 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. The maximum supported RAM per VM in Red Hat Virtualization Host is 4 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. 
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity restores. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, Red Hat recommends using the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 15 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB swap - 1 GB (for the recommended swap size, see https://access.redhat.com/solutions/15244 ) Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 55 GB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 5 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Red Hat recommends that each host have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. Red Hat recommends that all PCIe switches and bridges between the PCIe device and the root port support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 7 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. 
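As a quick sanity check on an installed host, you can verify that the virtualization extensions and IOMMU support described above are active. This is a sketch only; exact kernel messages vary by hardware, and these checks complement, rather than replace, the vendor documentation mentioned next.

```
# Confirm the CPU exposes hardware virtualization extensions (Intel VT or AMD-V)
# together with the No eXecute flag.
grep -E 'svm|vmx' /proc/cpuinfo | grep nx

# Look for IOMMU initialization messages in the kernel log
# (DMAR on Intel VT-d systems, AMD-Vi on AMD systems).
dmesg | grep -i -e DMAR -e IOMMU

# List IOMMU groups; devices in the same group can only be assigned together.
find /sys/kernel/iommu_groups/ -type l
```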
Check vendor specification and datasheets to confirm that your hardware meets these requirements. The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Predefined mdev_type set to correspond with one of the mdev types supported by the device vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking Requirements 2.3.1. General Requirements Red Hat Virtualization requires IPv6 to remain enabled on the computer or virtual machine where you are running the Manager (also called "the Manager machine"). Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Firewall Requirements for DNS, NTP, IPMI Fencing, and Metrics Store The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Red Hat strongly recommends using DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... ) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. Metrics Store, Kibana, and ElasticSearch For Metrics Store, Kibana, and ElasticSearch, see Network Configuration for Metrics Store virtual machines . 2.3.3. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically, but this overwrites any pre-existing firewall configuration if you are using iptables . 
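Before running engine-setup , it can therefore be worth recording the current firewall state so that site-specific rules are not lost. The following is a sketch only; the backup file paths are arbitrary, and which commands apply depends on whether the host uses firewalld or iptables.

```
# Check which firewall backend is active.
firewall-cmd --state
systemctl is-active firewalld

# Save the current firewalld runtime configuration for reference.
firewall-cmd --list-all-zones > /root/firewalld-zones-before-setup.txt

# If the host uses iptables directly, dump the current rules to an
# arbitrary backup file before engine-setup rewrites them.
iptables-save > /root/iptables-before-setup.rules
```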
If you want to keep the existing firewall configuration, you must manually insert the firewall rules required by the Manager. The engine-setup command saves a list of the iptables rules required in the /etc/ovirt-engine/iptables.example file. If you are using firewalld , engine-setup does not overwrite the existing configuration. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. If the websocket proxy is running on a different host, however, this port is not used. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager (ImageIO Proxy server) Required for communication with the ImageIO Proxy ( ovirt-imageio-proxy ). Yes M8 6442 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . Because they both run on the same host, their communication is not visible to the network. 
By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.4. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration. To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see https://access.redhat.com/solutions/2772331 . Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. 
Yes H11 54322 TCP Red Hat Virtualization Manager (ImageIO Proxy server) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ImageIO daemon ( ovirt-imageio-daemon ). Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. No Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, such as Red Hat CloudForms, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.6. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled .
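For example, on a remote database server you might open the documented PostgreSQL port with firewalld . This is a sketch: it assumes the server uses firewalld with the default zone, and you should restrict access to the Manager and Data Warehouse hosts in line with the table above.

```
# Open the default PostgreSQL port (5432) on a remote database server.
# Assumes firewalld and the default zone; add rich rules or a dedicated
# zone to limit access to the Manager and Data Warehouse hosts.
firewall-cmd --permanent --add-port=5432/tcp
firewall-cmd --permanent --add-port=5432/udp
firewall-cmd --reload

# Verify the port is now listed.
firewall-cmd --list-ports
```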
[ "grep -E 'svm|vmx' /proc/cpuinfo | grep nx" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/RHV_requirements
Chapter 4. Snapshot management
Chapter 4. Snapshot management As a storage administrator, being familiar with Ceph's snapshotting feature can help you manage the snapshots and clones of images stored in the Red Hat Ceph Storage cluster. 4.1. Prerequisites A running Red Hat Ceph Storage cluster. 4.2. Ceph block device snapshots A snapshot is a read-only copy of the state of an image at a particular point in time. One of the advanced features of Ceph block devices is that you can create snapshots of the images to retain a history of an image's state. Ceph also supports snapshot layering, which allows you to clone images quickly and easily, for example a virtual machine image. Ceph supports block device snapshots using the rbd command and many higher level interfaces, including QEMU , libvirt , OpenStack and CloudStack. Note If a snapshot is taken while I/O is occurring, then the snapshot might not get the exact or latest data of the image and the snapshot might have to be cloned to a new image to be mountable. Red Hat recommends stopping I/O before taking a snapshot of an image. If the image contains a filesystem, the filesystem must be in a consistent state before taking a snapshot. To stop I/O you can use fsfreeze command. For virtual machines, the qemu-guest-agent can be used to automatically freeze filesystems when creating a snapshot. Additional Resources See the fsfreeze(8) man page for more details. 4.3. The Ceph user and keyring When cephx is enabled, you must specify a user name or ID and a path to the keyring containing the corresponding key for the user. Note cephx is enabled by default. You might also add the CEPH_ARGS environment variable to avoid re-entry of the following parameters: Syntax Example Tip Add the user and secret to the CEPH_ARGS environment variable so that you do not need to enter them each time. 4.4. Creating a block device snapshot Create a snapshot of a Ceph block device. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the snap create option, the pool name and the image name: Syntax Example 4.5. Listing the block device snapshots List the block device snapshots. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the pool name and the image name: Syntax Example 4.6. Rolling back a block device snapshot Rollback a block device snapshot. Note Rolling back an image to a snapshot means overwriting the current version of the image with data from a snapshot. The time it takes to execute a rollback increases with the size of the image. It is faster to clone from a snapshot than to rollback an image to a snapshot, and it is the preferred method of returning to a pre-existing state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the snap rollback option, the pool name, the image name and the snap name: Syntax Example 4.7. Deleting a block device snapshot Delete a snapshot for Ceph block devices. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the snap rm option, the pool name, the image name and the snapshot name: Syntax Example Important If an image has any clones, the cloned images retain reference to the parent image snapshot. To delete the parent image snapshot, you must flatten the child images first. Note Ceph OSD daemons delete data asynchronously, so deleting a snapshot does not free up the disk space immediately. 
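Putting the preceding procedures together, a typical snapshot lifecycle looks like the following sketch. The pool1 , image1 , and snap1 names are hypothetical placeholders; substitute your own pool, image, and snapshot names.

```
# Hypothetical pool/image/snapshot names; substitute your own.
# Quiesce I/O on the image (for example with fsfreeze) before snapshotting.
rbd snap create pool1/image1@snap1

# List the snapshots of the image.
rbd snap ls pool1/image1

# Roll the image back to the snapshot (overwrites current image data).
rbd snap rollback pool1/image1@snap1

# Remove the snapshot when it is no longer needed; space is freed asynchronously.
rbd snap rm pool1/image1@snap1
```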
Additional Resources See the Flattening cloned images in the Red Hat Ceph Storage Block Device Guide for details. 4.8. Purging the block device snapshots Purge block device snapshots. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the snap purge option and the image name: Syntax Example 4.9. Renaming a block device snapshot Rename a block device snapshot. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To rename a snapshot: Syntax Example This renames the snap1 snapshot of the dataset image on the data pool to snap2 . Execute the rbd help snap rename command to display additional details on renaming snapshots. 4.10. Ceph block device layering Ceph supports the ability to create many copy-on-write (COW) or copy-on-read (COR) clones of a block device snapshot. Snapshot layering enables Ceph block device clients to create images very quickly. For example, you might create a block device image with a Linux VM written to it. Then, snapshot the image, protect the snapshot, and create as many clones as you like. A snapshot is read-only, so cloning a snapshot simplifies semantics, making it possible to create clones rapidly. Note The terms parent and child mean a Ceph block device snapshot, parent, and the corresponding image cloned from the snapshot, child. These terms are important for the command line usage below. Each cloned image, the child, stores a reference to its parent image, which enables the cloned image to open the parent snapshot and read it. This reference is removed when the clone is flattened, that is, when information from the snapshot is completely copied to the clone. A clone of a snapshot behaves exactly like any other Ceph block device image. You can read from, write to, clone, and resize the cloned images. There are no special restrictions with cloned images. However, the clone of a snapshot refers to the snapshot, so you MUST protect the snapshot before you clone it. A clone of a snapshot can be a copy-on-write (COW) or copy-on-read (COR) clone. Copy-on-write (COW) is always enabled for clones while copy-on-read (COR) has to be enabled explicitly. Copy-on-write (COW) copies data from the parent to the clone when it writes to an unallocated object within the clone. Copy-on-read (COR) copies data from the parent to the clone when it reads from an unallocated object within the clone. Reading data from a clone will only read data from the parent if the object does not yet exist in the clone. Rados block device breaks up large images into multiple objects. The default is set to 4 MB and all copy-on-write (COW) and copy-on-read (COR) operations occur on a full object, that is writing 1 byte to a clone will result in a 4 MB object being read from the parent and written to the clone if the destination object does not already exist in the clone from a COW/COR operation. Whether or not copy-on-read (COR) is enabled, any reads that cannot be satisfied by reading an underlying object from the clone will be rerouted to the parent. Since there is practically no limit to the number of parents, meaning that you can clone a clone, this reroute continues until an object is found or you hit the base parent image.
If copy-on-read (COR) is enabled, any reads that fail to be satisfied directly from the clone result in a full object read from the parent and writing that data to the clone so that future reads of the same extent can be satisfied from the clone itself without the need of reading from the parent. This is essentially an on-demand, object-by-object flatten operation. This is especially useful when the clone is on a high-latency connection away from its parent, that is, the parent is in a different pool, in another geographical location. Copy-on-read (COR) reduces the amortized latency of reads. The first few reads will have high latency because it will result in extra data being read from the parent, for example, you read 1 byte from the clone but now 4 MB has to be read from the parent and written to the clone, but all future reads will be served from the clone itself. To create copy-on-read (COR) clones from a snapshot, you have to explicitly enable this feature by adding rbd_clone_copy_on_read = true under [global] or [client] section in the ceph.conf file. Additional Resources For more information on flattening , see the Flattening cloned images section in the Red Hat Ceph Storage Block Device Guide . 4.11. Protecting a block device snapshot Clones access the parent snapshots. All clones would break if a user inadvertently deleted the parent snapshot. To prevent data loss, by default, you MUST protect the snapshot before you can clone it. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify POOL_NAME , IMAGE_NAME , and SNAP_SHOT_NAME in the following command: Syntax Example Note You cannot delete a protected snapshot. 4.12. Cloning a block device snapshot Clone a block device snapshot to create a read or write child image of the snapshot within the same pool or in another pool. One use case would be to maintain read-only images and snapshots as templates in one pool, and writable clones in another pool. Important By default, you must protect the snapshot before you can clone it. To avoid having to protect the snapshot before you clone it, set ceph osd set-require-min-compat-client mimic . You can set it to higher versions than mimic as well. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To clone a snapshot, you need to specify the parent pool, snapshot, child pool and image name: Syntax Example 4.13. Unprotecting a block device snapshot Before you can delete a snapshot, you must unprotect it first. Additionally, you may NOT delete snapshots that have references from clones. You must flatten each clone of a snapshot, before you can delete the snapshot. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Run the following commands: Syntax Example 4.14. Listing the children of a snapshot List the children of a snapshot. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To list the children of a snapshot, execute the following: Syntax Example 4.15. Flattening cloned images Cloned images retain a reference to the parent snapshot. When you remove the reference from the child clone to the parent snapshot, you effectively "flatten" the image by copying the information from the snapshot to the clone. The time it takes to flatten a clone increases with the size of the snapshot. Because a flattened image contains all the information from the snapshot, a flattened image will use more storage space than a layered clone.
Note If the deep flatten feature is enabled on an image, the image clone is dissociated from its parent by default. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To delete a parent image snapshot associated with child images, you must flatten the child images first: Syntax Example
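The layering procedures above combine into the following end-to-end sketch: protect a snapshot, clone it, flatten the clone, and then unprotect and remove the parent snapshot. The pool1 , image1 , snap1 , and clone1 names are hypothetical placeholders; substitute your own names.

```
# Hypothetical names; substitute your own pool, image, snapshot, and clone.
# Protect the parent snapshot so that clones cannot be orphaned.
rbd snap protect pool1/image1@snap1

# Create a copy-on-write clone of the protected snapshot.
rbd clone pool1/image1@snap1 pool1/clone1

# Later, detach the clone from its parent by flattening it.
rbd flatten pool1/clone1

# With no remaining clone references, the snapshot can be unprotected and removed.
rbd snap unprotect pool1/image1@snap1
rbd snap rm pool1/image1@snap1
```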
[ "rbd --id USER_ID --keyring=/path/to/secret [commands] rbd --name USERNAME --keyring=/path/to/secret [commands]", "rbd --id admin --keyring=/etc/ceph/ceph.keyring [commands] rbd --name client.admin --keyring=/etc/ceph/ceph.keyring [commands]", "rbd --pool POOL_NAME snap create --snap SNAP_NAME IMAGE_NAME rbd snap create POOL_NAME / IMAGE_NAME @ SNAP_NAME", "rbd --pool rbd snap create --snap snapname foo rbd snap create rbd/foo@snapname", "rbd --pool POOL_NAME snap ls IMAGE_NAME rbd snap ls POOL_NAME / IMAGE_NAME", "rbd --pool rbd snap ls foo rbd snap ls rbd/foo", "rbd --pool POOL_NAME snap rollback --snap SNAP_NAME IMAGE_NAME rbd snap rollback POOL_NAME / IMAGE_NAME @ SNAP_NAME", "rbd --pool rbd snap rollback --snap snapname foo rbd snap rollback rbd/foo@snapname", "rbd --pool POOL_NAME snap rm --snap SNAP_NAME IMAGE_NAME rbd snap rm POOL_NAME -/ IMAGE_NAME @ SNAP_NAME", "rbd --pool rbd snap rm --snap snapname foo rbd snap rm rbd/foo@snapname", "rbd --pool POOL_NAME snap purge IMAGE_NAME rbd snap purge POOL_NAME / IMAGE_NAME", "rbd --pool rbd snap purge foo rbd snap purge rbd/foo", "rbd snap rename POOL_NAME / IMAGE_NAME @ ORIGINAL_SNAPSHOT_NAME POOL_NAME / IMAGE_NAME @ NEW_SNAPSHOT_NAME", "rbd snap rename data/dataset@snap1 data/dataset@snap2", "rbd --pool POOL_NAME snap protect --image IMAGE_NAME --snap SNAPSHOT_NAME rbd snap protect POOL_NAME / IMAGE_NAME @ SNAPSHOT_NAME", "rbd --pool rbd snap protect --image my-image --snap my-snapshot rbd snap protect rbd/my-image@my-snapshot", "rbd --pool POOL_NAME --image PARENT_IMAGE --snap SNAP_NAME --dest-pool POOL_NAME --dest CHILD_IMAGE_NAME rbd clone POOL_NAME / PARENT_IMAGE @ SNAP_NAME POOL_NAME / CHILD_IMAGE_NAME", "rbd --pool rbd --image my-image --snap my-snapshot --dest-pool rbd --dest new-image rbd clone rbd/my-image@my-snapshot rbd/new-image", "rbd --pool POOL_NAME snap unprotect --image IMAGE_NAME --snap SNAPSHOT_NAME rbd snap unprotect POOL_NAME / IMAGE_NAME @ SNAPSHOT_NAME", "rbd --pool rbd snap unprotect --image my-image --snap my-snapshot rbd snap unprotect rbd/my-image@my-snapshot", "rbd --pool POOL_NAME children --image IMAGE_NAME --snap SNAP_NAME rbd children POOL_NAME / IMAGE_NAME @ SNAPSHOT_NAME", "rbd --pool rbd children --image my-image --snap my-snapshot rbd children rbd/my-image@my-snapshot", "rbd --pool POOL_NAME flatten --image IMAGE_NAME rbd flatten POOL_NAME / IMAGE_NAME", "rbd --pool rbd flatten --image my-image rbd flatten rbd/my-image" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/block_device_guide/snapshot-management
Chapter 1. OpenShift Platform Plus overview
Chapter 1. OpenShift Platform Plus overview OpenShift Platform Plus is a single hybrid-cloud platform for enterprises. Use it to build, deploy, run, and manage intelligent applications securely for multiple infrastructures. It is based on Red Hat Enterprise Linux (RHEL), Kubernetes, and Red Hat OpenShift Container Platform and includes the following products: Red Hat Advanced Cluster Management for Kubernetes - Controls clusters and applications from a single console. Red Hat Advanced Cluster Security for Kubernetes - Provides information about cluster security, visibility management, and security compliance. Red Hat Quay - Stores, builds, and deploys container images. Red Hat OpenShift Data Foundation Essentials - Provides a permanent place for data storage when clusters start and stop for multiple environments. 1.1. OpenShift Platform Plus description and architecture OpenShift Platform Plus builds on the capabilities of OpenShift Container Platform with the following features: Multi-cluster security Complete management capabilities Integrated data management A global container registry OpenShift Platform Plus protects and manages applications for open hybrid cloud environments and application lifecycles. OpenShift Platform Plus supports these additional capabilities: Platform services Service mesh, serverless Builds, CI/CD pipelines GitOps, Distributed tracing Log management Cost management Vulnerability management Compliance Application services Languages and runtimes API management Integration Messaging Process automation Data services Databases and cache Data ingest and preparation Data analytics AI/ML Developer services Developer CLI/IDE Plug-ins and extensions Red Hat OpenShift Dev Spaces Red Hat OpenShift Local 1.2. Install OpenShift Platform Plus products To install OpenShift Platform Plus, you must install OpenShift Container Platform followed by Red Hat Advanced Cluster Management. Install the additional products by applying the Red Hat Advanced Cluster Management policy sets: Red Hat Quay, Red Hat OpenShift Data Foundation Essentials, and Red Hat Advanced Cluster Security. See the Red Hat OpenShift Platform Plus policy set for detailed information about installing the products. Additional resources Supported installation methods for different platforms Installing Red Hat Advanced Cluster Management 1.2.1. Installing OpenShift Platform Plus by using policy sets The OpenShift Container Platform installation uses two policy sets to install additional products. Note Edit the policyGenerator.yaml file to remove any products that you do not want to install. You can delete the product entries or comment out the lines. To install the policy sets using gitops, follow the steps in Deploying policies using gitops . Use the actual path to the policy set instead of the path in the example path policygenerator/policy-sets/stable/openshift-plus . Use the following procedure to install the policy sets by using CLI: Procedure Install the PolicyGenerator plugin by following the instructions in Installing and using the PolicyGenerator Kustomize plug-in . Clone the policy-collection repository: $ git clone https://github.com/stolostron/policy-collection Navigate to the policy set directory: $ cd policy-collection/policygenerator/policy-sets/stable/openshift-plus Generate and apply the policies by using the following command: $ kustomize build --enable-alpha-plugins | oc apply -f - The policy sets install the remaining products on the OpenShift Container Platform cluster. 1.3.
OpenShift Platform Plus product release notes The release note information for each product is accessible from the following list: OpenShift Container Platform Red Hat Advanced Cluster Management for Kubernetes Red Hat Quay Release Notes Red Hat Advanced Cluster Security for Kubernetes 4.4 Red Hat OpenShift Data Foundation 1.4. OpenShift Platform Plus product release compatibility matrix OpenShift Platform Plus product releases are based on the latest verified OpenShift Container Platform release. The verified release number for each OpenShift Platform Plus product is listed in this section. See OpenShift Operator Life Cycles for detailed information about additional supported versions. Note This table is updated after each product release is tested and verified with the latest OpenShift Container Platform release. Use the versions in the matrix for the optimal performance. OpenShift Container Platform Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Red Hat Quay Red Hat OpenShift Data Foundation Essentials 4.17 2.12 4.6 3.13 4.17 4.16 2.11 4.5 3.12 4.16 4.15 2.10 4.4 3.11 4.15 4.14 2.9 4.3 3.10 4.14 4.12 2.7 3.7.4 3.8 4.12 1.5. Get support for OpenShift Platform Plus Red Hat offers cluster administrator tools for gathering data, monitoring, and troubleshooting your cluster. If you need help with your OpenShift Platform Plus solution, log a case in the appropriate product by using its subscription name. See the Red Hat customer support portal to open a support case.
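When comparing an environment against the compatibility matrix above, or gathering details for a support case, it can help to list the installed Operator versions. The following sketch assumes cluster-admin access with the oc CLI; the namespaces in which each Operator is installed vary by environment.

```
# Show the OpenShift Container Platform version.
oc version

# List installed Operators (ClusterServiceVersions) across all namespaces.
# This includes Operators such as Advanced Cluster Management, Advanced
# Cluster Security, Quay, and OpenShift Data Foundation if they are installed.
oc get csv --all-namespaces
```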
[ "git clone https://github.com/stolostron/policy-collection", "cd policy-collection/policygenerator/policy-sets/stable/openshift-plus", "kustomize build --enable-alpha-plugins | oc apply -f -" ]
https://docs.redhat.com/en/documentation/openshift_platform_plus/4/html/architecture/opp-architecture
Chapter 11. Red Hat build of OptaPlanner on Red Hat build of Quarkus: a vaccination appointment scheduler quick start guide
Chapter 11. Red Hat build of OptaPlanner on Red Hat build of Quarkus: a vaccination appointment scheduler quick start guide You can use the OptaPlanner vaccination appointment scheduler quick start to develop a vaccination schedule that is both efficient and fair. The vaccination appointment scheduler uses artificial intelligence (AI) to prioritize people and allocate time slots based on multiple constraints and priorities. Prerequisites OpenJDK 11 or later is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. An IDE, such as IntelliJ IDEA, VS Code, Eclipse, or NetBeans, is available. You have created a Quarkus OptaPlanner project as described in Chapter 6, Getting Started with OptaPlanner and Quarkus . 11.1. How the OptaPlanner vaccination appointment scheduler works There are two main approaches to scheduling appointments. The system can either let a person choose an appointment slot (user-selects) or the system assigns a slot and tells the person when and where to attend (system-automatically-assigns). The OptaPlanner vaccination appointment scheduler uses the system-automatically-assigns approach. With the OptaPlanner vaccination appointment scheduler, you can create an application where people provide their information to the system and the system assigns an appointment. Characteristics of this approach: Appointment slots are allocated based on priority. The system allocates the best appointment time and location based on preconfigured planning constraints. The system is not overwhelmed by a large number of users competing for a limited number of appointments. This approach solves the problem of vaccinating as many people as possible by using planning constraints to create a score for each person. The person's score determines when they get an appointment. The higher the person's score, the better chance they have of receiving an earlier appointment. 11.1.1. OptaPlanner vaccination appointment scheduler constraints OptaPlanner vaccination appointment scheduler constraints are either hard, medium, or soft: Hard constraints cannot be broken. If any hard constraint is broken, the plan is unfeasible and cannot be executed: Capacity: Do not over-book vaccine capacity at any time at any location. Vaccine max age: If a vaccine has a maximum age, do not administer it to people who at the time of the first dose vaccination are older than the vaccine maximum age. Ensure people are given a vaccine type appropriate for their age. For example, do not assign a 75 year old person an appointment for a vaccine that has a maximum age restriction of 65 years. Required vaccine type: Use the required vaccine type. For example, the second dose of a vaccine must be the same vaccine type as the first dose. Ready date: Administer the vaccine on or after the specified date. For example, if a person receives a second dose, do not administer it before the recommended earliest possible vaccination date for the specific vaccine type, for example 26 days after the first dose. Due date: Administer the vaccine on or before the specified date. For example, if a person receives a second dose, administer it before the recommended vaccination final due date for the specific vaccine, for example three months after the first dose. Restrict maximum travel distance: Assign each person to one of a group of vaccination centers nearest to them.
This is typically one of three centers. This restriction is calculated by travel time, not distance, so a person that lives in an urban area usually has a lower maximum distance to travel than a person living in a rural area. Medium constraints decide who does not get an appointment when there is not enough capacity to assign appointments to everyone. This is called overconstrained planning: Schedule second dose vaccinations: Do not leave any second dose vaccination appointments unassigned unless the ideal date falls outside of the planning window. Schedule people based on their priority rating: Each person has a priority rating. This is typically their age but it can be much higher if they are, for example, a health care worker. Leave only people with the lowest priority ratings unassigned. They will be considered in the next run. This constraint is softer than the previous constraint because the second dose is always prioritized over priority rating. Soft constraints should not be broken: Preferred vaccination center: If a person has a preferred vaccination center, give them an appointment at that center. Distance: Minimize the distance that a person must travel to their assigned vaccination center. Ideal date: Administer the vaccine on or as close to the specified date as possible. For example, if a person receives a second dose, administer it on the ideal date for the specific vaccine, for example 28 days after the first dose. This constraint is softer than the distance constraint to avoid sending people halfway across the country just to be one day closer to their ideal date. Priority rating: Schedule people with a higher priority rating earlier in the planning window. This constraint is softer than the distance constraint to avoid sending people halfway across the country. This constraint is also softer than the ideal date constraint because the second dose is prioritized over priority rating. Hard constraints are weighted against other hard constraints. Soft constraints are weighted against other soft constraints. However, hard constraints always take precedence over medium and soft constraints. If a hard constraint is broken, then the plan is not feasible. But if no hard constraints are broken, then soft and medium constraints are considered in order to determine priority. Because there are often more people than available appointment slots, you must prioritize. Second dose appointments are always assigned first to avoid creating a backlog that would overwhelm your system later. After that, people are assigned based on their priority rating. Everyone starts with a priority rating that is their age. Doing this prioritizes older people over younger people. After that, people that are in specific priority groups receive, for example, a few hundred extra points. This varies based on the priority of their group. For example, nurses might receive an extra 1000 points. This way, older nurses are prioritized over younger nurses and young nurses are prioritized over people who are not nurses. The following table illustrates this concept: Table 11.1. Priority rating table Age Job Priority rating 60 nurse 1060 33 nurse 1033 71 retired 71 52 office worker 52 11.1.2. The OptaPlanner solver At the core of OptaPlanner is the solver, the engine that takes the problem data set and overlays the planning constraints and configurations. The problem data set includes all of the information about the people, the vaccines, and the vaccination centers.
The solver works through the various combinations of data and eventually determines an optimized appointment schedule with people assigned to vaccination appointments at a specific center. The following illustration shows a schedule that the solver created: 11.1.3. Continuous planning Continuous planning is the technique of managing one or more upcoming planning periods at the same time and repeating that process monthly, weekly, daily, hourly, or even more frequently. The planning window advances incrementally by a specified interval. The following illustration shows a two week planning window that is updated daily: The two week planning window is divided in half. The first week is in the published state and the second week is in the draft state. People are assigned to appointments in both the published and draft parts of the planning window. However, only people in the published part of the planning window are notified of their appointments. The other appointments can still change easily in the next run. Doing this gives OptaPlanner the flexibility to change the appointments in the draft part when you run the solver again, if necessary. For example, if a person who needs a second dose has a ready date of Monday and an ideal date of Wednesday, OptaPlanner does not have to give them an appointment for Monday if OptaPlanner can demonstrate that it can give them a draft appointment later in the week. You can determine the size of the planning window but just be aware of the size of the problem space. The problem space is all of the various elements that go into creating the schedule. The more days you plan ahead, the larger the problem space. 11.1.4. Pinned planning entities If you are continuously planning on a daily basis, there will be appointments within the two week period that are already allocated to people. To ensure that appointments are not double-booked, OptaPlanner marks existing appointments as allocated by pinning them. Pinning is used to anchor one or more specific assignments and force OptaPlanner to schedule around those fixed assignments. A pinned planning entity, such as an appointment, does not change during solving. Whether an entity is pinned or not is determined by the appointment state. An appointment can have five states : Open , Invited , Accepted , Rejected , or Rescheduled . Note You do not actually see these states directly in the quick start demo code because the OptaPlanner engine is only interested in whether the appointment is pinned or not. You need to be able to plan around appointments that have already been scheduled. An appointment with the Invited or Accepted state is pinned. Appointments with the Open , Reschedule , and Rejected state are not pinned and are available for scheduling. In this example, when the solver runs, it searches across the entire two week planning window in both the published and draft ranges. The solver considers any unpinned entities, appointments with the Open , Reschedule , or Rejected states, in addition to the unscheduled input data, to find the optimal solution. If the solver is run daily, you will see a new day added to the schedule before you run the solver. Notice that the appointments on the new day have been assigned and Amy and Edna, who were previously scheduled in the draft part of the planning window, are now scheduled in the published part of the window. This was possible because Gus and Hugo requested a reschedule.
Returning to the example schedule: this reassignment will not cause any confusion because Amy and Edna were never notified about their draft dates. Now, because they have appointments in the published section of the planning window, they will be notified and asked to accept or reject their appointments, and their appointments are now pinned. 11.2. Downloading and running the OptaPlanner vaccination appointment scheduler Download the OptaPlanner vaccination appointment scheduler quick start archive, start it in Quarkus development mode, and view the application in a browser. Quarkus development mode enables you to make changes and update your application while it is running. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts (rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip). Extract the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. Navigate to the optaplanner-quickstarts-8.13.0.Final-redhat-00013 directory. Navigate to the optaplanner-quickstarts-8.13.0.Final-redhat-00013/use-cases/vaccination-scheduling directory. Enter the following command to start the OptaPlanner vaccination appointment scheduler in development mode: USD mvn quarkus:dev To view the OptaPlanner vaccination appointment scheduler, enter the following URL in a web browser. To run the OptaPlanner vaccination appointment scheduler, click Solve. Make changes to the source code, then press the F5 key to refresh your browser. Notice that the changes that you made are now available. 11.3. Package and run the OptaPlanner vaccination appointment scheduler When you have completed development work on the OptaPlanner vaccination appointment scheduler in quarkus:dev mode, run the application as a conventional jar file. Prerequisites You have downloaded the OptaPlanner vaccination appointment scheduler quick start. For more information, see Section 11.2, "Downloading and running the OptaPlanner vaccination appointment scheduler". Procedure Navigate to the /use-cases/vaccination-scheduling directory. To compile the OptaPlanner vaccination appointment scheduler, enter the following command: USD mvn package To run the compiled OptaPlanner vaccination appointment scheduler, enter the following command: USD java -jar ./target/*-runner.jar Note To run the application on port 8081, add -Dquarkus.http.port=8081 to the preceding command. To start the OptaPlanner vaccination appointment scheduler, enter the following URL in a web browser. 11.4. Run the OptaPlanner vaccination appointment scheduler as a native executable To take advantage of the small memory footprint and access speeds that Quarkus offers, compile the OptaPlanner vaccination appointment scheduler in Quarkus native mode. Procedure Install GraalVM and the native-image tool. For information, see Configuring GraalVM on the Quarkus website. Navigate to the /use-cases/vaccination-scheduling directory. To compile the OptaPlanner vaccination appointment scheduler natively, enter the following command: USD mvn package -Dnative -DskipTests To run the native executable, enter the following command: USD ./target/*-runner To start the OptaPlanner vaccination appointment scheduler, enter the following URL in a web browser. 11.5. Additional resources Vaccination appointment scheduling video
[ "mvn quarkus:dev", "http://localhost:8080/", "mvn package", "java -jar ./target/*-runner.jar", "http://localhost:8080/", "mvn package -Dnative -DskipTests", "./target/*-runner", "http://localhost:8080/" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/assembly-optaplanner-vaccination_optaplanner-quickstarts
Registry
Registry OpenShift Container Platform 4.18 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/registry/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/deploying_amq_broker_on_openshift/making-open-source-more-inclusive
function::sock_prot_str2num
function::sock_prot_str2num Name function::sock_prot_str2num - Given a protocol name (string), return the corresponding protocol number. Synopsis Arguments proto The protocol name.
[ "function sock_prot_str2num:long(proto:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-sock-prot-str2num
30.10. Additional Resources
30.10. Additional Resources For information on importing and exporting sudo rules when migrating your Identity Management environment to a new environment in Red Hat Enterprise Linux 7, see the Knowledgebase solution .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/sudo-related-info
20.6. Starting, Resuming, and Restoring a Virtual Machine
20.6. Starting, Resuming, and Restoring a Virtual Machine 20.6.1. Starting a Guest Virtual Machine The virsh start domain [--console] [--paused] [--autodestroy] [--bypass-cache] [--force-boot] command starts an inactive virtual machine that was already defined but has been inactive since its last managed save state or a fresh boot. By default, if the domain was saved by the virsh managedsave command, the domain will be restored to its previous state. Otherwise, it will be freshly booted. The command can take the following arguments, and the name of the virtual machine is required. --console - will attach the terminal running virsh to the domain's console device. This is runlevel 3. --paused - if this is supported by the driver, it will start the guest virtual machine in a paused state --autodestroy - the guest virtual machine is automatically destroyed when virsh disconnects --bypass-cache - used if the guest virtual machine is in the managedsave state --force-boot - discards any managedsave options and causes a fresh boot to occur Example 20.3. How to start a virtual machine The following example starts the guest1 virtual machine that you already created and that is currently in the inactive state. In addition, the command attaches the guest's console to the terminal running virsh: 20.6.2. Configuring a Virtual Machine to be Started Automatically at Boot The virsh autostart [--disable] domain command will automatically start the guest virtual machine when the host machine boots. Adding the --disable argument to this command disables autostart. The guest in this case will not start automatically when the host physical machine boots. Example 20.4. How to make a virtual machine start automatically when the host physical machine starts The following example sets the guest1 virtual machine, which you already created, to autostart when the host boots: # virsh autostart guest1 20.6.3. Rebooting a Guest Virtual Machine Reboot a guest virtual machine using the virsh reboot domain [--mode modename] command. Remember that this action only returns once it has executed the reboot, so there may be a time lapse from that point until the guest virtual machine actually reboots. You can control the behavior of the rebooting guest virtual machine by modifying the on_reboot element in the guest virtual machine's XML configuration file. By default, the hypervisor attempts to select a suitable shutdown method automatically. To specify an alternative method, the --mode argument can specify a comma-separated list that includes acpi and agent. The order in which drivers try each mode is undefined, and not related to the order specified in virsh. For strict control over ordering, use a single mode at a time and repeat the command. Example 20.5. How to reboot a guest virtual machine The following example reboots a guest virtual machine named guest1. In this example, the reboot uses the initctl method, but you can choose any mode that suits your needs. # virsh reboot guest1 --mode initctl 20.6.4. Restoring a Guest Virtual Machine The virsh restore <file> [--bypass-cache] [--xml /path/to/file] [--running] [--paused] command restores a guest virtual machine previously saved with the virsh save command. See Section 20.7.1, "Saving a Guest Virtual Machine's Configuration" for information on the virsh save command. The restore action restarts the saved guest virtual machine, which may take some time.
The guest virtual machine's name and UUID are preserved, but the ID will not necessarily match the ID that the virtual machine had when it was saved. The virsh restore command can take the following arguments: --bypass-cache - causes the restore to avoid the file system cache; note that using this flag may slow down the restore operation. --xml - this argument must be used with an XML file name. Although this argument is usually omitted, it can be used to supply an alternative XML file for use on a restored guest virtual machine with changes only in the host-specific portions of the domain XML. For example, it can be used to account for the file naming differences in underlying storage due to disk snapshots taken after the guest was saved. --running - overrides the state recorded in the save image to start the guest virtual machine as running. --paused - overrides the state recorded in the save image to start the guest virtual machine as paused. Example 20.6. How to restore a guest virtual machine The following example restores the guest virtual machine from the saved file guest1-config.xml and starts it in the running state: # virsh restore guest1-config.xml --running 20.6.5. Resuming a Guest Virtual Machine The virsh resume domain command restarts the CPUs of a domain that was suspended. This operation is immediate. The guest virtual machine resumes execution from the point at which it was suspended. Note that this action will not resume a guest virtual machine that has been undefined. This action will not resume transient virtual machines and only works on persistent virtual machines. Example 20.7. How to resume a suspended guest virtual machine The following example resumes the suspended guest1 virtual machine:
[ "virsh start guest1 --console Domain guest1 started Connected to domain guest1 Escape character is ^]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine-Starting_a_defined_domain
Part IV. Install
Part IV. Install
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/install
Chapter 25. Configuring Direct Deploy
Chapter 25. Configuring Direct Deploy When provisioning nodes, director mounts the overcloud base operating system image on an iSCSI mount and then copies the image to disk on each node. Direct deploy is an alternative method that writes disk images from an HTTP location directly to disk on bare metal nodes. Note Support for the iSCSI deploy interface, iscsi, will be deprecated in Red Hat OpenStack Platform (RHOSP) version 17.0, and will be removed in RHOSP 18.0. Direct deploy, direct, will be the default deploy interface from RHOSP 17.0. 25.1. Configuring the direct deploy interface on the undercloud The iSCSI deploy interface is the default deploy interface. However, you can enable the direct deploy interface to download an image from an HTTP location to the target disk. Note Support for the iSCSI deploy interface will be deprecated in Red Hat OpenStack Platform (RHOSP) version 17.0, and will be removed in RHOSP 18.0. Direct deploy will be the default deploy interface from RHOSP 17.0. Prerequisites The memory tmpfs on each overcloud node must have at least 8 GB of RAM. Procedure Create or modify a custom environment file /home/stack/undercloud_custom_env.yaml and specify the IronicDefaultDeployInterface parameter: By default, the Bare Metal service (ironic) agent on each node obtains the image stored in the Object Storage service (swift) through an HTTP link. Alternatively, ironic can stream this image directly to the node through the ironic-conductor HTTP server. To change the service that provides the image, set IronicImageDownloadSource to http in the /home/stack/undercloud_custom_env.yaml file: Include the custom environment file in the DEFAULT section of the undercloud.conf file. Perform the undercloud installation:
[ "parameter_defaults: IronicDefaultDeployInterface: direct", "parameter_defaults: IronicDefaultDeployInterface: direct IronicImageDownloadSource: http", "custom_env_files = /home/stack/undercloud_custom_env.yaml", "openstack undercloud install" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_configuring-direct-deploy
25.3. An Overview of Certificates and Security
25.3. An Overview of Certificates and Security Your secure server provides security using a combination of the Secure Sockets Layer (SSL) protocol and (in most cases) a digital certificate from a Certificate Authority (CA). SSL handles the encrypted communications as well as the mutual authentication between browsers and your secure server. The CA-approved digital certificate provides authentication for your secure server (the CA puts its reputation behind its certification of your organization's identity). When your browser is communicating using SSL encryption, the https:// prefix is used at the beginning of the Uniform Resource Locator (URL) in the navigation bar. Encryption depends upon the use of keys (think of them as secret encoder/decoder rings in data format). In conventional or symmetric cryptography, both ends of the transaction have the same key, which they use to decode each other's transmissions. In public or asymmetric cryptography, two keys co-exist: a public key and a private key. A person or an organization keeps their private key a secret and publishes their public key. Data encoded with the public key can only be decoded with the private key; data encoded with the private key can only be decoded with the public key. To set up your secure server, use public cryptography to create a public and private key pair. In most cases, you send your certificate request (including your public key), proof of your company's identity, and payment to a CA. The CA verifies the certificate request and your identity, and then sends back a certificate for your secure server. A secure server uses a certificate to identify itself to Web browsers. You can generate your own certificate (called a "self-signed" certificate), or you can get a certificate from a CA. A certificate from a reputable CA guarantees that a website is associated with a particular company or organization. Alternatively, you can create your own self-signed certificate. Note, however, that self-signed certificates should not be used in most production environments. Self-signed certificates are not automatically accepted by a user's browser - users are prompted by the browser to accept the certificate and create the secure connection. Refer to Section 25.5, "Types of Certificates" for more information on the differences between self-signed and CA-signed certificates. Once you have a self-signed certificate or a signed certificate from the CA of your choice, you must install it on your secure server.
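The public and private key relationship described above is not specific to any web server and can be demonstrated in a few lines of code. The following sketch is purely illustrative and is not part of configuring the secure server; it assumes only a standard JDK and uses the built-in Java security APIs to show that data encrypted with the public key can only be decrypted with the matching private key.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class AsymmetricKeyDemo {
    public static void main(String[] args) throws Exception {
        // Generate a public/private key pair (2048-bit RSA).
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        // Encrypt a short message with the public key.
        Cipher encryptCipher = Cipher.getInstance("RSA");
        encryptCipher.init(Cipher.ENCRYPT_MODE, keyPair.getPublic());
        byte[] ciphertext = encryptCipher.doFinal(
                "hello, secure server".getBytes(StandardCharsets.UTF_8));

        // Only the matching private key can decrypt it.
        Cipher decryptCipher = Cipher.getInstance("RSA");
        decryptCipher.init(Cipher.DECRYPT_MODE, keyPair.getPrivate());
        byte[] plaintext = decryptCipher.doFinal(ciphertext);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8));
    }
}

In practice, SSL uses this asymmetry during the handshake to authenticate the server and to protect the negotiation of a symmetric session key, which then encrypts the actual traffic.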
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Apache_HTTP_Secure_Server_Configuration-An_Overview_of_Certificates_and_Security
Chapter 6. Ansible validated content
Chapter 6. Ansible validated content Red Hat Ansible Automation Platform includes Ansible validated content, which complements existing Red Hat Ansible Certified Content. Ansible validated content provides an expert-led path for performing operational tasks on a variety of platforms, including both Red Hat and our trusted partners. 6.1. Configuring validated or certified collections with the installer When you download and run the bundle installer, certified and validated collections are automatically uploaded. Certified collections are uploaded into the rh-certified repository. Validated collections are uploaded into the validated repository. You can change the default configuration by using two variables: Name Description automationhub_seed_collections A boolean that defines whether or not preloading is enabled. automationhub_collection_seed_repository If automationhub_seed_collections is set to true, this variable enables you to specify the type of content to upload. Possible values are certified or validated. If this variable is missing, both content sets are uploaded. 6.2. Ansible validated content Note Ansible validated content is only available with a valid subscription to Red Hat Ansible Automation Platform. Unlike Red Hat Ansible Certified Content, Ansible validated content is not supported by Red Hat or our partners. From the Red Hat Ansible Automation Platform 2.3 release, Ansible validated content is preloaded into private automation hub and can be updated manually by downloading the packages. Entity Collection name Description Published by Ansible cloud.azure_roles.load_balancer A role to manage Azure Load Balancer Red Hat Ansible Ansible cloud.azure_roles.managed_postgresql A role to manage Azure PostgreSQL Database Red Hat Ansible Ansible cloud.azure_roles.network_interface A role to manage Azure Network Interface Red Hat Ansible Ansible cloud.azure_roles.networking_stack A role to manage Azure Networking Stack Red Hat Ansible Ansible cloud.azure_roles.resource_group A role to manage Azure Resource Group Red Hat Ansible Ansible cloud.azure_roles.security_group A role to manage Azure Security Group Red Hat Ansible Ansible cloud.azure_roles.virtual_machine A role to manage Azure Virtual Machine Red Hat Ansible Ansible network.base A validated content collection to configure base config related implementation that would be used by other validated content Red Hat Ansible Ansible network.bgp A validated content collection to configure bgp and provide capabilities to do operational state/healthchecks Red Hat Ansible Ansible network.acls A validated content collection to configure acls and provide capabilities to do operational state/healthchecks Red Hat Ansible Ansible network.interfaces A validated content collection to configure interfaces and provide capabilities to do operational state/healthchecks Red Hat Ansible Ansible network.ospf A validated content collection to configure ospf and provide capabilities to do operational state/healthchecks Red Hat Ansible Ansible <name yet to decide> Connectivity between on-prem network device (for example, CSR) and cloud gateway (for example, AWS) Red Hat Ansible Ansible <name yet to decide> A validated content (potentially a role) on Inventory Report that returns statistics and IDs of different edge nodes and servers Red Hat Ansible Ansible Network device Inventory report using html Red Hat Ansible Ansible Network config backup Red Hat Ansible Ansible Network config restore Red Hat Ansible Ansible IOS Updater Red Hat Ansible Ansible NXOS Updater Red Hat Ansible
Ansible EOS Updater Red Hat Ansible Ansible Firewall policy automation - A validated content to take care of FW policy hygiene Red Hat Ansible Ansible OSbuilder for RHEL Edge disconnected (customer request) Red Hat Ansible Ansible Middleware collection Red Hat Ansible Ansible Windows and Linux Compliance Red Hat Ansible Ansible SAP Deployment Red Hat Ansible Ansible Automation controller configuration Red Hat Ansible Ansible Execution Environment Utilities Red Hat Ansible Ansible Automation hub configuration Red Hat Ansible Ansible Ansible Automation Platform utilities Red Hat Ansible Ansible Role to deploy and migrate a web application on Amazon Web Services (AWS) Red Hat Ansible Ansible Role to deal with AWS orphaned instances by tag Red Hat Ansible Ansible Role to create a customized Amazon Machine Images (AMI) Red Hat Ansible Ansible Role to detach and delete AWS Internet Gateway (IGW)s Red Hat Ansible Ansible Role to configure a multi-region CloudTrail Red Hat Ansible Ansible Role to configure CloudTrail encryption Red Hat Ansible Ansible Role to troubleshoot EC2 instances failing to join an ECS cluster Red Hat Ansible Ansible Role to troubleshoot Relational database Service (RDS) connectivity from an instance Red Hat Ansible Ansible Role to troubleshoot Virtual Private Cloud (VPC) connectivity issues Red Hat Ansible
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_red_hat_certified_and_ansible_galaxy_collections_in_automation_hub/assembly-validated-content
Chapter 2. logind
Chapter 2. logind logind (or more specifically systemd-logind) is a system service that manages user logins. This service is responsible for the following: keeping track of users and sessions, their processes and their idle states, creating control groups for user processes, providing PolicyKit-based access for users to operations such as system shutdown or sleep, implementing a shutdown/sleep inhibition logic for applications, handling of power/sleep hardware keys, multi-seat management, session switch management, and device access management for users, automatic spawning of text logins (gettys) on virtual terminal (console) activation and user runtime directory management. The logind service is deeply integrated with systemd, the new initialization system in Red Hat Enterprise Linux 7, which replaces the upstart initialization system from Red Hat Enterprise Linux 6. With this change comes a number of new features and functions. The following is a summary of the most significant ones: ConsoleKit The ConsoleKit framework is deprecated in Red Hat Enterprise Linux 7. Equivalent functionality is now provided by systemd. Both ConsoleKit and logind are services for tracking the currently running user sessions. Note ConsoleKit had the ability to run arbitrary shell scripts any time the active session on the system changed (using virtual terminal switching). This functionality is no longer provided. the /var/log/ConsoleKit/history file Previously, ConsoleKit was sending log files to /var/log/ConsoleKit/history, which the present logind does not support. The file has been replaced by the traditional wtmp and utmp files, which now keep track of all logins and logouts on the system. /var/log/ConsoleKit/history provided similar information to the wtmp file, though in a different format. Given the overlap in functionality, logind only adopted the wtmp file's role. seat.d scripts Since ConsoleKit is no longer in use, seat.d scripts no longer complement the ConsoleKit framework, and have been replaced by systemd-logind. the ck-list-sessions command ConsoleKit provided the ck-list-sessions command, which returned extended information about recent users, covering not only regular users but also GUI access with GDM. A comparable result can now be reached by running the loginctl command: multi-seat support logind, along with GDM, provides the multi-seat feature, with which the user can attach another monitor, mouse, or keyboard to their machine. When they do so, an additional login screen appears and the user can log in as if they were using another machine. To list seats that are available on the system, run the following command: To show the status of a specific seat on the system, run the following command: where seat is the name of the seat, for example seat0. To assign specific hardware to a particular seat, run the following command: where seat is the name of the seat, for example seat1, and device is the device name specified with the /sys device path, for example /sys/devices/pci0000:00/0000:00:02.0/drm/card0. To change the assignment, assign the hardware to a different seat, or use the loginctl flush-devices command. Getting More Information systemd-logind.service (8) - The man page for logind provides more information on the logind usage and features. It also covers the APIs systemd-logind provides (logind D-Bus API documentation).
loginctl (1) - The man page for the systemd login manager includes more information on the multi-seat feature.
[ "loginctl list-sessions", "loginctl list-seats", "loginctl seat-status seat", "loginctl attach seat device" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/logind