title | content | commands | url |
---|---|---|---|
Installing Capsule Server | Installing Capsule Server Red Hat Satellite 6.11 Installing Red Hat Satellite Capsule Server Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_capsule_server/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Revised on 2024-03-12 15:32:11 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq_clients/2023.q4/html/amq_clients_overview/making-open-source-more-inclusive |
Chapter 2. Recommended performance and scalability practices | Chapter 2. Recommended performance and scalability practices 2.1. Recommended control plane practices This topic provides recommended performance and scalability practices for control planes in OpenShift Container Platform. 2.1.1. Recommended practices for scaling the cluster The guidance in this section is only relevant for installations with cloud provider integration. Apply the following best practices to scale the number of worker machines in your OpenShift Container Platform cluster. You scale the worker machines by increasing or decreasing the number of replicas that are defined in the worker machine set. When scaling up the cluster to higher node counts: Spread nodes across all of the available zones for higher availability. Scale up by no more than 25 to 50 machines at once. Consider creating new compute machine sets in each available zone with alternative instance types of similar size to help mitigate any periodic provider capacity constraints. For example, on AWS, use m5.large and m5d.large. Note Cloud providers might implement a quota for API services. Therefore, scale the cluster gradually. The controller might not be able to create the machines if the replicas in the compute machine sets are set to higher numbers all at one time. The cloud platform on which OpenShift Container Platform is deployed has API request limits, and the controller issues additional API requests as it creates machines and checks and updates their status; excessive queries might lead to machine creation failures due to those limits. Enable machine health checks when scaling to large node counts. In case of failures, the health checks monitor the condition and automatically repair unhealthy machines. Note Scaling large and dense clusters to lower node counts can take a long time because the process involves draining or evicting the objects running on the nodes being terminated in parallel. Also, the client might start to throttle the requests if there are too many objects to evict. The default client queries per second (QPS) and burst rates are currently set to 50 and 100 respectively. These values cannot be modified in OpenShift Container Platform. 2.1.2. Control plane node sizing The control plane node resource requirements depend on the number and type of nodes and objects in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing, also called Cluster-density.
This test creates the following objects across a given number of namespaces: 1 image stream 1 build 5 deployments, with 2 pod replicas in a sleep state, mounting 4 secrets, 4 config maps, and 1 downward API volume each 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters

Number of worker nodes | Cluster-density (namespaces) | CPU cores | Memory (GB)
---|---|---|---
24 | 500 | 4 | 16
120 | 1000 | 8 | 32
252 | 4000 | 16, but 24 if using the OVN-Kubernetes network plug-in | 64, but 128 if using the OVN-Kubernetes network plug-in
501, but untested with the OVN-Kubernetes network plug-in | 4000 | 16 | 96

The data from the table above is based on an OpenShift Container Platform cluster running on top of AWS, using r5.4xlarge instances as control-plane nodes and m5.2xlarge instances as worker nodes. On a large and dense cluster with three control plane nodes, the CPU and memory usage spikes when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, or underlying infrastructure, or to intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to an increase in resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates and the control plane Operator updates. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the Running phase. Operator Lifecycle Manager (OLM) runs on the control plane nodes and its memory footprint depends on the number of namespaces and user-installed Operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. The following data points are based on the results from cluster maximums testing.

Number of namespaces | OLM memory at idle state (GB) | OLM memory with 5 user operators installed (GB)
---|---|---
500 | 0.823 | 1.7
1000 | 1.2 | 2.5
1500 | 1.7 | 3.2
2000 | 2 | 4.4
3000 | 2.7 | 5.6
4000 | 3.8 | 7.6
5000 | 4.2 | 9.02
6000 | 5.8 | 11.3
7000 | 6.6 | 12.9
8000 | 6.9 | 14.8
9000 | 8 | 17.7
10,000 | 9.9 | 21.6

Important You can modify the control plane node size in a running OpenShift Container Platform 4.17 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. Clusters that use a control plane machine set to manage control plane machines. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Note In OpenShift Container Platform 4.17, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and earlier versions.
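To see what this reservation means on your own nodes, you can compare a control plane node's total capacity with its allocatable resources. The following command is a minimal sketch; replace <control_plane_node_name> with one of your control plane nodes:

$ oc get node <control_plane_node_name> -o jsonpath='capacity: {.status.capacity.cpu} CPU, {.status.capacity.memory}{"\n"}allocatable: {.status.allocatable.cpu} CPU, {.status.allocatable.memory}{"\n"}'

The difference between capacity and allocatable reflects the resources reserved for the system and the kubelet rather than for workloads.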
The sizes are determined taking that into consideration. 2.1.2.1. Selecting a larger Amazon Web Services instance type for control plane machines If the control plane machines in an Amazon Web Services (AWS) cluster require more resources, you can select a larger AWS instance type for the control plane machines to use. Note The procedure for clusters that use a control plane machine set is different from the procedure for clusters that do not use a control plane machine set. If you are uncertain about the state of the ControlPlaneMachineSet CR in your cluster, you can verify the CR status . 2.1.2.1.1. Changing the Amazon Web Services instance type by using a control plane machine set You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR). Prerequisites Your AWS cluster uses a control plane machine set. Procedure Edit your control plane machine set CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Edit the following line under the providerSpec field: providerSpec: value: ... instanceType: <compatible_aws_instance_type> 1 1 Specify a larger AWS instance type with the same base as the selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Save your changes. For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Additional resources Managing control plane machines with control plane machine sets 2.1.2.1.2. Changing the Amazon Web Services instance type by using the AWS console You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the instance type in the AWS console. Prerequisites You have access to the AWS console with the permissions required to modify the EC2 Instance for your cluster. You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Open the AWS console and fetch the instances for the control plane machines. Choose one control plane machine instance. For the selected control plane machine, back up the etcd data by creating an etcd snapshot. For more information, see "Backing up etcd". In the AWS console, stop the control plane machine instance. Select the stopped instance, and click Actions Instance Settings Change instance type . Change the instance to a larger type, ensuring that the type is the same base as the selection, and apply changes. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Start the instance. If your OpenShift Container Platform cluster has a corresponding Machine object for the instance, update the instance type of the object to match the instance type set in the AWS console. Repeat this process for each control plane machine. Additional resources Backing up etcd AWS documentation about changing the instance type 2.2. Recommended infrastructure practices This topic provides recommended performance and scalability practices for infrastructure in OpenShift Container Platform. 2.2.1. Infrastructure node sizing Infrastructure nodes are nodes that are labeled to run pieces of the OpenShift Container Platform environment. 
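For example, you can list the nodes that currently carry the infrastructure role and, as a sketch for clusters where you manage labels directly rather than through infrastructure machine sets, apply the label to an existing node:

$ oc get nodes -l node-role.kubernetes.io/infra=
$ oc label node <node_name> node-role.kubernetes.io/infra=

In most environments you would instead create dedicated infrastructure machine sets, as referenced in the additional resources for this section.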
The infrastructure node resource requirements depend on the cluster age, nodes, and objects in the cluster, as these factors can lead to an increase in the number of metrics or time series in Prometheus. The following infrastructure node size recommendations are based on the results observed in cluster-density testing detailed in the Control plane node sizing section, where the monitoring stack and the default ingress-controller were moved to these nodes.

Number of worker nodes | Cluster density, or number of namespaces | CPU cores | Memory (GB)
---|---|---|---
27 | 500 | 4 | 24
120 | 1000 | 8 | 48
252 | 4000 | 16 | 128
501 | 4000 | 32 | 128

In general, three infrastructure nodes are recommended per cluster. Important These sizing recommendations should be used as a guideline. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. In addition, the router resource usage can also be affected by the number of routes and the amount and type of inbound requests. These recommendations apply only to infrastructure nodes hosting the Monitoring, Ingress, and Registry infrastructure components installed during cluster creation. Note In OpenShift Container Platform 4.17, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and earlier versions. This influences the stated sizing recommendations. 2.2.2. Scaling the Cluster Monitoring Operator OpenShift Container Platform exposes metrics that the Cluster Monitoring Operator (CMO) collects and stores in the Prometheus-based monitoring stack. As an administrator, you can view dashboards for system resources, containers, and component metrics in the OpenShift Container Platform web console by navigating to Observe Dashboards . 2.2.3. Prometheus database storage requirements Red Hat performed various tests for different scale sizes. Note The following Prometheus storage requirements are not prescriptive and should be used as a reference. Higher resource consumption might be observed in your cluster depending on workload activity and resource density, including the number of pods, containers, routes, or other resources exposing metrics collected by Prometheus. You can configure the size-based data retention policy to suit your storage requirements.

Table 2.1. Prometheus database storage requirements based on number of nodes/pods in the cluster

Number of nodes | Number of pods (2 containers per pod) | Prometheus storage growth per day | Prometheus storage growth per 15 days | Network (per tsdb chunk)
---|---|---|---|---
50 | 1800 | 6.3 GB | 94 GB | 16 MB
100 | 3600 | 13 GB | 195 GB | 26 MB
150 | 5400 | 19 GB | 283 GB | 36 MB
200 | 7200 | 25 GB | 375 GB | 46 MB

Approximately 20 percent of the expected size was added as overhead to ensure that the storage requirements do not exceed the calculated value. The above calculation is for the default OpenShift Container Platform Cluster Monitoring Operator. Note CPU utilization has minor impact. The ratio is approximately 1 core out of 40 per 50 nodes and 1800 pods. Recommendations for OpenShift Container Platform Use at least two infrastructure (infra) nodes. Use at least three openshift-container-storage nodes with solid-state (SSD) or non-volatile memory express (NVMe) drives. 2.2.4. Configuring cluster monitoring You can increase the storage capacity for the Prometheus component in the cluster monitoring stack.
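Before changing the configuration, it can be useful to check whether a cluster-monitoring-config config map already exists and what persistent storage the monitoring stack currently claims. The following commands are a sketch of such a pre-check and assume the default openshift-monitoring namespace:

$ oc -n openshift-monitoring get configmap cluster-monitoring-config
$ oc -n openshift-monitoring get pvc

If the config map already exists, edit it rather than creating a new one with the procedure that follows.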
Procedure To increase the storage capacity for Prometheus: Create a YAML configuration file, cluster-monitoring-config.yaml . For example: apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: "" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: "" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring 1 The default value of Prometheus retention is PROMETHEUS_RETENTION_PERIOD=15d . Units are measured in time using one of these suffixes: s, m, h, d. 2 4 The storage class for your cluster. 3 A typical value is PROMETHEUS_STORAGE_SIZE=2000Gi . Storage values can be a plain integer or a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. 5 A typical value is ALERTMANAGER_STORAGE_SIZE=20Gi . Storage values can be a plain integer or a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. Add values for the retention period, storage class, and storage sizes. Save the file. Apply the changes by running: $ oc create -f cluster-monitoring-config.yaml 2.2.5. Additional resources Infrastructure Nodes in OpenShift 4 OpenShift Container Platform cluster maximums Creating infrastructure machine sets 2.3. Recommended etcd practices To ensure optimal performance and scalability for etcd in OpenShift Container Platform, you can complete the following practices. 2.3.1. Storage practices for etcd Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because the consensus protocol for etcd depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies. Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to OpenShift API slowness, which affects cluster performance. For these reasons, avoid colocating other I/O-sensitive or I/O-intensive workloads on the control plane nodes that share the same underlying I/O infrastructure as etcd. Run etcd on a block device that can write at least 50 IOPS of 8KB sequentially, including fdatasync, in under 10ms. For heavily loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as the fio command. To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads.
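As a concrete illustration of the measurement described above, you can run an fsync-heavy fio job against the planned etcd directory. This is a minimal sketch: the block size, file size, and target directory are illustrative assumptions, and the container-based check described in the next section automates a similar test:

$ sudo fio --name=etcd-fsync-check --directory=/var/lib/etcd --rw=write --ioengine=sync --fdatasync=1 --bs=8k --size=100m

In the fio output, check the reported fsync/fdatasync latency percentiles; the 99th percentile should stay well under 10 ms on a disk that is suitable for etcd.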
Note The load on etcd arises from static factors, such as the number of nodes and pods, and dynamic factors, including changes in endpoints due to pod autoscaling, pod restarts, job executions, and other workload-related events. To accurately size your etcd setup, you must analyze the specific requirements of your workload. Consider the number of nodes, pods, and other relevant factors that impact the load on etcd. The following hard drive practices provide optimal etcd performance: Use dedicated etcd drives. Avoid drives that communicate over the network, such as iSCSI. Do not place log files or other heavy workloads on etcd drives. Prefer drives with low latency to support fast read and write operations. Prefer high-bandwidth writes for faster compactions and defragmentation. Prefer high-bandwidth reads for faster recovery from failures. Use solid state drives as a minimum selection. Prefer NVMe drives for production environments. Use server-grade hardware for increased reliability. Avoid NAS or SAN setups and spinning drives. Ceph Rados Block Device (RBD) and other types of network-attached storage can result in unpredictable network latency. To provide fast storage to etcd nodes at scale, use PCI passthrough to pass NVM devices directly to the nodes. Always benchmark by using utilities such as fio . You can use such utilities to continuously monitor the cluster performance as it increases. Avoid using the Network File System (NFS) protocol or other network based file systems. Some key metrics to monitor on a deployed OpenShift Container Platform cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics. Note The etcd member database sizes can vary in a cluster during normal operations. This difference does not affect cluster upgrades, even if the leader size is different from the other members. 2.3.2. Validating the hardware for etcd To validate the hardware for etcd before or after you create the OpenShift Container Platform cluster, you can use fio. Prerequisites Container runtimes such as Podman or Docker are installed on the machine that you are testing. Data is written to the /var/lib/etcd path. Procedure Run fio and analyze the results: If you use Podman, run this command: $ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf If you use Docker, run this command: $ sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 10 ms. A few of the most important etcd metrics that might be affected by I/O performance are as follows: etcd_disk_wal_fsync_duration_seconds_bucket metric reports etcd's WAL fsync duration etcd_disk_backend_commit_duration_seconds_bucket metric reports the etcd backend commit latency duration etcd_server_leader_changes_seen_total metric reports the leader changes Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OpenShift Container Platform cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric.
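For example, assuming the default monitoring stack and a user that is permitted to query cluster metrics, you can run such a query from the command line through the thanos-querier route. The route name and token handling shown here are a sketch of a common pattern, not a required procedure:

$ TOKEN=$(oc whoami -t)
$ HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
$ curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" --data-urlencode 'query=histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m]))'

The -k flag skips TLS verification for brevity; in production, trust the router certificate authority instead.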
The histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) metric reports the round trip time for etcd to finish replicating the client requests between the members. Ensure that it is less than 50 ms. Additional resources How to use fio to check etcd disk performance in OpenShift Container Platform etcd performance troubleshooting guide for OpenShift Container Platform 2.3.3. Node scaling for etcd In general, clusters must have 3 control plane nodes. However, if your cluster is installed on a bare metal platform, you can scale a cluster up to 5 control plane nodes as a postinstallation task. For example, to scale from 3 to 4 control plane nodes after installation, you can add a host and install it as a control plane node. Then, the etcd Operator scales accordingly to account for the additional control plane node. Scaling a cluster to 4 or 5 control plane nodes is available only on bare metal platforms. For more information about how to scale control plane nodes by using the Assisted Installer, see "Adding hosts with the API" and "Installing a primary control plane node on a healthy cluster". The following table shows failure tolerance for clusters of different sizes: Table 2.2. Failure tolerances by cluster size Cluster size Majority Failure tolerance 1 node 1 0 3 nodes 2 1 4 nodes 3 1 5 nodes 3 2 For more information about recovering from quorum loss, see "Restoring to a cluster state". Additional resources Adding hosts with the API Installing a primary control plane node on a healthy cluster Expanding the cluster Restoring to a cluster state 2.3.4. Moving etcd to a different disk You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues. The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OpenShift Container Platform 4.17 container storage. Note This encoded script only supports device names for the following device types: SCSI or SATA /dev/sd* Virtual device /dev/vd* NVMe /dev/nvme*[0-9]*n* Limitations When the new disk is attached to the cluster, the etcd database is part of the root mount. It is not part of the secondary disk or the intended disk when the primary node is recreated. As a result, the primary node will not create a separate /var/lib/etcd mount. Prerequisites You have a backup of your cluster's etcd data. You have installed the OpenShift CLI ( oc ). You have access to the cluster with cluster-admin privileges. Add additional disks before uploading the machine configuration. The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role] . This applies to a controller, worker, or a custom pool. Note This procedure does not move parts of the root file system, such as /var/ , to another disk or partition on an installed node. Important This procedure is not supported when using control plane machine sets. Procedure Attach the new disk to the cluster and verify that the disk is detected in the node by running the lsblk command in a debug shell: USD oc debug node/<node_name> # lsblk Note the device name of the new disk reported by the lsblk command. Create the following script and name it etcd-find-secondary-device.sh : #!/bin/bash set -uo pipefail for device in <device_type_glob>; do 1 /usr/sbin/blkid "USD{device}" &> /dev/null if [ USD? 
== 2 ]; then echo "secondary device found USD{device}" echo "creating filesystem for etcd mount" mkfs.xfs -L var-lib-etcd -f "USD{device}" &> /dev/null udevadm settle touch /etc/var-lib-etcd-mount exit fi done echo "Couldn't find secondary block device!" >&2 exit 77 1 Replace <device_type_glob> with a shell glob for your block device type. For SCSI or SATA drives, use /dev/sd* ; for virtual drives, use /dev/vd* ; for NVMe drives, use /dev/nvme*[0-9]*n* . Create a base64-encoded string from the etcd-find-secondary-device.sh script and note its contents: USD base64 -w0 etcd-find-secondary-device.sh Create a MachineConfig YAML file named etcd-mc.yml with contents such as the following: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 storage: files: - path: /etc/find-secondary-device mode: 0755 contents: source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> 1 systemd: units: - name: find-secondary-device.service enabled: true contents: | [Unit] Description=Find secondary device DefaultDependencies=false After=systemd-udev-settle.service Before=local-fs-pre.target ConditionPathExists=!/etc/var-lib-etcd-mount [Service] RemainAfterExit=yes ExecStart=/etc/find-secondary-device RestartForceExitStatus=77 [Install] WantedBy=multi-user.target - name: var-lib-etcd.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-label/var-lib-etcd Where=/var/lib/etcd Type=xfs TimeoutSec=120s [Install] RequiredBy=local-fs.target - name: sync-var-lib-etcd-to-etcd.service enabled: true contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setsebool -P rsync_full_access 1 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?' ExecStart=/usr/sbin/setsebool -P rsync_full_access 0 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target - name: restorecon-var-lib-etcd.service enabled: true contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target 1 Replace <encoded_etcd_find_secondary_device_script> with the encoded script contents that you noted. Verification steps Run the grep /var/lib/etcd /proc/mounts command in a debug shell for the node to ensure that the disk is mounted: USD oc debug node/<node_name> # grep -w "/var/lib/etcd" /proc/mounts Example output /dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0 Additional resources Red Hat Enterprise Linux CoreOS (RHCOS) 2.3.5. Defragmenting etcd data For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes. 
Monitor these key metrics: etcd_server_quota_backend_bytes , which is the current quota limit etcd_mvcc_db_total_size_in_use_in_bytes , which indicates the actual database usage after a history compaction etcd_mvcc_db_total_size_in_bytes , which shows the database size, including free space waiting for defragmentation Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction. History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system. Defragmentation occurs automatically, but you can also trigger it manually. Note Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. 2.3.5.1. Automatic defragmentation The etcd Operator automatically defragments disks. No manual intervention is needed. Verify that the defragmentation process is successful by viewing one of these logs: etcd logs cluster-etcd-operator pod operator status error log Warning Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the running instance or the component resumes work again after the restart. Example log output for successful defragmentation etcd member has been defragmented: <member_name> , memberID: <member_id> Example log output for unsuccessful defragmentation failed defrag on member: <member_name> , memberID: <member_id> : <error_message> 2.3.5.2. Manual defragmentation A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases: When etcd uses more than 50% of its available space for more than 10 minutes When etcd is actively using less than 50% of its total database size for more than 10 minutes You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024 Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. 
Get the list of etcd pods: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com . Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds. 
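For example, you might retry the same defragmentation with a longer timeout; the 60s value shown here is an arbitrary illustration:

sh-4.4# etcdctl --command-timeout=60s --endpoints=https://localhost:2379 defrag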
Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 2.3.6. Setting tuning parameters for etcd You can set the control plane hardware speed to "Standard" , "Slower" , or the default, which is "" . The default setting allows the system to decide which speed to use. This value enables upgrades from versions where this feature does not exist, as the system can select values from previous versions. By selecting one of the other values, you are overriding the default. If you see many leader elections due to timeouts or missed heartbeats and your system is set to "" or "Standard" , set the hardware speed to "Slower" to make the system more tolerant of the increased latency. 2.3.6.1. Changing hardware speed tolerance To change the hardware speed tolerance for etcd, complete the following steps. Procedure Check to see what the current value is by entering the following command: $ oc describe etcd/cluster | grep "Control Plane Hardware Speed" Example output Control Plane Hardware Speed: <VALUE> Note If the output is empty, the field has not been set and should be considered as the default (""). Change the value by entering the following command. Replace <value> with one of the valid values: "" , "Standard" , or "Slower" : $ oc patch etcd/cluster --type=merge -p '{"spec": {"controlPlaneHardwareSpeed": "<value>"}}' The following table indicates the heartbeat interval and leader election timeout for each profile. These values are subject to change.

Profile | ETCD_HEARTBEAT_INTERVAL | ETCD_LEADER_ELECTION_TIMEOUT
---|---|---
"" | Varies depending on platform | Varies depending on platform
Standard | 100 | 1000
Slower | 500 | 2500

Review the output: Example output etcd.operator.openshift.io/cluster patched If you enter any value besides the valid values, error output is displayed.
For example, if you entered "Faster" as the value, the output is as follows: Example output The Etcd "cluster" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: "Faster": supported values: "", "Standard", "Slower" Verify that the value was changed by entering the following command: $ oc describe etcd/cluster | grep "Control Plane Hardware Speed" Example output Control Plane Hardware Speed: "" Wait for etcd pods to roll out: $ oc get pods -n openshift-etcd -w The following output shows the expected entries for master-0. Before you continue, wait until all masters show a status of 4/4 Running . Example output installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 1s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 2s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 34s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Pending 0 0s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:1/3 0 1s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:2/3 0 2s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 PodInitializing 0 3s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 4s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 20s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Running 0 20s Enter the following command to review the values: $ oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT Note These values might not have changed from the default. Additional resources Understanding feature gates 2.3.7. Increasing the database size for etcd You can set the disk quota in gibibytes (GiB) for each etcd instance. If you set a disk quota for your etcd instance, you can specify integer values from 8 to 32. The default value is 8. You can specify only increasing values. You might want to increase the disk quota if you encounter a low space alert. This alert indicates that the cluster is too large to fit in etcd despite automatic compaction and defragmentation. If you see this alert, you need to increase the disk quota immediately because after etcd runs out of space, writes fail. Another scenario where you might want to increase the disk quota is if you encounter an excessive database growth alert. This alert is a warning that the database might grow too large in the next four hours. In this scenario, consider increasing the disk quota so that you do not eventually encounter a low space alert and possible write failures. If you increase the disk quota, the disk space that you specify is not immediately reserved. Instead, etcd can grow to that size if needed. Ensure that etcd is running on a dedicated disk that is larger than the value that you specify for the disk quota. For large etcd databases, the control plane nodes must have additional memory and storage. Because you must account for the API server cache, the minimum memory required is at least three times the configured size of the etcd database.
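As a quick check before raising the quota, you can compare the current database size with the configured quota by using the metrics referenced earlier in this chapter. The following PromQL expression is a sketch; a ratio approaching 1 means that the member is close to its quota:

etcd_mvcc_db_total_size_in_bytes / etcd_server_quota_backend_bytes

Also factor in the memory guidance above: for example, raising the quota to 20 GiB implies a minimum of roughly 60 GiB of memory on each control plane node, following the three-times guidance.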
Important Increasing the database size for etcd is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.3.7.1. Changing the etcd database size To change the database size for etcd, complete the following steps. Procedure Check the current value of the disk quota for each etcd instance by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" Example output Backend Quota Gi B: <value> Change the value of the disk quota by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": <value>}}' Example output etcd.operator.openshift.io/cluster patched Verification Verify that the new value for the disk quota is set by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" The etcd Operator automatically rolls out the etcd instances with the new values. Verify that the etcd pods are up and running by entering the following command: USD oc get pods -n openshift-etcd The following output shows the expected entries. Example output NAME READY STATUS RESTARTS AGE etcd-ci-ln-b6kfsw2-72292-mzwbq-master-0 4/4 Running 0 39m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-1 4/4 Running 0 37m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-2 4/4 Running 0 41m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-0 1/1 Running 0 51m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-1 1/1 Running 0 49m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-2 1/1 Running 0 54m installer-5-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 51m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 46m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 44m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 49m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 40m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 38m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 43m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m Verify that the disk quota value is updated for the etcd pod by entering the following command: USD oc describe -n openshift-etcd pod/<etcd_podname> | grep "ETCD_QUOTA_BACKEND_BYTES" The value might not have changed from the default value of 8 . Example output ETCD_QUOTA_BACKEND_BYTES: 8589934592 Note While the value that you set is an integer in GiB, the value shown in the output is converted to bytes. 2.3.7.2. Troubleshooting If you encounter issues when you try to increase the database size for etcd, the following troubleshooting steps might help. 2.3.7.2.1. 
Value is too small If the value that you specify is less than 8 , you see the following error message: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 5}}' Example error message The Etcd "cluster" is invalid: * spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8 * spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased To resolve this issue, specify an integer between 8 and 32 . 2.3.7.2.2. Value is too large If the value that you specify is greater than 32 , you see the following error message: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 64}}' Example error message The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32 To resolve this issue, specify an integer between 8 and 32 . 2.3.7.2.3. Value is decreasing If the value is set to a valid value between 8 and 32 , you cannot decrease the value. Otherwise, you see an error message. Check to see the current value by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" Example output Backend Quota Gi B: 10 Decrease the disk quota value by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 8}}' Example error message The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased To resolve this issue, specify an integer greater than 10 . | [
"oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster",
"providerSpec: value: instanceType: <compatible_aws_instance_type> 1",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring",
"oc create -f cluster-monitoring-config.yaml",
"sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf",
"sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf",
"oc debug node/<node_name>",
"lsblk",
"#!/bin/bash set -uo pipefail for device in <device_type_glob>; do 1 /usr/sbin/blkid \"USD{device}\" &> /dev/null if [ USD? == 2 ]; then echo \"secondary device found USD{device}\" echo \"creating filesystem for etcd mount\" mkfs.xfs -L var-lib-etcd -f \"USD{device}\" &> /dev/null udevadm settle touch /etc/var-lib-etcd-mount exit fi done echo \"Couldn't find secondary block device!\" >&2 exit 77",
"base64 -w0 etcd-find-secondary-device.sh",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 storage: files: - path: /etc/find-secondary-device mode: 0755 contents: source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> 1 systemd: units: - name: find-secondary-device.service enabled: true contents: | [Unit] Description=Find secondary device DefaultDependencies=false After=systemd-udev-settle.service Before=local-fs-pre.target ConditionPathExists=!/etc/var-lib-etcd-mount [Service] RemainAfterExit=yes ExecStart=/etc/find-secondary-device RestartForceExitStatus=77 [Install] WantedBy=multi-user.target - name: var-lib-etcd.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-label/var-lib-etcd Where=/var/lib/etcd Type=xfs TimeoutSec=120s [Install] RequiredBy=local-fs.target - name: sync-var-lib-etcd-to-etcd.service enabled: true contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setsebool -P rsync_full_access 1 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?' ExecStart=/usr/sbin/setsebool -P rsync_full_access 0 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target - name: restorecon-var-lib-etcd.service enabled: true contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target",
"oc debug node/<node_name>",
"grep -w \"/var/lib/etcd\" /proc/mounts",
"/dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0",
"etcd member has been defragmented: <member_name> , memberID: <member_id>",
"failed defrag on member: <member_name> , memberID: <member_id> : <error_message>",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table",
"Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com",
"sh-4.4# unset ETCDCTL_ENDPOINTS",
"sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag",
"Finished defragmenting etcd member[https://localhost:2379]",
"sh-4.4# etcdctl endpoint status -w table --cluster",
"+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"sh-4.4# etcdctl alarm list",
"memberID:12345678912345678912 alarm:NOSPACE",
"sh-4.4# etcdctl alarm disarm",
"oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"",
"Control Plane Hardware Speed: <VALUE>",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"controlPlaneHardwareSpeed\": \"<value>\"}}'",
"etcd.operator.openshift.io/cluster patched",
"The Etcd \"cluster\" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: \"Faster\": supported values: \"\", \"Standard\", \"Slower\"",
"oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"",
"Control Plane Hardware Speed: \"\"",
"oc get pods -n openshift-etcd -w",
"installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 1s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 2s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 34s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Pending 0 0s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:1/3 0 1s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:2/3 0 2s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 PodInitializing 0 3s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 4s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 20s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Running 0 20s",
"oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT",
"oc describe etcd/cluster | grep \"Backend Quota\"",
"Backend Quota Gi B: <value>",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": <value>}}'",
"etcd.operator.openshift.io/cluster patched",
"oc describe etcd/cluster | grep \"Backend Quota\"",
"oc get pods -n openshift-etcd",
"NAME READY STATUS RESTARTS AGE etcd-ci-ln-b6kfsw2-72292-mzwbq-master-0 4/4 Running 0 39m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-1 4/4 Running 0 37m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-2 4/4 Running 0 41m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-0 1/1 Running 0 51m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-1 1/1 Running 0 49m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-2 1/1 Running 0 54m installer-5-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 51m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 46m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 44m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 49m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 40m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 38m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 43m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m",
"oc describe -n openshift-etcd pod/<etcd_podname> | grep \"ETCD_QUOTA_BACKEND_BYTES\"",
"ETCD_QUOTA_BACKEND_BYTES: 8589934592",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 5}}'",
"The Etcd \"cluster\" is invalid: * spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8 * spec.backendQuotaGiB: Invalid value: \"integer\": etcd backendQuotaGiB may not be decreased",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 64}}'",
"The Etcd \"cluster\" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32",
"oc describe etcd/cluster | grep \"Backend Quota\"",
"Backend Quota Gi B: 10",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 8}}'",
"The Etcd \"cluster\" is invalid: spec.backendQuotaGiB: Invalid value: \"integer\": etcd backendQuotaGiB may not be decreased"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/recommended-performance-and-scalability-practices-2 |
Chapter 4. Configuration | Chapter 4. Configuration This chapter explores how to provide additions to the OpenStack Puppet modules. This includes some basic guidelines on developing Puppet modules. 4.1. Learning Puppet Basics The following sections provide a few basics to help you understand Puppet's syntax and the structure of a Puppet module. 4.1.1. Examining the Anatomy of a Puppet Module Before contributing to the OpenStack modules, we need to understand the components that create a Puppet module. Manifests Manifests are files that contain code to define a set of resources and their attributes. A resource is any configurable part of a system. Examples of resources include packages, services, files, users and groups, SELinux configuration, SSH key authentication, cron jobs, and more. A manifest defines each required resource using a set of key-value pairs for their attributes. For example, a package resource declaration for httpd checks whether the httpd package is installed. If not, the manifest executes dnf and installs it. Manifests are located in the manifest directory of a module. Puppet modules also use a test directory for test manifests. These manifests are used to test certain classes contained in your official manifests. Classes Classes act as a method for unifying multiple resources in a manifest. For example, if installing and configuring an HTTP server, you might create a class with three resources: one to install the HTTP server packages, one to configure the HTTP server, and one to start or enable the server. You can also refer to classes from other modules, which applies their configuration. For example, if you had to configure an application that also required a webserver, you can refer to the previously mentioned class for the HTTP server. Static Files Modules can contain static files that Puppet can copy to certain locations on your system. These locations, and other attributes such as permissions, are defined through file resource declarations in manifests. Static files are located in the files directory of a module. Templates Sometimes configuration files require custom content. In this situation, users would create a template instead of a static file. Like static files, templates are defined in manifests and copied to locations on a system. The difference is that templates allow Ruby expressions to define customized content and variable input. For example, if you wanted to configure httpd with a customizable port, the template for the configuration file would reference the port through a variable. The httpd_port variable in this case is defined in the manifest that references this template. Templates are located in the templates directory of a module. Plugins Plugins allow for aspects that extend beyond the core functionality of Puppet. For example, you can use plugins to define custom facts, custom resources, or new functions. For example, a database administrator might need a resource type for PostgreSQL databases. This could help the database administrator populate PostgreSQL with a set of new databases after installing PostgreSQL. As a result, the database administrator need only create a Puppet manifest that ensures PostgreSQL installs and the databases are created afterwards. Plugins are located in the lib directory of a module. This includes a set of subdirectories depending on the plugin type. For example: /lib/facter - Location for custom facts. /lib/puppet/type - Location for custom resource type definitions, which outline the key-value pairs for attributes.
/lib/puppet/provider - Location for custom resource providers, which are used in conjunction with resource type definitions to control resources. /lib/puppet/parser/functions - Location for custom functions. 4.1.2. Installing a Service Some software requires package installations. This is one function a Puppet module can perform. This requires a resource definition that defines configurations for a certain package. For example, to install the httpd package through the mymodule module, you would add the following content to a Puppet manifest in the mymodule module: This code defines a subclass of mymodule called httpd , then defines a package resource declaration for the httpd package. The ensure => installed attribute tells Puppet to check if the package is installed. If it is not installed, Puppet executes dnf to install it. 4.1.3. Starting and Enabling a Service After installing a package, you might aim to start the service. Use another resource declaration called service . This requires editing the manifest with the following content: This achieves the following: The ensure => running attribute checks if the service is running. If not, Puppet starts it. The enable => true attribute sets the service to run when the system boots. The require => Package["httpd"] attribute defines an ordering relationship between one resource declaration and another. In this case, it ensures the httpd service starts after the httpd package installs. This creates a dependency between the service and its respective package. 4.1.4. Configuring a Service The previous two steps show how to install and enable a service through Puppet. However, you might aim to provide some custom configuration to the services. In our example, the HTTP server already provides some default configuration in /etc/httpd/conf/httpd.conf , which provides a web host on port 80. This section adds some extra configuration to provide an additional web host on a user-specified port. For this to occur, you use a template file to store the HTTP configuration file. This is because the user-defined port requires variable input. In the module's templates directory, you would add a file called myserver.conf.erb with the following contents: This template follows the standard syntax for Apache web server configuration. The only difference is the inclusion of Ruby escape characters to inject variables from our module. For example, httpd_port , which we use to specify the web server port. Notice also the inclusion of fqdn , which is a variable that stores the fully qualified domain name of the system. This is known as a system fact . System facts are collected from each system prior to generating each respective system's Puppet catalog. Puppet uses the facter command to gather these system facts, and you can also run facter to view a list of these facts. After saving this file, you would add the resource to the module's Puppet manifest : This achieves the following: We add a file resource declaration for the server configuration file ( /etc/httpd/conf.d/myserver.conf ). The content for this file is the myserver.conf.erb template we created earlier. We also check the httpd package is installed before adding this file. We also add a second file resource declaration. This one creates a directory ( /var/www/myserver ) for our web server. We also add a relationship between the configuration file and the httpd service using the notify => Service["httpd"] attribute. This checks our configuration file for any changes.
If the file has changed, Puppet restarts the service. 4.2. Obtaining OpenStack Puppet Modules The Red Hat OpenStack Platform uses the official OpenStack Puppet modules, which you obtain from the openstack group on GitHub . Navigate your browser to https://github.com/openstack and in the filters section search for puppet . All Puppet modules use the prefix puppet- . For this example, we will examine the official OpenStack Block Storage ( cinder ) module, which you can clone using the following command: This creates a clone of the Puppet module for Cinder. 4.3. Adding Configuration for a Puppet Module The OpenStack modules primarily aim to configure the core service. Most also contain additional manifests to configure additional services, sometimes known as backends , agents , or plugins . For example, the cinder module contains a directory called backends , which contains configuration options for different storage devices including NFS, iSCSI, Red Hat Ceph Storage, and others. For example, the manifests/backends/nfs.pp file contains the following configuration: This achieves the following: The define statement creates a defined type called cinder::backend::nfs . A defined type is similar to a class; the main difference is Puppet evaluates a defined type multiple times. For example, you might require multiple NFS backends, and as such the configuration requires multiple evaluations for each NFS share. The next few lines define the parameters in this configuration and their default values. The default values are overwritten if the user passes new values to the cinder::backend::nfs defined type. The file function is a resource declaration that calls for the creation of a file. This file contains a list of our NFS shares, and the name for this file is defined in the parameters ( USDnfs_shares_config = '/etc/cinder/shares.conf' ). Note the additional attributes: The content attribute creates a list using the USDnfs_servers parameter. The require attribute ensures that the cinder package is installed. The notify attribute tells the cinder-volume service to reset. The cinder_config function is a resource declaration that uses a plugin from the lib/puppet/ directory in the module. This plugin adds configuration to the /etc/cinder/cinder.conf file. Each line in this resource adds a configuration option to the relevant section in the cinder.conf file. For example, if the USDname parameter is mynfs , then the following attributes: Would save the following to the cinder.conf file: The create_resources function converts a hash into a set of resources. In this case, the manifest converts the USDextra_options hash to a set of additional configuration options for the backend. This provides a flexible method to add further configuration options not included in the manifest's core parameters. This shows the importance of including a manifest to configure your hardware's OpenStack driver. The manifest provides a simple method for the director to include configuration options relevant to your hardware. This acts as a main integration point for the director to configure your Overcloud to use your hardware. 4.4. Adding Hiera Data to Puppet Configuration Puppet contains a tool called Hiera , which acts as a key/value system that provides node-specific configuration. These keys and their values are usually stored in files located in /etc/puppet/hieradata . The /etc/puppet/hiera.yaml file defines the order in which Puppet reads the files in the hieradata directory.
When configuring the Overcloud, Puppet uses this data to overwrite the default values for certain Puppet classes. For example, the default NFS mount options for cinder::backend::nfs in puppet-cinder are undefined: However, you can create your own manifest that calls the cinder::backend::nfs defined type and replace this option with Hiera data: This means the nfs_mount_options parameter uses the Hiera data value from the cinder_nfs_mount_options key: Alternatively, you can use the Hiera data to overwrite the cinder::backend::nfs::nfs_mount_options parameter directly so that it applies to all evaluations of the NFS configuration. For example: The above Hiera data overwrites this parameter on each evaluation of cinder::backend::nfs . | [
"package { 'httpd': ensure => installed, }",
"Listen <%= @httpd_port %>",
"class mymodule::httpd { package { 'httpd': ensure => installed, } }",
"class mymodule::httpd { package { 'httpd': ensure => installed, } service { 'httpd': ensure => running, enable => true, require => Package[\"httpd\"], } }",
"Listen <%= @httpd_port %> NameVirtualHost *:<%= @httpd_port %> <VirtualHost *:<%= @httpd_port %>> DocumentRoot /var/www/myserver/ ServerName *:<%= @fqdn %>> <Directory \"/var/www/myserver/\"> Options All Indexes FollowSymLinks Order allow,deny Allow from all </Directory> </VirtualHost>",
"class mymodule::httpd { package { 'httpd': ensure => installed, } service { 'httpd': ensure => running, enable => true, require => Package[\"httpd\"], } file {'/etc/httpd/conf.d/myserver.conf': notify => Service[\"httpd\"], ensure => file, require => Package[\"httpd\"], content => template(\"mymodule/myserver.conf.erb\"), } file { \"/var/www/myserver\": ensure => \"directory\", } }",
"git clone https://github.com/openstack/puppet-cinder.git",
"define cinder::backend::nfs ( USDvolume_backend_name = USDname, USDnfs_servers = [], USDnfs_mount_options = undef, USDnfs_disk_util = undef, USDnfs_sparsed_volumes = undef, USDnfs_mount_point_base = undef, USDnfs_shares_config = '/etc/cinder/shares.conf', USDnfs_used_ratio = '0.95', USDnfs_oversub_ratio = '1.0', USDextra_options = {}, ) { file {USDnfs_shares_config: content => join(USDnfs_servers, \"\\n\"), require => Package['cinder'], notify => Service['cinder-volume'] } cinder_config { \"USD{name}/volume_backend_name\": value => USDvolume_backend_name; \"USD{name}/volume_driver\": value => 'cinder.volume.drivers.nfs.NfsDriver'; \"USD{name}/nfs_shares_config\": value => USDnfs_shares_config; \"USD{name}/nfs_mount_options\": value => USDnfs_mount_options; \"USD{name}/nfs_disk_util\": value => USDnfs_disk_util; \"USD{name}/nfs_sparsed_volumes\": value => USDnfs_sparsed_volumes; \"USD{name}/nfs_mount_point_base\": value => USDnfs_mount_point_base; \"USD{name}/nfs_used_ratio\": value => USDnfs_used_ratio; \"USD{name}/nfs_oversub_ratio\": value => USDnfs_oversub_ratio; } create_resources('cinder_config', USDextra_options) }",
"\"USD{name}/volume_backend_name\": value => USDvolume_backend_name; \"USD{name}/volume_driver\": value => 'cinder.volume.drivers.nfs.NfsDriver'; \"USD{name}/nfs_shares_config\": value => USDnfs_shares_config;",
"[mynfs] volume_backend_name=mynfs volume_driver=cinder.volume.drivers.nfs.NfsDriver nfs_shares_config=/etc/cinder/shares.conf",
"USDnfs_mount_options = undef,",
"cinder::backend::nfs { USDcinder_nfs_backend: nfs_mount_options => hiera('cinder_nfs_mount_options'), }",
"cinder_nfs_mount_options: rsize=8192,wsize=8192",
"cinder::backend::nfs::nfs_mount_options: rsize=8192,wsize=8192"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/partner_integration/Configuration |
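To try out a defined type such as cinder::backend::nfs before the director applies it, you can validate and dry-run a small test manifest locally. The following is only a minimal sketch; the file name test_nfs.pp, the backend name mynfs, and the NFS export address are placeholder values rather than anything shipped with the puppet-cinder module.

    # test_nfs.pp - hypothetical test manifest exercising the defined type
    cinder::backend::nfs { 'mynfs':
      nfs_servers       => ['192.0.2.10:/exports/cinder'],
      nfs_mount_options => 'rsize=8192,wsize=8192',
    }

    # Check the syntax, then compile and report changes without applying them
    puppet parser validate test_nfs.pp
    puppet apply --noop test_nfs.pp

The --noop flag makes puppet apply report what it would change without touching the system, which is a low-risk way to confirm that parameters and Hiera lookups resolve as expected.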
4.2. Tuning CPUs with Tuna | 4.2. Tuning CPUs with Tuna Tuna commands can target individual CPUs. To list the CPUs on your system, see the Monitoring tab in the Tuna GUI or the /proc/cpuinfo file for detailed information. To specify the list of CPUs to be affected by your command, use: Isolating a CPU causes all tasks currently running on that CPU to move to the next available CPU. To isolate a CPU, use: Including a CPU allows threads to run on the specified CPU. To include a CPU, use: The cpu_list argument is a list of comma-separated CPU numbers. For example, --cpus=0,2 . | [
"tuna --cpus= cpu_list --run= COMMAND",
"tuna --cpus= cpu_list --isolate",
"tuna --cpus= cpu_list --include"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sec-tuna-cpu-tuning |
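As a concrete illustration of the tuna syntax shown above, the following commands isolate two CPUs and then pin a workload to a third. The CPU numbers are arbitrary, and cyclictest stands in for whatever latency-sensitive command you actually want to run; substitute values that apply to your system.

    # Move all running tasks off CPUs 0 and 1
    tuna --cpus=0,1 --isolate

    # Run a workload confined to CPU 2
    tuna --cpus=2 --run="cyclictest -q"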
Chapter 1. Installing Red Hat Developer Hub on OpenShift Dedicated on GCP using the Operator | Chapter 1. Installing Red Hat Developer Hub on OpenShift Dedicated on GCP using the Operator You can install Developer Hub on OpenShift Dedicated on GCP using the Red Hat Developer Hub Operator. Prerequisites You have a valid GCP account. Your OpenShift Dedicated cluster is running on GCP. For more information, see Creating a cluster on GCP in Red Hat OpenShift Dedicated documentation. You have administrator access to OpenShift Dedicated cluster and GCP project. Procedure In the Administrator perspective of the OpenShift Container Platform web console, click Operators > OperatorHub . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub Operator card. On the Red Hat Developer Hub Operator page, click Install . In the OpenShift Container Platform console, navigate to Installed Operators and select Red Hat Developer Hub Operator . From the Developer Hub Operator page, click Create New Instance and specify the name and namespace where you want to deploy Developer Hub. Configure the required settings such as Git integration, secret management, and user permissions. Review the configuration, select deployment options, and click Create . Verification To access the Developer Hub, navigate to the Developer Hub URL provided in the OpenShift Container Platform web console. Additional resources Administration guide for Red Hat Developer Hub | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_openshift_dedicated_on_google_cloud_platform/proc-install-rhdh-osd-gcp-operator_title-install-rhdh-osd-gcp |
Chapter 12. Distribution Options | Chapter 12. Distribution Options Red Hat Enterprise Linux offers three methods of distribution for third-party applications. RPM Packages RPM Packages are the traditional method of distributing and installing software. RPM Packages are a mature technology with multiple tools and widely disseminated knowledge. Applications are installed as part of the system. The installation tools greatly assist in resolving dependencies. Note Only one version of a package can be installed, making multiple application version installations difficult. To create an RPM package, follow the instructions in the RPM Packaging Guide, Packaging Software . Software Collections A Software Collection is a specially prepared RPM package for an alternative version of an application. A Software Collection is a packaging method used and supported by Red Hat. It is built on top of the RPM package mechanism. Multiple versions of an application can be installed at once. For more information, see Red Hat Software Collections Packaging Guide, What Are Software Collections? To create a software collection package, follow the instructions in the Red Hat Software Collections Packaging Guide, Packaging Software Collections . Containers Docker-formatted containers are a lightweight virtualization method. Applications can be present in multiple independent versions and instances. They can be prepared easily from an RPM package or Software Collection. Interaction with the system can be precisely controlled. Isolation of the application increases security. Containerizing applications or their components enables the orchestration of multiple instances. Additional Resources Red Hat Software Collections Packaging Guide - What Are Software Collections? | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/packaging_understanding-distribution-options |
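To give a feel for the RPM route described above, the skeleton of a spec file is shown below. This is an illustrative sketch only, with an invented package name and file path; a real spec file needs values that match your application, and the full requirements are covered in the RPM Packaging Guide referenced above.

    Name:           hello-app
    Version:        1.0
    Release:        1%{?dist}
    Summary:        Example application packaged as an RPM
    License:        MIT
    Source0:        hello-app-1.0.tar.gz

    %description
    A minimal example package.

    %prep
    %setup -q

    %build
    make %{?_smp_mflags}

    %install
    make install DESTDIR=%{buildroot}

    %files
    /usr/bin/hello-app

Building the package with rpmbuild -ba hello-app.spec then produces the binary and source RPMs, assuming the sources are in place under your rpmbuild tree.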
Appendix A. Tools and tips for troubleshooting and bug reporting | Appendix A. Tools and tips for troubleshooting and bug reporting The troubleshooting information in the following sections might be helpful when diagnosing issues at the start of the installation process. The following sections are for all supported architectures. However, if an issue is for a particular architecture, it is specified at the start of the section. A.1. Dracut Dracut is a tool that manages the initramfs image during the Linux operating system boot process. The dracut emergency shell is an interactive mode that can be initiated while the initramfs image is loaded. You can run basic troubleshooting commands from the dracut emergency shell. For more information, see the Troubleshooting section of the dracut man page on your system. A.2. Using installation log files For debugging purposes, the installation program logs installation actions in files that are located in the /tmp directory. These log files are listed in the following table. Table A.1. Log files generated during the installation Log file Contents /tmp/anaconda.log General messages. /tmp/program.log All external programs run during the installation. /tmp/storage.log Extensive storage module information. /tmp/packaging.log dnf and rpm package installation messages. /tmp/dbus.log Information about the dbus session that is used for installation program modules. /tmp/sensitive-info.log Configuration information that is not part of other logs and not copied to the installed system. /tmp/syslog Hardware-related system messages. This file contains messages from other Anaconda files. If the installation fails, the messages are consolidated into /tmp/anaconda-tb-identifier , where identifier is a random string. After a successful installation, these files are copied to the installed system under the directory /var/log/anaconda/ . However, if the installation is unsuccessful, or if the inst.nosave=all or inst.nosave=logs options are used when booting the installation system, these logs only exist in the installation program's RAM disk. This means that the logs are not saved permanently and are lost when the system is powered down. To store them permanently, copy the files to another system on the network or copy them to a mounted storage device such as a USB flash drive. A.2.1. Creating pre-installation log files Use this procedure to set the inst.debug option to create log files before the installation process starts. These log files contain, for example, the current storage configuration. Prerequisites The Red Hat Enterprise Linux boot menu is open. Procedure Select the Install Red Hat Enterprise Linux option from the boot menu. Press the Tab key on BIOS-based systems or the e key on UEFI-based systems to edit the selected boot options. Append inst.debug to the options. For example: Press the Enter key on your keyboard. The system stores the pre-installation log files in the /tmp/pre-anaconda-logs/ directory before the installation program starts. To access the log files, switch to the console. Change to the /tmp/pre-anaconda-logs/ directory: Additional resources Boot options reference Console logging during installation A.2.2. Transferring installation log files to a USB drive Use this procedure to transfer installation log files to a USB drive. Prerequisites You have backed up data from the USB drive. You are logged into a root account and you have access to the installation program's temporary file system. 
Procedure Press Ctrl + Alt + F2 to access a shell prompt on the system you are installing. Connect a USB flash drive to the system and run the dmesg command: A log detailing all recent events is displayed. At the end of this log, a set of messages is displayed. For example: Note the name of the connected device. In the above example, it is sdb . Navigate to the /mnt directory and create a new directory that serves as the mount target for the USB drive. This example uses the name usb : Mount the USB flash drive onto the newly created directory. In most cases, you do not want to mount the whole drive, but a partition on it. Do not use the name sdb , use the name of the partition you want to write the log files to. In this example, the name sdb1 is used: Verify that you mounted the correct device and partition by accessing it and listing its contents: Copy the log files to the mounted device. Unmount the USB flash drive. If you receive an error message that the target is busy, change your working directory to outside the mount (for example, /). A.2.3. Transferring installation log files over the network Use this procedure to transfer installation log files over the network. Prerequisites You are logged into a root account and you have access to the installation program's temporary file system. Procedure Press Ctrl + Alt + F2 to access a shell prompt on the system you are installing. Switch to the /tmp directory where the log files are located: Copy the log files onto another system on the network using the scp command: Replace user with a valid user name on the target system, address with the target system's address or host name, and path with the path to the directory where you want to save the log files. For example, if you want to log in as john on a system with an IP address of 192.168.0.122 and place the log files into the /home/john/logs/ directory on that system, the command is as follows: When connecting to the target system for the first time, the SSH client asks you to confirm that the fingerprint of the remote system is correct and that you want to continue: Type yes and press Enter to continue. Provide a valid password when prompted. The files are transferred to the specified directory on the target system. A.3. Detecting memory faults using the Memtest86 application Faults in memory (RAM) modules can cause your system to fail unpredictably. In certain situations, memory faults might only cause errors with particular combinations of software. For this reason, you should test your system's memory before you install Red Hat Enterprise Linux. Red Hat Enterprise Linux includes the Memtest86+ memory testing application for BIOS systems only. Support for UEFI systems is currently unavailable. A.3.1. Running Memtest86 Use this procedure to run the Memtest86 application to test your system's memory for faults before you install Red Hat Enterprise Linux. Prerequisites You have accessed the Red Hat Enterprise Linux boot menu. Procedure From the Red Hat Enterprise Linux boot menu, select Troubleshooting > Run a memory test . The Memtest86 application window is displayed and testing begins immediately. By default, Memtest86 performs ten tests in every pass. After the first pass is complete, a message is displayed in the lower part of the window informing you of the current status. Another pass starts automatically. If Memtest86+ detects an error, the error is displayed in the central pane of the window and is highlighted in red. 
The message includes detailed information such as which test detected a problem, the memory location that is failing, and others. In most cases, a single successful pass of all 10 tests is sufficient to verify that your RAM is in good condition. In rare circumstances, however, errors that went undetected during the first pass might appear on subsequent passes. To perform a thorough test on important systems, run the tests overnight or for a few days to complete multiple passes. The amount of time it takes to complete a single full pass of Memtest86+ varies depending on your system's configuration, notably the RAM size and speed. For example, on a system with 2 GiB of DDR2 memory at 667 MHz, a single pass takes 20 minutes to complete. Optional: Follow the on-screen instructions to access the Configuration window and specify a different configuration. To halt the tests and reboot your computer, press the Esc key at any time. Additional resources How to use Memtest86 A.4. Verifying boot media Verifying ISO images helps to avoid problems that are sometimes encountered during installation. These sources include DVD and ISO images stored on a disk or NFS server. Use this procedure to test the integrity of an ISO-based installation source before using it to install Red Hat Enterprise Linux. Prerequisites You have accessed the Red Hat Enterprise Linux boot menu. Procedure From the boot menu, select Test this media & install Red Hat Enterprise Linux 9 to test the boot media. The boot process tests the media and highlights any issues. Optional: You can start the verification process by appending rd.live.check to the boot command line. A.5. Consoles and logging during installation The Red Hat Enterprise Linux installer uses the tmux terminal multiplexer to display and control several windows in addition to the main interface. Each of these windows serves a different purpose; they display several different logs, which can be used to troubleshoot issues during the installation process. One of the windows provides an interactive shell prompt with root privileges, unless this prompt was specifically disabled using a boot option or a Kickstart command. The terminal multiplexer is running in virtual console 1. To switch from the actual installation environment to tmux , press Ctrl + Alt + F1 . To go back to the main installation interface which runs in virtual console 6, press Ctrl + Alt + F6 . During the text mode installation, you start in virtual console 1 ( tmux ), and switching to console 6 will open a shell prompt instead of a graphical interface. The console running tmux has five available windows; their contents are described in the following table, along with keyboard shortcuts. Note that the keyboard shortcuts are two-part: first press Ctrl + b , then release both keys, and press the number key for the window you want to use. You can also use Ctrl + b n , Alt + Tab , and Ctrl + b p to switch to the next, last used, or previous tmux window, respectively. Table A.2. Available tmux windows Shortcut Contents Ctrl + b 1 Main installation program window. Contains text-based prompts (during text mode installation or if you use VNC direct mode), and also some debugging information. Ctrl + b 2 Interactive shell prompt with root privileges. Ctrl + b 3 Installation log; displays messages stored in /tmp/anaconda.log . Ctrl + b 4 Storage log; displays messages related to storage devices and configuration, stored in /tmp/storage.log .
Ctrl + b 5 Program log; displays messages from utilities executed during the installation process, stored in /tmp/program.log . A.6. Saving screenshots You can press Shift + Print Screen at any time during the graphical installation to capture the current screen. The screenshots are saved to /tmp/anaconda-screenshots . A.7. Display settings and device drivers Some video cards have trouble booting into the Red Hat Enterprise Linux graphical installation program. If the installation program does not run using its default settings, it attempts to run in a lower resolution mode. If that fails, the installation program attempts to run in text mode. There are several possible solutions to resolve display issues, most of which involve specifying custom boot options: For more information, see Console boot options . Table A.3. Solutions Solution Description Use the text mode You can attempt to perform the installation using the text mode. For details, refer to Installing RHEL in text mode . Specify the display resolution manually If the installation program fails to detect your screen resolution, you can override the automatic detection and specify it manually. To do this, append the inst.resolution=x option at the boot menu, where x is your display's resolution, for example, 1024x768. Use an alternate video driver You can attempt to specify a custom video driver, overriding the installation program's automatic detection. To specify a driver, use the inst.xdriver=x option, where x is the device driver you want to use (for example, nouveau)*. Perform the installation using VNC If the above options fail, you can use a separate system to access the graphical installation over the network, using the Virtual Network Computing (VNC) protocol. For details on installing using VNC, see Preparing a remote installation by using VNC . If specifying a custom video driver solves your problem, you should report it as a bug in Jira . The installation program should be able to detect your hardware automatically and use the appropriate driver without intervention. | [
"vmlinuz ... inst.debug",
"cd /tmp/pre-anaconda-logs/",
"dmesg",
"[ 170.171135] sd 5:0:0:0: [sdb] Attached SCSI removable disk",
"mkdir usb",
"mount /dev/sdb1 /mnt/usb",
"cd /mnt/usb",
"ls",
"cp /tmp/*log /mnt/usb",
"umount /mnt/usb",
"cd /tmp",
"scp *log user@address:path",
"scp *log [email protected]:/home/john/logs/",
"The authenticity of host '192.168.0.122 (192.168.0.122)' can't be established. ECDSA key fingerprint is a4:60:76:eb:b2:d0:aa:23:af:3d:59:5c:de:bb:c4:42. Are you sure you want to continue connecting (yes/no)?"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/troubleshooting-at-the-start-of-the-installation_rhel-installer |
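In addition to the boot-time media check described above, you can verify a downloaded ISO image before writing it to boot media. The file name below is a placeholder; compare the output against the SHA-256 checksum published for the exact image you downloaded.

    sha256sum rhel-9-x86_64-dvd.iso

If the computed value does not match the published checksum, download the image again before using it as an installation source.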
Updating | Updating Red Hat Enterprise Linux AI 1.3 Upgrading your RHEL AI system and models Red Hat RHEL AI Documentation Team | [
"sudo podman login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json",
"sudo skopeo login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json",
"sudo bootc switch <latest-rhelai-image>",
"sudo bootc switch registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.3",
"sudo reboot -n",
"sudo bootc upgrade --apply",
"ilab config init",
"ilab model download --repository <repository_and_model> --release latest",
"ilab model list"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html-single/updating/index |
Using jlink to customize Java runtime environment | Using jlink to customize Java runtime environment Red Hat build of OpenJDK 21 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/using_jlink_to_customize_java_runtime_environment/index |
Chapter 3. Pipelines | Chapter 3. Pipelines 3.1. About Red Hat OpenShift Pipelines Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions. Note Because Red Hat OpenShift Pipelines releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift Pipelines documentation is now available as separate documentation sets for each minor version of the product. The Red Hat OpenShift Pipelines documentation is available at https://docs.openshift.com/pipelines/ . Documentation for specific versions is available using the version selector drop-down list, or directly by adding the version to the URL, for example, https://docs.openshift.com/pipelines/1.11 . In addition, the Red Hat OpenShift Pipelines documentation is also available on the Red Hat Customer Portal at https://access.redhat.com/documentation/en-us/red_hat_openshift_pipelines/ . For additional information about the Red Hat OpenShift Pipelines life cycle and supported platforms, refer to the Platform Life Cycle Policy . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/cicd/pipelines |
C.3. Creating Encrypted Block Devices in Anaconda | C.3. Creating Encrypted Block Devices in Anaconda You can create encrypted devices during system installation. This allows you to easily configure a system with encrypted partitions. To enable block device encryption, check the "Encrypt System" checkbox when selecting automatic partitioning or the "Encrypt" checkbox when creating an individual partition, software RAID array, or logical volume. After you finish partitioning, you will be prompted for an encryption passphrase. This passphrase will be required to access the encrypted devices. If you have pre-existing LUKS devices and provided correct passphrases for them earlier in the install process the passphrase entry dialog will also contain a checkbox. Checking this checkbox indicates that you would like the new passphrase to be added to an available slot in each of the pre-existing encrypted block devices. Note Checking the "Encrypt System" checkbox on the "Automatic Partitioning" screen and then choosing "Create custom layout" does not cause any block devices to be encrypted automatically. Note You can use kickstart to set a separate passphrase for each new encrypted block device. C.3.1. What Kinds of Block Devices Can Be Encrypted? Most types of block devices can be encrypted using LUKS. From anaconda you can encrypt partitions, LVM physical volumes, LVM logical volumes, and software RAID arrays. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/apcs03 |
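The note above mentions that kickstart can set a separate passphrase for each new encrypted block device. A rough sketch of what that can look like in a kickstart file follows; the partition sizes, volume group name, and passphrases are placeholder values, not recommendations.

    # Kickstart excerpt: per-device passphrases for encrypted storage
    part /boot --fstype=ext4 --size=500
    part pv.01 --size=20000 --encrypted --passphrase="first-passphrase"
    part swap --size=2048 --encrypted --passphrase="second-passphrase"
    volgroup vg00 pv.01
    logvol / --vgname=vg00 --size=15000 --name=root

Each device created with --encrypted receives its own LUKS passphrase, matching the behavior described in the note.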
7.20. ccid | 7.20. ccid 7.20.1. RHSA-2013:0523 - Low: ccid security and bug fix update An updated ccid package that fixes one security issue and one bug are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Chip/Smart Card Interface Devices (CCID) is a USB smart card reader standard followed by most modern smart card readers. The ccid package provides a Generic, USB-based CCID driver for readers, which follow this standard. Security Fix CVE-2010-4530 An integer overflow, leading to an array index error, was found in the way the CCID driver processed a smart card's serial number. A local attacker could use this flaw to execute arbitrary code with the privileges of the user running the PC/SC Lite pcscd daemon (root, by default), by inserting a specially-crafted smart card. Bug Fix BZ#808115 Previously, CCID only recognized smart cards with 5V power supply. With this update, CCID also supports smart cards with different power supply. All users of ccid are advised to upgrade to this updated package, which contains backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ccid |
A. Revision History | A. Revision History Revision History Revision 1-4 Wed Feb 25 2015 Laura Bailey Rebuild for sort order. Revision 1-3.7 Wed Jan 22 2014 Eliska Slobodova Added the missing eCryptfs Technology Preview. Revision 1-3.5 Mon Jun 17 2013 Eliska Slobodova Fixed broken links and links pointing to the old Product Documentation site. Revision 1-3.3 Thu Dec 13 2012 Martin Prpic Added a note about removal of multilib Python packages. Revision 1-3.1 Wed May 20 2012 Martin Prpic Republished Technical Notes to update list of included advisories. For more information, refer to the Important note in the Package Updates chapter of this book. Revision 1-2 Mon May 23 2011 Ryan Lerch Updated Technology Previews section, Removed references to 'Beta' and Updated the XFS on High Availability Technology Preview note. Revision 1-1 Thu May 19 2011 Ryan Lerch Initial Release of the Red Hat Enterprise Linux 6.1 Technical Notes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/appe-technical_notes-revision_history |
Chapter 1. Introduction to undercloud and control plane back up and restore | Chapter 1. Introduction to undercloud and control plane back up and restore The Undercloud and Control Plane Back Up and Restore procedure provides steps for backing up the state of the Red Hat OpenStack Platform 16.0 undercloud and overcloud Controller nodes, hereinafter referred to as control plane nodes, before updates and upgrades. Use the procedure to restore the undercloud and the overcloud control plane nodes to their previous state if an error occurs during an update or upgrade. 1.1. Background The Undercloud and Control Plane Back Up and Restore procedure uses the open source Relax and Recover (ReaR) disaster recovery solution, written in Bash. ReaR creates a bootable image consisting of the latest state of an undercloud or a Control Plane node. ReaR also enables a system administrator to select files for backup. ReaR supports numerous boot media formats, including: ISO USB eSATA PXE The examples in this document were tested using the ISO boot format. ReaR can transport the boot images using multiple protocols, including: HTTP/HTTPS SSH/SCP FTP/SFTP NFS CIFS (SMB) For the purposes of backing up and restoring the Red Hat OpenStack Platform 16.0 undercloud and overcloud Control Plane nodes, the examples in this document were tested using NFS. 1.2. Backup management options ReaR can use both internal and external backup management options. Internal backup management Internal backup options include: tar rsync External backup management External backup management options include both open source and proprietary solutions. Open source solutions include: Bacula Bareos Proprietary solutions include: EMC NetWorker (Legato) HP DataProtector IBM Tivoli Storage Manager (TSM) Symantec NetBackup | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/undercloud_and_control_plane_back_up_and_restore/introduction-osp-ctlplane-br
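To make the later procedures easier to follow, the sketch below shows roughly what a ReaR configuration using the ISO boot format and NFS transport looks like. The NFS server address, export path, and ISO prefix are placeholders, and the exact variables used in the official Red Hat procedure may differ.

    # /etc/rear/local.conf (illustrative values only)
    OUTPUT=ISO
    BACKUP=NETFS
    BACKUP_URL=nfs://192.0.2.50/var/backups/rear
    ISO_PREFIX="undercloud-backup"

With a configuration of this shape, running rear -d -v mkbackup creates the bootable ISO image and writes the backup archive to the NFS share.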
10.4.3. Problems with the X Window System (GUI) | 10.4.3. Problems with the X Window System (GUI) If you are having trouble getting X (the X Window System) to start, you may not have installed it during your installation. If you want X, you can either install the packages from the Red Hat Enterprise Linux installation media or perform an upgrade. If you elect to upgrade, select the X Window System packages, and choose GNOME, KDE, or both, during the upgrade package selection process. Refer to Section 35.3, "Switching to a Graphical Login" for more detail on installing a desktop environment. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch10s04s03 |
4.171. man-pages-ja | 4.171. man-pages-ja 4.171.1. RHBA-2011:0962 - man-pages-ja bug fix update An updated man-pages-ja package that fixes multiple bugs is now available for Red Hat Enterprise Linux 6. The man-pages-ja package contains Japanese translations of the Linux Documentation Project man pages. Bug Fixes BZ# 579641 Prior to this update, the man-pages-ja package did not contain the Japanese translations of the man pages of the "halt", "init", "poweroff", "reboot", "runlevel", "shutdown", and "telinit" commands. With this update, the aforementioned man page translations have been added. BZ# 682122 Prior to this update, the Japanese translation of the getpriority(2) man page contained a typo in the range of "nice values". This update corrects the typo. BZ# 699301 Prior to this update, the Japanese translation of the wall(1) man page contained a typo in the description of the message length limit. This update corrects the typo. BZ# 710704 Prior to this update, the Japanese translation of the tar(1) man page did not contain descriptions of the "--selinux" and "--no-selinux" options. With this update, the missing descriptions have been added. All users of man-pages-ja are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/man-pages-ja |
Chapter 262. PostgresSQL Event Component | Chapter 262. PostgresSQL Event Component Available as of Camel version 2.15 This is a component for Apache Camel which allows for Producing/Consuming PostgreSQL events related to the LISTEN/NOTIFY commands added since PostgreSQL 8.3. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-pgevent</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> URI format The pgevent component uses the following two styles of endpoint URI notation: pgevent:datasource[?parameters] pgevent://host:port/database/channel[?parameters] You can append query options to the URI in the following format, ?option=value&option=value&... 262.1. Options The PostgresSQL Event component has no options. The PostgresSQL Event endpoint is configured using URI syntax: with the following path and query parameters: 262.1.1. Path Parameters (4 parameters): Name Description Default Type host To connect using hostname and port to the database. localhost String port To connect using hostname and port to the database. 5432 Integer database Required The database name String channel Required The channel name String 262.1.2. Query Parameters (7 parameters): Name Description Default Type datasource (common) To connect using the given javax.sql.DataSource instead of using hostname and port. DataSource bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean pass (security) Password for login String user (security) Username for login postgres String 262.2. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.pgevent.enabled Enable pgevent component true Boolean camel.component.pgevent.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 262.3. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-pgevent</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"pgevent:datasource[?parameters] pgevent://host:port/database/channel[?parameters]",
"pgevent:host:port/database/channel"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/pgevent-component |
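Putting the URI notation above together, a concrete consumer endpoint for a database named testdb and a channel named events could look like the following; the host, port, credentials, database, and channel names are placeholders.

    pgevent://localhost:5432/testdb/events?user=postgres&pass=changeit

A route that consumes from this endpoint receives one exchange for each notification sent on the channel, for example by running NOTIFY events, 'payload'; on the PostgreSQL side.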
4.3. Embedded and Associated Objects | 4.3. Embedded and Associated Objects Associated objects and embedded objects can be indexed as part of the root entity index. This allows searches of an entity based on properties of associated objects. 4.3.1. Indexing Associated Objects The aim of the following example is to return places where the associated city is Atlanta via the Lucene query address.city:Atlanta . The place fields are indexed in the Place index. The Place index documents also contain the following fields: address.street address.city These fields are also able to be queried. Example 4.4. Indexing associations 4.3.2. @IndexedEmbedded When using the @IndexedEmbedded technique, data is denormalized in the Lucene index. As a result, the Lucene-based Query API must be updated with any changes in the Place and Address objects to keep the index up to date. Ensure the Place Lucene document is updated when its Address changes by marking the other side of the bidirectional relationship with @ContainedIn . @ContainedIn can be used for both associations pointing to entities and on embedded objects. The @IndexedEmbedded annotation can be nested. Attributes can be annotated with @IndexedEmbedded . The attributes of the associated class are then added to the main entity index. In the following example, the index will contain the following fields: name address.street address.city address.ownedBy_name Example 4.5. Nested usage of @IndexedEmbedded and @ContainedIn The default prefix is propertyName , following the traditional object navigation convention. This can be overridden using the prefix attribute as it is shown on the ownedBy property. Note The prefix cannot be set to the empty string. The depth property is used when the object graph contains a cyclic dependency of classes. For example, if Owner points to Place , the Query Module stops including attributes after reaching the expected depth, or the object graph boundaries. A self-referential class is an example of a cyclic dependency. In the provided example, because depth is set to 1, any @IndexedEmbedded attribute in Owner is ignored. Using @IndexedEmbedded for object associations allows queries to be expressed using Lucene's query syntax. For example: Return places where name contains JBoss and where address city is Atlanta. In Lucene query this is: Return places where name contains JBoss and where the owner's name contains Joe. In Lucene query this is: This operation is similar to the relational join operation, without data duplication. Out of the box, Lucene indexes have no notion of association; the join operation does not exist. It may be beneficial to maintain the normalized relational model while benefiting from the full text index speed and feature richness. An associated object can also be @Indexed . When @IndexedEmbedded points to an entity, the association must be directional and the other side must be annotated using @ContainedIn . If not, the Lucene-based Query API cannot update the root index when the associated entity is updated. In the provided example, a Place index document is updated when the associated Address instance updates. 4.3.3. The targetElement Property It is possible to override the object type targeted using the targetElement parameter. This method can be used when the object type annotated by @IndexedEmbedded is not the object type targeted by the data grid and the Lucene-based Query API. This occurs when interfaces are used instead of their implementation. Example 4.6. 
Using the targetElement property of @IndexedEmbedded | [
"@Indexed public class Place { @Field private String name; @IndexedEmbedded @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.REMOVE}) private Address address; } public class Address { @Field private String street; @Field private String city; @ContainedIn @OneToMany(mappedBy = \"address\") private Set<Place> places; }",
"@Indexed public class Place { @Field private String name; @IndexedEmbedded @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.REMOVE}) private Address address; } public class Address { @Field private String street; @Field private String city; @IndexedEmbedded(depth = 1, prefix = \"ownedBy_\") private Owner ownedBy; @ContainedIn @OneToMany(mappedBy = \"address\") private Set<Place> places; } public class Owner { @Field private String name; }",
"+name:jboss +address.city:atlanta",
"+name:jboss +address.ownedBy_name:joe",
"@Indexed public class Address { @Field private String street; @IndexedEmbedded(depth = 1, prefix = \"ownedBy_\", targetElement = Owner.class) private Person ownedBy; } public class Owner implements Person { ... }"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-embedded_and_associated_objects |
3.3. Hardware Requirements | 3.3. Hardware Requirements For a list of minimum hardware requirements of Red Hat Enterprise Linux 6, see the Red Hat Enterprise Linux technology capabilities and limits page. Also note that the minimum memory requirements listed on that page assume that you create a swap space based on the recommendations in Section 9.15.5, "Recommended Partitioning Scheme" . Systems with low memory (1 GB and less) and less than the recommended amount of swap space may have issues ranging from low responsivity up to and including complete inability to boot after the installation. For installation of Red Hat Enterprise Linux on x86, AMD64, and Intel 64 systems, Red Hat supports the following installation targets: Hard drives connected by a standard internal interface, such as SCSI, SATA, or SAS BIOS/firmware RAID devices Fibre Channel Host Bus Adapters and multipath devices are also supported. Vendor-provided drivers may be required for certain hardware. Red Hat does not support installation to USB drives or SD memory cards. Red Hat also supports installations that use the following virtualization technologies: Xen block devices on Intel processors in Xen virtual machines. VirtIO block devices on Intel processors in KVM virtual machines. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-Supported_Installation_Hardware-x86 |
Chapter 2. Installing a cluster on IBM Power | Chapter 2. Installing a cluster on IBM Power In OpenShift Container Platform version 4.14, you can install a cluster on IBM Power(R) infrastructure that you provision. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. 
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Power requirements You can install OpenShift Container Platform version 4.14 on the following IBM(R) hardware: IBM Power(R)9 or IBM Power(R)10 processor-based systems Note Support for RHCOS functionality for all IBM Power(R)8 models, IBM Power(R) AC922, IBM Power(R) IC922, and IBM Power(R) LC922 is deprecated in OpenShift Container Platform 4.14. Red Hat recommends that you use later hardware models. Hardware requirements Six logical partitions (LPARs) across multiple PowerVM servers Operating system requirements One instance of an IBM Power(R)9 or Power10 processor-based system On your IBM Power(R) instance, set up: Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One LPAR for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM(R) vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. 
Recommended IBM Power system requirements Hardware requirements Six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power(R)9 or IBM Power(R)10 processor-based system On your IBM Power(R) instance, set up: Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One LPAR for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM(R) vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. 
Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.3.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.3.7. 
User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. 
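Before moving on, it can be useful to spot-check that the required forward records resolve from a host that uses the cluster's DNS servers. The following is a minimal sketch, assuming the example cluster name ocp4 , base domain example.com , and a nameserver at 192.168.1.5 that are used in the examples in this section; substitute the values for your environment. The detailed, per-record validation procedure is described in the Validating DNS resolution for user-provisioned infrastructure section.

# Minimal sketch: confirm that the required forward records resolve.
# The cluster name, base domain, and nameserver IP are assumptions taken
# from the examples in this section; replace them with your own values.
nameserver=192.168.1.5
for name in api api-int test.apps bootstrap control-plane0 control-plane1 control-plane2 compute0 compute1; do
  result=$(dig +short "@${nameserver}" "${name}.ocp4.example.com.")
  if [ -z "${result}" ]; then
    echo "FAIL: ${name}.ocp4.example.com did not resolve"
  else
    echo "OK:   ${name}.ocp4.example.com -> ${result}"
  fi
done

The test.apps entry exercises the wildcard *.apps record; any name under the apps subdomain should resolve to the application ingress load balancer.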
Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 
7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. 
Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. 
Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. 
All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.9. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. 
The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) 2.9.1. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. 
Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.9. 
Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.10. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. You can change this value by migrating from OpenShift SDN to OVN-Kubernetes. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 2.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. 
The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.13. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. 
For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.14. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.15. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.16. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. 
ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.17. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 2.18. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 2.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program (without an architecture postfix) runs on ppc64le only. 
This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 2.12.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. 
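If you want to record the digests for all three node types at once, a small loop works well. The following is a minimal sketch, assuming the default Ignition config file names; the ignition-digests.txt output file name is only illustrative:
for f in bootstrap master worker; do
  # print the SHA512 digest for each Ignition config file
  sha512sum <installation_directory>/${f}.ign
done | tee ignition-digests.txt
Each digest can then be supplied to coreos-installer through the --ignition-hash=sha512-<digest> option for the matching node type.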
Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 
2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.12.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. 
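As a quick orientation before the individual options are described, a complete argument set for one statically addressed node might look like the following sketch; the addresses, hostname, and interface name are illustrative, and rd.neednet=1 is included because it is required whenever you add networking arguments manually, as the admonition that follows explains:
rd.neednet=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41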
Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. 
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. 
The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none 2.12.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
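If you prefer to check all three files in one pass, a loop similar to the following sketch reports the HTTP status code for each URL; the <HTTP_server> placeholder is the same server that hosts your Ignition config files:
for f in bootstrap master worker; do
  # a 200 response confirms that this node type can fetch its Ignition config
  curl -k -s -o /dev/null -w "${f}.ign: %{http_code}\n" http://<HTTP_server>/${f}.ign
done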
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation. Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible: 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 
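For reference, a menu entry of the kind that these callouts describe typically resembles the following; the server address, file names, and installation device are placeholders that you must adapt to your environment:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3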
You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform version 4.14, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides added benefits of stronger resilience to hardware failure to achieve higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file. 
For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' You can now continue on to create the cluster. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list and it designates the possible boot devices for when the system is booted in normal mode. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command: USD bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: USD bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list. 2.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. 
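The next step is to take the bootstrap machine out of the load balancer rotation. If you use an HAProxy-based load balancer like the example configuration earlier in this document, a minimal sketch of that cleanup follows; the configuration path and the bootstrap server line pattern are assumptions that depend on your environment:
# comment out the bootstrap server lines in the api-server-6443 and machine-config-server-22623 backends
sudo sed -i '/server bootstrap /s/^/#/' /etc/haproxy/haproxy.cfg
# reload HAProxy so that the change takes effect without dropping existing connections
sudo systemctl reload haproxy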
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. 2.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.16.1.1. Configuring registry storage for IBM Power As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images: in the configs.imageregistry.operator.openshift.io/cluster resource that you edited in the previous step, change the managementState field from Removed to Managed. 2.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 2.19. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting .
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power/installing-ibm-power |
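The dig queries shown above validate one record at a time. A small wrapper script can repeat the same checks before each installation attempt; this is only a sketch — the nameserver address is an assumed placeholder, while the cluster name, base domain, and IP addresses are taken from the examples above and must be replaced with your own values.

#!/usr/bin/env bash
# Spot-check the DNS records required for user-provisioned infrastructure.
NAMESERVER=192.168.1.1          # authoritative nameserver for the cluster domain (assumption)
CLUSTER=ocp4
DOMAIN=example.com

# Forward lookups: API, internal API, a name under the application wildcard, bootstrap.
for name in api api-int console-openshift-console.apps bootstrap; do
    echo "== ${name}.${CLUSTER}.${DOMAIN}"
    dig +noall +answer @"${NAMESERVER}" "${name}.${CLUSTER}.${DOMAIN}"
done

# Reverse lookups for the API and bootstrap addresses used in the examples.
for ip in 192.168.1.5 192.168.1.96; do
    echo "== ${ip}"
    dig +noall +answer @"${NAMESERVER}" -x "${ip}"
done

An empty answer for any name indicates a missing record or a nameserver that is not authoritative for the zone; correct the record before starting the bootstrap machine.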
Creating the Developer Portal | Creating the Developer Portal Red Hat 3scale API Management 2.15 A good developer portal is a must-have to ensure adoption of your API. Create yours in no time. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/creating_the_developer_portal/index
Chapter 54. CertificateAuthority schema reference | Chapter 54. CertificateAuthority schema reference Used in: KafkaSpec Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls . Property Property type Description generateCertificateAuthority boolean If true then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true. generateSecretOwnerReference boolean If true , the Cluster and Client CA Secrets are configured with the ownerReference set to the Kafka resource. If the Kafka resource is deleted when true , the CA Secrets are also deleted. If false , the ownerReference is disabled. If the Kafka resource is deleted when false , the CA Secrets are retained and available for reuse. Default is true . validityDays integer The number of days generated certificates should be valid for. The default is 365. renewalDays integer The number of days in the certificate renewal period. This is the number of days before a certificate expires during which renewal actions may be performed. When generateCertificateAuthority is true, this will cause the generation of a new certificate. When generateCertificateAuthority is false, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30. certificateExpirationPolicy string (one of [replace-key, renew-certificate]) How CA certificate expiration should be handled when generateCertificateAuthority=true . The default is for a new CA certificate to be generated reusing the existing private key. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-CertificateAuthority-reference
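The properties above are easier to read in context. The following Kafka resource is a hedged sketch rather than an excerpt from the reference: it assumes the CertificateAuthority type is configured under Kafka.spec.clusterCa and Kafka.spec.clientsCa, and the cluster name, listener, and numeric values are illustrative placeholders.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  clusterCa:
    generateCertificateAuthority: true      # the operator generates and renews the cluster CA
    validityDays: 730                       # issued certificates are valid for two years
    renewalDays: 60                         # renewal actions begin 60 days before expiry
    certificateExpirationPolicy: renew-certificate
  clientsCa:
    generateCertificateAuthority: false     # clients CA certificate supplied in user-managed Secrets
    generateSecretOwnerReference: false     # keep the CA Secrets if the Kafka resource is deleted
  kafka:
    replicas: 3
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral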
Chapter 33. About Red Hat Process Automation Manager | Chapter 33. About Red Hat Process Automation Manager Red Hat Process Automation Manager is the Red Hat middleware platform for creating business automation applications and microservices. It enables enterprise business and IT users to document, simulate, manage, automate, and monitor business processes and policies. It is designed to empower business and IT users to collaborate more effectively, so business applications can be changed easily and quickly. The product is made up of Business Central and KIE Server. KIE Server is the server where rules and other artifacts are executed. It is used to instantiate and execute rules and solve planning problems. KIE Server provides the runtime environment for business assets and accesses the data stored in the assets repository (knowledge store). Business Central is the graphical user interface where you create and manage business rules that KIE Server executes. It enables you to perform the following tasks: Create, manage, and edit your rules, processes, and related assets. Manage connected KIE Server instances and their KIE containers (deployment units). Execute runtime operations against processes and tasks in KIE Server instances connected to Business Central. Business Central is also available as a standalone JAR file. You can use the Business Central standalone JAR file to run Business Central without needing to deploy it to an application server. Red Hat JBoss Web Server is an enterprise ready web server designed for medium and large applications, based on Tomcat. Red Hat JBoss Web Server provides organizations with a single deployment platform for Java Server Pages (JSP) and Java Servlet technologies, PHP, and CGI. On a Red Hat JBoss Web Server installation, you can install KIE Server and the headless Process Automation Manager controller. Alternatively, you can run the standalone Business Central JAR file. The instructions in this document explain how to install Red Hat Process Automation Manager in a Red Hat JBoss Web Server instance. For instructions on how to install Red Hat Process Automation Manager in other environments, see the following documents: Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 Installing and configuring KIE Server on IBM WebSphere Application Server Installing and configuring KIE Server on Oracle WebLogic Server Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 3 using templates For information about supported components, see the following documents: What is the mapping between Red Hat Process Automation Manager and the Maven library version? Red Hat Process Automation Manager 7 Supported Configurations 33.1. Red Hat Process Automation Manager components The product is made up of Business Central and KIE Server. Business Central is the graphical user interface where you create and manage business rules. You can install Business Central in a Red Hat JBoss EAP instance or on the Red Hat OpenShift Container Platform (OpenShift). Business Central is also available as a standalone JAR file. You can use the Business Central standalone JAR file to run Business Central without deploying it to an application server. KIE Server is the server where rules and other artifacts are executed. It is used to instantiate and execute rules and solve planning problems. 
You can install KIE Server in a Red Hat JBoss EAP instance, in a Red Hat JBoss EAP cluster, on OpenShift, in an Oracle WebLogic server instance, in an IBM WebSphere Application Server instance, or as a part of Spring Boot application. You can configure KIE Server to run in managed or unmanaged mode. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). A KIE container is a specific version of a project. If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain KIE containers. On a Red Hat JBoss Web Server installation, you can install KIE Server and the headless Process Automation Manager controller. Alternatively, you can run the standalone Business Central JAR file. 33.2. Red Hat Process Automation Manager roles and users To access Business Central or KIE Server, you must create users and assign them appropriate roles before the servers are started. You can create users and roles when you install Business Central or KIE Server. If both Business Central and KIE Server are running on a single instance, a user who is authenticated for Business Central can also access KIE Server. However, if Business Central and KIE Server are running on different instances, a user who is authenticated for Business Central must be authenticated separately to access KIE Server. For example, if a user who is authenticated on Business Central but not authenticated on KIE Server tries to view or manage process definitions in Business Central, a 401 error is logged in the log file and the Invalid credentials to load data from remote server. Contact your system administrator. message appears in Business Central. This section describes Red Hat Process Automation Manager user roles. Note The admin , analyst , developer , manager , process-admin , user , and rest-all roles are reserved for Business Central. The kie-server role is reserved for KIE Server. For this reason, the available roles can differ depending on whether Business Central, KIE Server, or both are installed. admin : Users with the admin role are the Business Central administrators. They can manage users and create, clone, and manage repositories. They have full access to make required changes in the application. Users with the admin role have access to all areas within Red Hat Process Automation Manager. analyst : Users with the analyst role have access to all high-level features. They can model and execute their projects. However, these users cannot add contributors to spaces or delete spaces in the Design Projects view. Access to the Deploy Execution Servers view, which is intended for administrators, is not available to users with the analyst role. However, the Deploy button is available to these users when they access the Library perspective. developer : Users with the developer role have access to almost all features and can manage rules, models, process flows, forms, and dashboards. They can manage the asset repository, they can create, build, and deploy projects. Only certain administrative functions such as creating and cloning a new repository are hidden from users with the developer role. manager : Users with the manager role can view reports. These users are usually interested in statistics about the business processes and their performance, business indicators, and other business-related reporting. 
A user with this role has access only to process and task reports. process-admin : Users with the process-admin role are business process administrators. They have full access to business processes, business tasks, and execution errors. These users can also view business reports and have access to the Task Inbox list. user : Users with the user role can work on the Task Inbox list, which contains business tasks that are part of currently running processes. Users with this role can view process and task reports and manage processes. rest-all : Users with the rest-all role can access Business Central REST capabilities. kie-server : Users with the kie-server role can access KIE Server REST capabilities. This role is mandatory for users to have access to Manage and Track views in Business Central. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/installing-con_install-on-jws |
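On a Red Hat JBoss Web Server installation, the role names above still have to be attached to concrete users in the web server's user store. The fragment below is only a sketch that assumes a Tomcat realm backed by tomcat-users.xml; the file location, user names, and passwords are placeholders, not values defined by this guide.

<?xml version="1.0" encoding="UTF-8"?>
<!-- <JWS_HOME>/tomcat/conf/tomcat-users.xml (path is an assumption) -->
<tomcat-users xmlns="http://tomcat.apache.org/xml">
  <!-- roles referenced by KIE Server and the headless Process Automation Manager controller -->
  <role rolename="kie-server"/>
  <role rolename="admin"/>

  <!-- service account used by Business Central to talk to KIE Server -->
  <user username="controllerUser" password="controllerUser1234;" roles="kie-server"/>

  <!-- human administrator with full access -->
  <user username="pamAdmin" password="password1!" roles="admin,kie-server"/>
</tomcat-users>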
Chapter 6. Datasource and Resource Adapter Tuning | Chapter 6. Datasource and Resource Adapter Tuning Connection pools are the principal tool that JBoss EAP uses to optimize performance for environments that use datasources, such as relational databases, or resource adapters. Allocating and deallocating resources for datasource and resource adapter connections is very expensive in terms of time and system resources. Connection pooling reduces the cost of connections by creating a 'pool' of connections that are available to applications. Before configuring your connection pool for optimal performance, you must monitor the datasource pool statistics or resource adapter statistics under load to determine the appropriate settings for your environment. 6.1. Monitoring Pool Statistics 6.1.1. Datasource Statistics When statistics collection is enabled for a datasource, you can view runtime statistics for the datasource. 6.1.1.1. Enabling Datasource Statistics By default, datasource statistics are not enabled. You can enable datasource statistics collection using the management CLI or the management console . Enable Datasource Statistics Using the Management CLI The following management CLI command enables the collection of statistics for the ExampleDS datasource. Note In a managed domain, precede this command with /profile= PROFILE_NAME . Reload the server for the changes to take effect. Enable Datasource Statistics Using the Management Console Use the following steps to enable statistics collection for a datasource using the management console. Navigate to datasources in standalone or domain mode. Use the following navigation in the standalone mode: Configuration Subsystems Datasources & Drivers Datasources Use the following navigation in the domain mode: Configuration Profiles full Datasources & Drivers Datasources Select the datasource and click View . Click Edit under the Attributes tab. Set the Statistics Enabled field to ON and click Save . A popup appears indicating that the changes require a reload in order to take effect. Reload the server. For a standalone server, click the Reload link from the popup to reload the server. For a managed domain, click the Topology link from the popup. From the Topology tab, select the appropriate server and select the Reload drop down option to reload the server. 6.1.1.2. Viewing Datasource Statistics You can view runtime statistics for a datasource using the management CLI or management console . View Datasource Statistics Using the Management CLI The following management CLI command retrieves the core pool statistics for the ExampleDS datasource. Note In a managed domain, precede these commands with /host= HOST_NAME /server= SERVER_NAME . The following management CLI command retrieves the JDBC statistics for the ExampleDS datasource. Note Since statistics are runtime information, be sure to specify the include-runtime=true argument. See Datasource Statistics for a detailed list of all available statistics. View Datasource Statistics Using the Management Console To view datasource statistics from the management console, navigate to the Datasources subsystem from the Runtime tab, select a datasource, and click View . See Datasource Statistics for a detailed list of all available statistics. 6.1.2. Resource Adapter Statistics You can view core runtime statistics for deployed resource adapters. See the Resource Adapter Statistics appendix for a detailed list of all available statistics. 
Enable Resource Adapter Statistics By default, resource adapter statistics are not enabled. The following management CLI command enables the collection of statistics for a simple resource adapter myRA.rar with a connection factory bound in JNDI as java:/eis/AcmeConnectionFactory : Note In a managed domain, precede the command with /host= HOST_NAME /server= SERVER_NAME / . View Resource Adapter Statistics Resource adapter statistics can be retrieved from the management CLI. The following management CLI command returns statistics for the resource adapter myRA.rar with a connection factory bound in JNDI as java:/eis/AcmeConnectionFactory . Note In a managed domain, precede the command with /host= HOST_NAME /server= SERVER_NAME / . Note Since statistics are runtime information, be sure to specify the include-runtime=true argument. 6.2. Pool Attributes This section details advice for selected pool attributes that can be configured for optimal datasource or resource adapter performance. For instructions on how to configure each of these attributes, see: Configuring Datasource Pool Attributes Configuring Resource Adapter Pool Attributes Minimum Pool Size The min-pool-size attribute defines the minimum size of the connection pool. The default minimum is zero connections. With a zero min-pool-size , connections are created and placed in the pool when the first transactions occur. If min-pool-size is too small, it results in increased latency while executing initial database commands because new connections might need to be established. If min-pool-size is too large, it results in wasted connections to the datasource or resource adapter. During periods of inactivity the connection pool will shrink, possibly to the min-pool-size value. Red Hat recommends that you set min-pool-size to the number of connections that allow for ideal on-demand throughput for your applications. Maximum Pool Size The max-pool-size attribute defines the maximum size of the connection pool. It is an important performance parameter because it limits the number of active connections, and thus also limits the amount of concurrent activity. If max-pool-size is too small, it can result in requests being unnecessarily blocked. If max-pool-size is too large, it can result in your JBoss EAP environment, datasource, or resource adapter using more resources than it can handle. Red Hat recommends that you set the max-pool-size to at least 15% higher than an acceptable MaxUsedCount observed after monitoring performance under load. This allows some buffer for higher than expected conditions. Prefill The pool-prefill attribute specifies whether JBoss EAP will prefill the connection pool with the minimum number of connections when JBoss EAP starts. The default value is false . When pool-prefill is set to true , JBoss EAP uses more resources at startup, but there will be less latency for initial transactions. Red Hat recommends to set pool-prefill to true if you have optimized the min-pool-size . Strict Minimum The pool-use-strict-min attribute specifies whether JBoss EAP allows the number of connections in the pool to fall below the specified minimum. If pool-use-strict-min is set to true , JBoss EAP will not allow the number of connections to temporarily fall below the specified minimum. The default value is false . 
Although a minimum number of pool connections is specified, when JBoss EAP closes connections, for instance, if the connection is idle and has reached the timeout, the closure may cause the total number of connections to temporarily fall below the minimum before a new connection is created and added to the pool. Timeouts There are a number of timeout options that are configurable for a connection pool, but a significant one for performance tuning is idle-timeout-minutes . The idle-timeout-minutes attribute specifies the maximum time, in minutes, a connection may be idle before being closed. As idle connections are closed, the number of connections in the pool will shrink down to the specified minimum. The longer the timeout, the more resources are used but requests might be served faster. The lower the timeout, the less resources are used but requests might need to wait for a new connection to be created. 6.3. Configuring Pool Attributes 6.3.1. Configuring Datasource Pool Attributes Prerequisites Install a JDBC driver. See JDBC Drivers in the JBoss EAP Configuration Guide . Create a datasource. See Creating Datasources in the JBoss EAP Configuration Guide . You can configure datasource pool attributes using either the management CLI or the management console: To use the management console, navigate to Configuration Subsystems Datasources & Drivers Datasources , select your datasource, and click View . The pool options are configurable under the datasource Pool tab. Timeout options are configurable under the datasource Timeouts tab. To use the management CLI, execute the following command: For example, to set the ExampleDS datasource min-pool-size attribute to a value of 5 connections, use the following command: 6.3.2. Configuring Resource Adapter Pool Attributes Prerequisites Deploy your resource adapter and add a connection definition. See Configuring Resource Adapters in the JBoss EAP Configuration Guide . You can configure resource adapter pool attributes using either the management CLI or the management console: To use the management console, navigate to Configuration Subsystems Resource Adapters , select your resource adapter, click View , and select Connection Definitions in the left menu. The pool options are configurable under the Pool tab. Timeout options are configurable under the Attributes tab. To use the management CLI, execute the following command: For example, to set the my_RA resource adapter my_CD connection definition min-pool-size attribute to a value of 5 connections, use the following command: | [
"/subsystem=datasources/data-source=ExampleDS:write-attribute(name=statistics-enabled,value=true)",
"/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"ActiveCount\" => 1, \"AvailableCount\" => 20, \"AverageBlockingTime\" => 0L, \"AverageCreationTime\" => 122L, \"AverageGetTime\" => 128L, \"AveragePoolTime\" => 0L, \"AverageUsageTime\" => 0L, \"BlockingFailureCount\" => 0, \"CreatedCount\" => 1, \"DestroyedCount\" => 0, \"IdleCount\" => 1, }",
"/subsystem=datasources/data-source=ExampleDS/statistics=jdbc:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"PreparedStatementCacheAccessCount\" => 0L, \"PreparedStatementCacheAddCount\" => 0L, \"PreparedStatementCacheCurrentSize\" => 0, \"PreparedStatementCacheDeleteCount\" => 0L, \"PreparedStatementCacheHitCount\" => 0L, \"PreparedStatementCacheMissCount\" => 0L, \"statistics-enabled\" => true } }",
"/deployment= myRA.rar /subsystem=resource-adapters/statistics=statistics/connection-definitions= java\\:\\/eis\\/AcmeConnectionFactory :write-attribute(name=statistics-enabled,value=true)",
"deployment= myRA.rar /subsystem=resource-adapters/statistics=statistics/connection-definitions= java\\:\\/eis\\/AcmeConnectionFactory :read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"ActiveCount\" => \"1\", \"AvailableCount\" => \"20\", \"AverageBlockingTime\" => \"0\", \"AverageCreationTime\" => \"0\", \"CreatedCount\" => \"1\", \"DestroyedCount\" => \"0\", \"InUseCount\" => \"0\", \"MaxCreationTime\" => \"0\", \"MaxUsedCount\" => \"1\", \"MaxWaitCount\" => \"0\", \"MaxWaitTime\" => \"0\", \"TimedOut\" => \"0\", \"TotalBlockingTime\" => \"0\", \"TotalCreationTime\" => \"0\" } }",
"/subsystem=datasources/data-source= DATASOURCE_NAME /:write-attribute(name= ATTRIBUTE_NAME ,value= ATTRIBUTE_VALUE )",
"/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=min-pool-size,value=5)",
"/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER_NAME /connection-definitions= CONNECTION_DEFINITION_NAME :write-attribute(name= ATTRIBUTE_NAME ,value= ATTRIBUTE_VALUE )",
"/subsystem=resource-adapters/resource-adapter=my_RA/connection-definitions=my_CD:write-attribute(name=min-pool-size,value=5)"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/performance_tuning_guide/datasource_and_resource_adapter_tuning |
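Because the pool attributes described above are usually tuned together, it can be convenient to apply them in a single management CLI batch and reload once. The sizing values below are illustrative assumptions to adapt to your own monitoring results; only the attribute names come from this chapter.

# pool-tuning.cli - run with: jboss-cli.sh --connect --file=pool-tuning.cli
# (in a managed domain, prefix each address with /profile=PROFILE_NAME)
batch
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=min-pool-size,value=10)
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=max-pool-size,value=50)
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=pool-prefill,value=true)
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=pool-use-strict-min,value=false)
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=idle-timeout-minutes,value=5)
run-batch
reload

After the reload, watch the pool statistics under load again and keep max-pool-size roughly 15% above the observed MaxUsedCount, as recommended above.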
probe::nfs.fop.write_iter | probe::nfs.fop.write_iter Name probe::nfs.fop.write_iter - NFS client write_iter file operation Synopsis nfs.fop.write_iter Values parent_name parent dir name count bytes to write pos offset of the file dev device identifier file_name file name ino inode number | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-write-iter
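The entry lists the probe's values but no usage. A minimal SystemTap script that prints them, assuming the systemtap package and matching kernel debuginfo are installed, could look like this:

#!/usr/bin/stap
# nfs-write-watch.stp - trace NFS client write_iter file operations.
# Run as root: stap nfs-write-watch.stp
probe nfs.fop.write_iter {
    printf("%s/%s (ino %d, dev %d): %d bytes at offset %d\n",
           parent_name, file_name, ino, dev, count, pos)
}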
Monitoring APIs | Monitoring APIs OpenShift Container Platform 4.15 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/monitoring_apis/index |
Chapter 8. Using a Red Hat Image Builder Image for Provisioning | Chapter 8. Using a Red Hat Image Builder Image for Provisioning In Satellite, you can integrate with RHEL web console to perform actions and monitor your hosts. Using RHEL web console, you can access Red Hat Image Builder and build images that you can then upload to a HTTP server and use this image to provision hosts. When you configure Satellite for image provisioning, Anaconda installer partitions disks, downloads and mounts the image and copies files over to a host. The preferred image type is TAR. Note The blueprint to build the TAR image must always include a kernel package. For more information about integrating RHEL web console with Satellite, see Host Management and Monitoring Using RHEL web console in the Managing Hosts guide. Prerequisite An existing TAR image created using Red Hat Image Builder. Procedure On Satellite, create a custom product, add a custom file repository to this product, and upload the image to the repository. For more information, see Importing Individual ISO Images and Files in the Content Management Guide . In the Satellite web UI, navigate to Configure > Host Groups , and select the host group that you want to use. Click the Parameters tab, and then click Add Parameter . In the Name field, enter kickstart_liveimg . From the Type list, select string . In the Value field, enter the absolute path or a relative path in the following format custom/ product / repository / image_name that points to the exact location where you store the image. Click Submit to save your changes. You can use this image for bare metal provisioning and provisioning using a compute resource. For more information about bare metal provisioning, see Chapter 6, Using PXE to Provision Hosts . For more information about provisioning with different compute resources, see the relevant chapter for the compute resource that you want to use. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/using_an_image_builder_image_for_provisioning_provisioning |
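The note above requires the blueprint to include a kernel package. A minimal blueprint that satisfies that requirement might look like the following; the name, description, and version are placeholders, and you would add your own packages before building the TAR image.

# satellite-tar-image.toml - hypothetical Image Builder blueprint
name = "satellite-tar-image"
description = "TAR image for image-based provisioning through Satellite"
version = "0.0.1"

[[packages]]
name = "kernel"        # required: the provisioning image must contain a kernel
version = "*"

Assuming the composer-cli tool is available, you would push the blueprint with composer-cli blueprints push satellite-tar-image.toml, build it with composer-cli compose start satellite-tar-image tar, download the result, and upload it to the custom file repository described in the procedure.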
Chapter 5. DNS Operator in OpenShift Container Platform | Chapter 5. DNS Operator in OpenShift Container Platform The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods, enabling DNS-based Kubernetes Service discovery in OpenShift Container Platform. 5.1. DNS Operator The DNS Operator implements the dns API from the operator.openshift.io API group. The Operator deploys CoreDNS using a daemon set, creates a service for the daemon set, and configures the kubelet to instruct pods to use the CoreDNS service IP address for name resolution. Procedure The DNS Operator is deployed during installation with a Deployment object. Use the oc get command to view the deployment status: USD oc get -n openshift-dns-operator deployment/dns-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h Use the oc get command to view the state of the DNS Operator: USD oc get clusteroperator/dns Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m AVAILABLE , PROGRESSING and DEGRADED provide information about the status of the operator. AVAILABLE is True when at least 1 pod from the CoreDNS daemon set reports an Available status condition. 5.2. Changing the DNS Operator managementState DNS manages the CoreDNS component to provide a name resolution service for pods and services in the cluster. The managementState of the DNS Operator is set to Managed by default, which means that the DNS Operator is actively managing its resources. You can change it to Unmanaged , which means the DNS Operator is not managing its resources. The following are use cases for changing the DNS Operator managementState : You are a developer and want to test a configuration change to see if it fixes an issue in CoreDNS. You can stop the DNS Operator from overwriting the fix by setting the managementState to Unmanaged . You are a cluster administrator and have reported an issue with CoreDNS, but need to apply a workaround until the issue is fixed. You can set the managementState field of the DNS Operator to Unmanaged to apply the workaround. Procedure Change managementState DNS Operator: oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}' 5.3. Controlling DNS pod placement The DNS Operator has two daemon sets: one for CoreDNS and one for managing the /etc/hosts file. The daemon set for /etc/hosts must run on every node host to add an entry for the cluster image registry to support pulling images. Security policies can prohibit communication between pairs of nodes, which prevents the daemon set for CoreDNS from running on every node. As a cluster administrator, you can use a custom node selector to configure the daemon set for CoreDNS to run or not run on certain nodes. Prerequisites You installed the oc CLI. You are logged in to the cluster with a user with cluster-admin privileges. 
Procedure To prevent communication between certain nodes, configure the spec.nodePlacement.nodeSelector API field: Modify the DNS Operator object named default : USD oc edit dns.operator/default Specify a node selector that includes only control plane nodes in the spec.nodePlacement.nodeSelector API field: spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: "" To allow the daemon set for CoreDNS to run on nodes, configure a taint and toleration: Modify the DNS Operator object named default : USD oc edit dns.operator/default Specify a taint key and a toleration for the taint: spec: nodePlacement: tolerations: - effect: NoExecute key: "dns-only" operators: Equal value: abc tolerationSeconds: 3600 1 1 If the taint is dns-only , it can be tolerated indefinitely. You can omit tolerationSeconds . 5.4. View the default DNS Every new OpenShift Container Platform installation has a dns.operator named default . Procedure Use the oc describe command to view the default dns : USD oc describe dns.operator/default Example output Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS ... Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2 ... 1 The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names. 2 The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range. To find the service CIDR of your cluster, use the oc get command: USD oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}' Example output [172.30.0.0/16] 5.5. Using DNS forwarding You can use DNS forwarding to override the forwarding configuration identified in /etc/resolv.conf on a per-zone basis by specifying which name server should be used for a given zone. If the forwarded zone is the Ingress domain managed by OpenShift Container Platform, then the upstream name server must be authorized for the domain. Procedure Modify the DNS Operator object named default : USD oc edit dns.operator/default This allows the Operator to create and update the ConfigMap named dns-default with additional server configuration blocks based on Server . If none of the servers has a zone that matches the query, then name resolution falls back to the name servers that are specified in /etc/resolv.conf . Sample DNS apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: foo-server 1 zones: 2 - example.com forwardPlugin: upstreams: 3 - 1.1.1.1 - 2.2.2.2:5353 - name: bar-server zones: - bar.com - example.com forwardPlugin: upstreams: - 3.3.3.3 - 4.4.4.4:5454 1 name must comply with the rfc6335 service name syntax. 2 zones must conform to the definition of a subdomain in rfc1123 . The cluster domain, cluster.local , is an invalid subdomain for zones . 3 A maximum of 15 upstreams is allowed per forwardPlugin . Note If servers is undefined or invalid, the ConfigMap only contains the default server. View the ConfigMap: USD oc get configmap/dns-default -n openshift-dns -o yaml Sample DNS ConfigMap based on sample DNS apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . 
/etc/resolv.conf { policy sequential } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns 1 Changes to the forwardPlugin triggers a rolling update of the CoreDNS daemon set. Additional resources For more information on DNS forwarding, see the CoreDNS forward documentation . 5.6. DNS Operator status You can inspect the status and view the details of the DNS Operator using the oc describe command. Procedure View the status of the DNS Operator: USD oc describe clusteroperators/dns 5.7. DNS Operator logs You can view DNS Operator logs by using the oc logs command. Procedure View the logs of the DNS Operator: USD oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator | [
"oc get -n openshift-dns-operator deployment/dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h",
"oc get clusteroperator/dns",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m",
"patch dns.operator.openshift.io default --type merge --patch '{\"spec\":{\"managementState\":\"Unmanaged\"}}'",
"oc edit dns.operator/default",
"spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc edit dns.operator/default",
"spec: nodePlacement: tolerations: - effect: NoExecute key: \"dns-only\" operators: Equal value: abc tolerationSeconds: 3600 1",
"oc describe dns.operator/default",
"Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2",
"oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}'",
"[172.30.0.0/16]",
"oc edit dns.operator/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: foo-server 1 zones: 2 - example.com forwardPlugin: upstreams: 3 - 1.1.1.1 - 2.2.2.2:5353 - name: bar-server zones: - bar.com - example.com forwardPlugin: upstreams: - 3.3.3.3 - 4.4.4.4:5454",
"oc get configmap/dns-default -n openshift-dns -o yaml",
"apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { policy sequential } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns",
"oc describe clusteroperators/dns",
"oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/dns-operator |
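A quick way to confirm that pods actually receive the CoreDNS service IP described in section 5.4 is to read /etc/resolv.conf from a short-lived pod. The image reference below is an assumption; any small image that provides cat works.

$ oc run dns-check --image=registry.access.redhat.com/ubi8/ubi-minimal \
    --restart=Never --command -- cat /etc/resolv.conf
$ oc logs dns-check    # once the pod completes; expect "nameserver 172.30.0.10" and the cluster search domains
$ oc delete pod dns-check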
14.4. OpenSSH Clients | To connect to an OpenSSH server from a client machine, you must have the openssh-clients and openssh packages installed (see Section 8.2.4, "Installing Packages" for more information on how to install new packages in Red Hat Enterprise Linux). 14.4.1. Using the ssh Utility The ssh utility allows you to log in to a remote machine and execute commands there. It is a secure replacement for the rlogin , rsh , and telnet programs. Similarly to the telnet command, log in to a remote machine by using the following command: ssh hostname For example, to log in to a remote machine named penguin.example.com , type the following at a shell prompt: This will log you in with the same user name you are using on the local machine. If you want to specify a different user name, use a command in the following form: ssh username @ hostname For example, to log in to penguin.example.com as john , type: The first time you initiate a connection, you will be presented with a message similar to this: Type yes to confirm. You will see a notice that the server has been added to the list of known hosts, and a prompt asking for your password: Important Update the host key of an SSH server if the key changes. The client notifies the user that the connection cannot proceed until the server's host key is deleted from the ~/.ssh/known_hosts file. Contact the system administrator of the SSH server to verify the server is not compromised, then remove the line that begins with the name of the remote machine from that file. After entering the password, you will be provided with a shell prompt for the remote machine. Alternatively, the ssh program can be used to execute a command on the remote machine without logging in to a shell prompt: ssh [ username @ ] hostname command For example, the /etc/redhat-release file provides information about the Red Hat Enterprise Linux version. To view the contents of this file on penguin.example.com , type: After you enter the correct password, the contents of the file will be displayed, and you will return to your local shell prompt. | [
"~]$ ssh penguin.example.com",
"~]$ ssh john@penguin.example.com",
"The authenticity of host 'penguin.example.com' can't be established. RSA key fingerprint is 94:68:3a:3a:bc:f3:9a:9b:01:5d:b3:07:38:e2:11:0c. Are you sure you want to continue connecting (yes/no)?",
"Warning: Permanently added 'penguin.example.com' (RSA) to the list of known hosts. john@penguin.example.com's password:",
"~]$ ssh john@penguin.example.com cat /etc/redhat-release john@penguin.example.com's password: Red Hat Enterprise Linux Server release 6.2 (Santiago)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-ssh-clients |
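The Important box above describes removing the outdated host key line from ~/.ssh/known_hosts by hand. The ssh-keygen utility can make the same edit for you; the host name below is the example host used throughout this section.

~]$ ssh-keygen -R penguin.example.com

The next connection to the host then prompts you to verify and accept its current key, exactly as on a first connection.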
Chapter 6. Creating and managing certificate profiles in Identity Management | Chapter 6. Creating and managing certificate profiles in Identity Management Certificate profiles are used by the Certificate Authority (CA) when signing certificates to determine if a certificate signing request (CSR) is acceptable, and if so what features and extensions are present on the certificate. A certificate profile is associated with issuing a particular type of certificate. By combining certificate profiles and CA access control lists (ACLs), you can define and control access to custom certificate profiles. In describing how to create certificate profiles, the procedures use S/MIME certificates as an example. Some email programs support digitally signed and encrypted email using the Secure Multipurpose Internet Mail Extension (S/MIME) protocol. Using S/MIME to sign or encrypt email messages requires the sender of the message to have an S/MIME certificate. What is a certificate profile Creating a certificate profile What is a CA access control list Defining a CA ACL to control access to certificate profiles Using certificate profiles and CA ACLs to issue certificates Modifying a certificate profile Certificate profile configuration parameters 6.1. What is a certificate profile? You can use certificate profiles to determine the content of certificates, as well as constraints for issuing the certificates, such as the following: The signing algorithm to use to encipher the certificate signing request. The default validity of the certificate. The revocation reasons that can be used to revoke a certificate. If the common name of the principal is copied to the subject alternative name field. The features and extensions that should be present on the certificate. A single certificate profile is associated with issuing a particular type of certificate. You can define different certificate profiles for users, services, and hosts in IdM. IdM includes the following certificate profiles by default: caIPAserviceCert IECUserRoles KDCs_PKINIT_Certs (used internally) In addition, you can create and import custom profiles, which allow you to issue certificates for specific purposes. For example, you can restrict the use of a particular profile to only one user or one group, preventing other users and groups from using that profile to issue a certificate for authentication. To create custom certificate profiles, use the ipa certprofile command. Additional resources See the ipa help certprofile command. 6.2. Creating a certificate profile Follow this procedure to create a certificate profile through the command line by creating a profile configuration file for requesting S/MIME certificates. Procedure Create a custom profile by copying an existing default profile: Open the newly created profile configuration file in a text editor. Change the Profile ID to a name that reflects the usage of the profile, for example smime . Note When you are importing a newly created profile, the profileId field, if present, must match the ID specified on the command line. Update the Extended Key Usage configuration. The default Extended Key Usage extension configuration is for TLS server and client authentication. For example for S/MIME, the Extended Key Usage must be configured for email protection: Import the new profile: Verification Verify the new certificate profile has been imported: Additional resources See ipa help certprofile . See RFC 5280, section 4.2.1.12 . 6.3. What is a CA access control list? 
Certificate Authority access control list (CA ACL) rules define which profiles can be used to issue certificates to which principals. You can use CA ACLs to do this, for example: Determine which user, host, or service can be issued a certificate with a particular profile Determine which IdM certificate authority or sub-CA is permitted to issue the certificate For example, using CA ACLs, you can restrict use of a profile intended for employees working from an office located in London only to users that are members of the London office-related IdM user group. The ipa caacl utility for management of CA ACL rules allows privileged users to add, display, modify, or delete a specified CA ACL. Additional resources See ipa help caacl . 6.4. Defining a CA ACL to control access to certificate profiles Follow this procedure to use the caacl utility to define a CA Access Control List (ACL) rule to allow users in a group access to a custom certificate profile. In this case, the procedure describes how to create an S/MIME user's group and a CA ACL to allow users in that group access to the smime certificate profile. Prerequisites Make sure that you have obtained IdM administrator's credentials. Procedure Create a new group for the users of the certificate profile: Create a new user to add to the smime_user_group group: Add the smime_user to the smime_users_group group: Create the CA ACL to allow users in the group to access the certificate profile: Add the user group to the CA ACL: Add the certificate profile to the CA ACL: Verification View the details of the CA ACL you created: Additional resources See ipa man page on your system. See ipa help caacl . 6.5. Using certificate profiles and CA ACLs to issue certificates You can request certificates using a certificate profile when permitted by the Certificate Authority access control lists (CA ACLs). Follow this procedure to request an S/MIME certificate for a user using a custom certificate profile which has been granted access through a CA ACL. Prerequisites Your certificate profile has been created. An CA ACL has been created which permits the user to use the required certificate profile to request a certificate. Note You can bypass the CA ACL check if the user performing the cert-request command: Is the admin user. Has the Request Certificate ignoring CA ACLs permission. Procedure Generate a certificate request for the user. For example, using OpenSSL: Request a new certificate for the user from the IdM CA: Optionally pass the --ca sub-CA_name option to the command to request the certificate from a sub-CA instead of the root CA. Verification Verify the newly-issued certificate is assigned to the user: Additional resources ipa(a) and openssl(lssl) man pages on your system ipa help user-show command ipa help cert-request command 6.6. Modifying a certificate profile Follow this procedure to modify certificate profiles directly through the command line using the ipa certprofile-mod command. Procedure Determine the certificate profile ID for the certificate profile you are modifying. To display all certificate profiles currently stored in IdM: Modify the certificate profile description. 
For example, if you created a custom certificate profile for S/MIME certificates using an existing profile, change the description in line with the new usage: Open your customer certificate profile file in a text editor and modify to suit your requirements: For details on the options which can be configured in the certificate profile configuration file, see Certificate profile configuration parameters . Update the existing certificate profile configuration file: Verification Verify the certificate profile has been updated: Additional resources See ipa(a) man page on your system. See ipa help certprofile-mod . 6.7. Certificate profile configuration parameters Certificate profile configuration parameters are stored in a profile_name .cfg file in the CA profile directory, /var/lib/pki/pki-tomcat/ca/profiles/ca . All of the parameters for a profile - defaults, inputs, outputs, and constraints - are configured within a single policy set. A policy set for a certificate profile has the name policyset. policyName.policyNumber . For example, for policy set serverCertSet : Each policy set contains a list of policies configured for the certificate profile by policy ID number in the order in which they should be evaluated. The server evaluates each policy set for each request it receives. When a single certificate request is received, one set is evaluated, and any other sets in the profile are ignored. When dual key pairs are issued, the first policy set is evaluated for the first certificate request, and the second set is evaluated for the second certificate request. You do not need more than one policy set when issuing single certificates or more than two sets when issuing dual key pairs. Table 6.1. Certificate profile configuration file parameters Parameter Description desc A free text description of the certificate profile, which is shown on the end-entities page. For example, desc=This certificate profile is for enrolling server certificates with agent authentication . enable Enables the profile so it is accessible through the end-entities page. For example, enable=true . auth.instance_id Sets the authentication manager plug-in to use to authenticate the certificate request. For automatic enrollment, the CA issues a certificate immediately if the authentication is successful. If authentication fails or there is no authentication plug-in specified, the request is queued to be manually approved by an agent. For example, auth.instance_id=AgentCertAuth . authz.acl Specifies the authorization constraint. This is predominantly used to set the group evaluation Access Control List (ACL). For example, the caCMCUserCert parameter requires that the signer of the CMC request belongs to the Certificate Manager Agents group: authz.acl=group="Certificate Manager Agents In directory-based user certificate renewal, this option is used to ensure that the original requester and the currently-authenticated user are the same. An entity must authenticate (bind or, essentially, log into the system) before authorization can be evaluated. name The name of the certificate profile. For example, name=Agent-Authenticated Server Certificate Enrollment . This name is displayed on the end users enrollment or renewal page. input.list Lists the allowed inputs for the certificate profile by name. For example, input.list=i1,i2 . input.input_id.class_id Indicates the java class name for the input by input ID (the name of the input listed in input.list). For example, input.i1.class_id=certReqInputImpl . 
output.list Lists the possible output formats for the certificate profile by name. For example, output.list=o1 . output.output_id.class_id Specifies the java class name for the output format named in output.list. For example, output.o1.class_id=certOutputImpl . policyset.list Lists the configured certificate profile rules. For dual certificates, one set of rules applies to the signing key and the other to the encryption key. Single certificates use only one set of certificate profile rules. For example, policyset.list=serverCertSet . policyset.policyset_id.list Lists the policies within the policy set configured for the certificate profile by policy ID number in the order in which they should be evaluated. For example, policyset.serverCertSet.list=1,2,3,4,5,6,7,8 . policyset.policyset_id.policy_number.constraint.class_id Indicates the java class name of the constraint plug-in set for the default configured in the profile rule. For example, policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl. policyset.policyset_id.policy_number.constraint.name Gives the user-defined name of the constraint. For example, policyset.serverCertSet.1.constraint.name=Subject Name Constraint. policyset.policyset_id.policy_number.constraint.params.attribute Specifies a value for an allowed attribute for the constraint. The possible attributes vary depending on the type of constraint. For example, policyset.serverCertSet.1.constraint.params.pattern=CN=.*. policyset.policyset_id.policy_number.default.class_id Gives the java class name for the default set in the profile rule. For example, policyset.serverCertSet.1.default.class_id=userSubjectNameDefaultImpl policyset.policyset_id.policy_number.default.name Gives the user-defined name of the default. For example, policyset.serverCertSet.1.default.name=Subject Name Default policyset.policyset_id.policy_number.default.params.attribute Specifies a value for an allowed attribute for the default. The possible attributes vary depending on the type of default. For example, policyset.serverCertSet.1.default.params.name=CN=(Name)USDrequest.requestor_nameUSD. | [
"ipa certprofile-show --out smime.cfg caIPAserviceCert ------------------------------------------------ Profile configuration stored in file 'smime.cfg' ------------------------------------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE",
"vi smime.cfg",
"policyset.serverCertSet.7.default.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.4",
"ipa certprofile-import smime --file smime.cfg --desc \"S/MIME certificates\" --store TRUE ------------------------ Imported profile \"smime\" ------------------------ Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE",
"ipa certprofile-find ------------------ 4 profiles matched ------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE Profile ID: IECUserRoles Profile description: User profile that includes IECUserRoles extension from request Store issued certificates: TRUE Profile ID: KDCs_PKINIT_Certs Profile description: Profile for PKINIT support by KDCs Store issued certificates: TRUE Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE ---------------------------- Number of entries returned 4 ----------------------------",
"ipa group-add smime_users_group --------------------------------- Added group \"smime users group\" --------------------------------- Group name: smime_users_group GID: 75400001",
"ipa user-add smime_user First name: smime Last name: user ---------------------- Added user \"smime_user\" ---------------------- User login: smime_user First name: smime Last name: user Full name: smime user Display name: smime user Initials: TU Home directory: /home/smime_user GECOS: smime user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1505000004 GID: 1505000004 Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa group-add-member smime_users_group --users=smime_user Group name: smime_users_group GID: 1505000003 Member users: smime_user ------------------------- Number of members added 1 -------------------------",
"ipa caacl-add smime_acl ------------------------ Added CA ACL \"smime_acl\" ------------------------ ACL name: smime_acl Enabled: TRUE",
"ipa caacl-add-user smime_acl --group smime_users_group ACL name: smime_acl Enabled: TRUE User Groups: smime_users_group ------------------------- Number of members added 1 -------------------------",
"ipa caacl-add-profile smime_acl --certprofile smime ACL name: smime_acl Enabled: TRUE Profiles: smime User Groups: smime_users_group ------------------------- Number of members added 1 -------------------------",
"ipa caacl-show smime_acl ACL name: smime_acl Enabled: TRUE Profiles: smime User Groups: smime_users_group",
"openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout private.key -out cert.csr -subj '/CN= smime_user '",
"ipa cert-request cert.csr --principal= smime_user --profile-id= smime",
"ipa user-show user User login: user Certificate: MIICfzCCAWcCAQA",
"ipa certprofile-find ------------------ 4 profiles matched ------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE Profile ID: IECUserRoles Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE -------------------------- Number of entries returned --------------------------",
"ipa certprofile-mod smime --desc \"New certificate profile description\" ------------------------------------ Modified Certificate Profile \"smime\" ------------------------------------ Profile ID: smime Profile description: New certificate profile description Store issued certificates: TRUE",
"vi smime.cfg",
"ipa certprofile-mod _profile_ID_ --file=smime.cfg",
"ipa certprofile-show smime Profile ID: smime Profile description: New certificate profile description Store issued certificates: TRUE",
"policyset.list=serverCertSet policyset.serverCertSet.list=1,2,3,4,5,6,7,8 policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params.pattern=CN=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params.name=CN=USDrequest.req_subject_name.cnUSD, OU=pki-ipa, O=IPA policyset.serverCertSet.2.constraint.class_id=validityConstraintImpl policyset.serverCertSet.2.constraint.name=Validity Constraint policyset.serverCertSet.2.constraint.params.range=740 policyset.serverCertSet.2.constraint.params.notBeforeCheck=false policyset.serverCertSet.2.constraint.params.notAfterCheck=false policyset.serverCertSet.2.default.class_id=validityDefaultImpl policyset.serverCertSet.2.default.name=Validity Default policyset.serverCertSet.2.default.params.range=731 policyset.serverCertSet.2.default.params.startTime=0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/creating-and-managing-certificate-profiles-in-identity-management_managing-certificates-in-idm |
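The certificate profile and CA ACL procedures above can be chained together. The following shell sketch is illustrative only: it reuses the smime profile, group, and user names from the examples above and assumes IdM administrator credentials obtained with kinit admin; adapt the names and the profile edits to your environment.

```bash
# Obtain administrator credentials (assumption: you act as the IdM admin)
kinit admin

# 1. Export an existing profile, adjust it, and import it under a new ID
ipa certprofile-show --out smime.cfg caIPAserviceCert
vi smime.cfg    # e.g. set the extended key usage OID for email protection
ipa certprofile-import smime --file smime.cfg \
    --desc "S/MIME certificates" --store TRUE

# 2. Create the group and user, then tie them to the profile with a CA ACL
ipa group-add smime_users_group
ipa user-add smime_user --first smime --last user
ipa group-add-member smime_users_group --users=smime_user
ipa caacl-add smime_acl
ipa caacl-add-user smime_acl --group smime_users_group
ipa caacl-add-profile smime_acl --certprofile smime

# 3. Generate a CSR and request the certificate with the custom profile
openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout private.key \
    -out cert.csr -subj '/CN=smime_user'
ipa cert-request cert.csr --principal=smime_user --profile-id=smime
```

Because the CA ACL binds the smime profile to smime_users_group, a request made with --profile-id=smime for a principal outside that group is expected to fail the CA ACL check (unless the requester is admin or holds the Request Certificate ignoring CA ACLs permission).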
Chapter 220. Metrics Component | Chapter 220. Metrics Component 220.1. Metrics Component The metrics: component allows to collect various metrics directly from Camel routes. Supported metric types are counter , histogram , meter , timer and gauge . Metrics provides simple way to measure behaviour of application. Configurable reporting backend is enabling different integration options for collecting and visualizing statistics. The component also provides a MetricsRoutePolicyFactory which allows to expose route statistics using Dropwizard Metrics, see bottom of page for details. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-metrics</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 220.2. URI format 220.3. Options The Metrics component supports 2 options, which are listed below. Name Description Default Type metricRegistry (advanced) To use a custom configured MetricRegistry. MetricRegistry resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Metrics endpoint is configured using URI syntax: with the following path and query parameters: 220.3.1. Path Parameters (2 parameters): Name Description Default Type metricsType Required Type of metrics MetricsType metricsName Required Name of metrics String 220.3.2. Query Parameters (7 parameters): Name Description Default Type action (producer) Action when using timer type MetricsTimerAction decrement (producer) Decrement value when using counter type Long increment (producer) Increment value when using counter type Long mark (producer) Mark when using meter type Long subject (producer) Subject value when using gauge type Object value (producer) Value value when using histogram type Long synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 220.4. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.metrics.enabled Enable metrics component true Boolean camel.component.metrics.metric-registry To use a custom configured MetricRegistry. The option is a com.codahale.metrics.MetricRegistry type. String camel.component.metrics.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 220.5. Metric Registry Camel Metrics component uses by default a MetricRegistry instance with a Slf4jReporter that has a 60 second reporting interval. This default registry can be replaced with a custom one by providing a MetricRegistry bean. If multiple MetricRegistry beans exist in the application, the one with name metricRegistry is used. 
For example using Spring Java Configuration: @Configuration public static class MyConfig extends SingleRouteCamelConfiguration { @Bean @Override public RouteBuilder route() { return new RouteBuilder() { @Override public void configure() throws Exception { // define Camel routes here } }; } @Bean(name = MetricsComponent.METRIC_REGISTRY_NAME) public MetricRegistry getMetricRegistry() { MetricRegistry registry = ...; return registry; } } Or using CDI: class MyBean extends RouteBuilder { @Override public void configure() { from("...") // Register the 'my-meter' meter in the MetricRegistry below .to("metrics:meter:my-meter"); } @Produces // If multiple MetricRegistry beans // @Named(MetricsComponent.METRIC_REGISTRY_NAME) MetricRegistry registry() { MetricRegistry registry = new MetricRegistry(); // ... return registry; } } 220.6. Usage Each metric has a type and a name. Supported types are counter , histogram , meter , timer and gauge . The metric name is a simple string. If the metric type is not provided, the type meter is used by default. 220.6.1. Headers The metric name defined in the URI can be overridden by using the header CamelMetricsName . For example from("direct:in") .setHeader(MetricsConstants.HEADER_METRIC_NAME, constant("new.name")) .to("metrics:counter:name.not.used") .to("direct:out"); updates the counter named new.name instead of name.not.used . All Metrics-specific headers are removed from the message once the Metrics endpoint finishes processing the exchange. While processing the exchange, the Metrics endpoint catches all exceptions and writes a log entry at level warn . 220.7. Metrics type counter 220.7.1. Options Name Default Description increment - Long value to add to the counter decrement - Long value to subtract from the counter If neither increment nor decrement is defined, the counter value is incremented by one. If increment and decrement are both defined, only the increment operation is called. // update counter simple.counter by 7 from("direct:in") .to("metric:counter:simple.counter?increment=7") .to("direct:out"); // increment counter simple.counter by 1 from("direct:in") .to("metric:counter:simple.counter") .to("direct:out"); // decrement counter simple.counter by 3 from("direct:in") .to("metrics:counter:simple.counter?decrement=3") .to("direct:out"); 220.7.2. Headers Message headers can be used to override the increment and decrement values specified in the Metrics component URI. Name Description Expected type CamelMetricsCounterIncrement Override increment value in URI Long CamelMetricsCounterDecrement Override decrement value in URI Long // update counter simple.counter by 417 from("direct:in") .setHeader(MetricsConstants.HEADER_COUNTER_INCREMENT, constant(417L)) .to("metrics:counter:simple.counter?increment=7") .to("direct:out"); // updates counter using simple language to evaluate body.length from("direct:in") .setHeader(MetricsConstants.HEADER_COUNTER_INCREMENT, simple("${body.length}")) .to("metrics:counter:body.length") .to("mock:out"); 220.8. Metric type histogram 220.8.1. Options Name Default Description value - Value to use in histogram If no value is set, nothing is added to the histogram and a warning is logged. // adds value 9923 to simple.histogram from("direct:in") .to("metric:histogram:simple.histogram?value=9923") .to("direct:out"); // nothing is added to simple.histogram; warning is logged from("direct:in") .to("metric:histogram:simple.histogram") .to("direct:out"); 220.8.2. Headers A message header can be used to override the value specified in the Metrics component URI.
Name Description Expected type CamelMetricsHistogramValue Override histogram value in URI Long // adds value 992 to simple.histogram from("direct:in") .setHeader(MetricsConstants.HEADER_HISTOGRAM_VALUE, constant(992L)) .to("metrics:histogram:simple.histogram?value=700") .to("direct:out") 220.9. Metric type meter 220.9.1. Options Name Default Description mark - Long value to use as mark If mark is not set then meter.mark() is called without argument. // marks simple.meter without value from("direct:in") .to("metric:simple.meter") .to("direct:out"); // marks simple.meter with value 81 from("direct:in") .to("metric:meter:simple.meter?mark=81") .to("direct:out"); 220.9.2. Headers Message header can be used to override mark value specified in Metrics component URI. Name Description Expected type CamelMetricsMeterMark Override mark value in URI Long // updates meter simple.meter with value 345 from("direct:in") .setHeader(MetricsConstants.HEADER_METER_MARK, constant(345L)) .to("metrics:meter:simple.meter?mark=123") .to("direct:out"); 220.10. Metrics type timer 220.10.1. Options Name Default Description action - start or stop If no action or invalid value is provided then warning is logged without any timer update. If action start is called on already running timer or stop is called on not running timer then nothing is updated and warning is logged. // measure time taken by route "calculate" from("direct:in") .to("metrics:timer:simple.timer?action=start") .to("direct:calculate") .to("metrics:timer:simple.timer?action=stop"); TimerContext objects are stored as Exchange properties between different Metrics component calls. 220.10.2. Headers Message header can be used to override action value specified in Metrics component URI. Name Description Expected type CamelMetricsTimerAction Override timer action in URI org.apache.camel.component.metrics.timer.TimerEndpoint.TimerAction // sets timer action using header from("direct:in") .setHeader(MetricsConstants.HEADER_TIMER_ACTION, TimerAction.start) .to("metrics:timer:simple.timer") .to("direct:out"); 220.11. Metric type gauge 220.11.1. Options Name Default Description subject - Any object to be observed by the gauge If subject is not defined it's simply ignored, i.e. the gauge is not registered. // update gauge "simple.gauge" by a bean "mySubjectBean" from("direct:in") .to("metrics:gauge:simple.gauge?subject=#mySubjectBean") .to("direct:out"); 220.11.2. Headers Message headers can be used to override subject values specified in Metrics component URI. Note: if CamelMetricsName header is specified, then new gauge is registered in addition to default one specified in a URI. Name Description Expected type CamelMetricsGaugeSubject Override subject value in URI Object // update gauge simple.gauge by a String literal "myUpdatedSubject" from("direct:in") .setHeader(MetricsConstants.HEADER_GAUGE_SUBJECT, constant("myUpdatedSubject")) .to("metrics:counter:simple.gauge?subject=#mySubjectBean") .to("direct:out"); 220.12. MetricsRoutePolicyFactory This factory allows to add a RoutePolicy for each route which exposes route utilization statistics using Dropwizard metrics. This factory can be used in Java and XML as the examples below demonstrates. Note Instead of using the MetricsRoutePolicyFactory you can define a MetricsRoutePolicy per route you want to instrument, in case you only want to instrument a few selected routes. 
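As a sketch of that per-route alternative (illustrative only; the route and endpoint names are assumptions, not part of the original example set), the policy can be attached directly in the route definition:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.metrics.routepolicy.MetricsRoutePolicy;

public class SingleRouteMetrics extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:checkout")
            // only this route is instrumented; other routes are left alone
            .routePolicy(new MetricsRoutePolicy())
            .to("log:checkout");
    }
}
```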
From Java you just add the factory to the CamelContext as shown below: context.addRoutePolicyFactory(new MetricsRoutePolicyFactory()); And from XML DSL you define a <bean> as follows: <!-- use camel-metrics route policy to gather metrics for all routes --> <bean id="metricsRoutePolicyFactory" class="org.apache.camel.component.metrics.routepolicy.MetricsRoutePolicyFactory"/> The MetricsRoutePolicyFactory and MetricsRoutePolicy supports the following options: Name Default Description useJmx false Whether to report fine grained statistics to JMX by using the com.codahale.metrics.JmxReporter . Notice that if JMX is enabled on CamelContext then a MetricsRegistryService mbean is enlisted under the services type in the JMX tree. That mbean has a single operation to output the statistics using json. Setting useJmx to true is only needed if you want fine grained mbeans per statistics type. jmxDomain org.apache.camel.metrics The JMX domain name prettyPrint false Whether to use pretty print when outputting statistics in json format metricsRegistry Allow to use a shared com.codahale.metrics.MetricRegistry . If none is provided then Camel will create a shared instance used by the this CamelContext. rateUnit TimeUnit.SECONDS The unit to use for rate in the metrics reporter or when dumping the statistics as json. durationUnit TimeUnit.MILLISECONDS The unit to use for duration in the metrics reporter or when dumping the statistics as json. namePattern name . routeId . type Camel 2.17: The name pattern to use. Uses dot as separators, but you can change that. The values name , routeId , and type will be replaced with actual value. Where name is the name of the CamelContext. routeId is the name of the route. And type is the value of responses. From Java code you can get hold of the com.codahale.metrics.MetricRegistry from the org.apache.camel.component.metrics.routepolicy.MetricsRegistryService as shown below: MetricRegistryService registryService = context.hasService(MetricsRegistryService.class); if (registryService != null) { MetricsRegistry registry = registryService.getMetricsRegistry(); ... } 220.13. MetricsMessageHistoryFactory Available as of Camel 2.17 This factory allows to use metrics to capture Message History performance statistics while routing messages. It works by using a metrics Timer for each node in all the routes. This factory can be used in Java and XML as the examples below demonstrates. From Java you just set the factory to the CamelContext as shown below: context.setMessageHistoryFactory(new MetricsMessageHistoryFactory()); And from XML DSL you define a <bean> as follows: <!-- use camel-metrics message history to gather metrics for all messages being routed --> <bean id="metricsMessageHistoryFactory" class="org.apache.camel.component.metrics.messagehistory.MetricsMessageHistoryFactory"/> The following options is supported on the factory: Name Default Description useJmx false Whether to report fine grained statistics to JMX by using the com.codahale.metrics.JmxReporter . Notice that if JMX is enabled on CamelContext then a MetricsRegistryService mbean is enlisted under the services type in the JMX tree. That mbean has a single operation to output the statistics using json. Setting useJmx to true is only needed if you want fine grained mbeans per statistics type. jmxDomain org.apache.camel.metrics The JMX domain name prettyPrint false Whether to use pretty print when outputting statistics in json format metricsRegistry Allow to use a shared com.codahale.metrics.MetricRegistry . 
If none is provided then Camel will create a shared instance used by the this CamelContext. rateUnit TimeUnit.SECONDS The unit to use for rate in the metrics reporter or when dumping the statistics as json. durationUnit TimeUnit.MILLISECONDS The unit to use for duration in the metrics reporter or when dumping the statistics as json. namePattern name . routeId . id . type The name pattern to use. Uses dot as separators, but you can change that. The values name , routeId , type , and id will be replaced with actual value. Where name is the name of the CamelContext. routeId is the name of the route. The id pattern represents the node id. And type is the value of history. At runtime the metrics can be accessed from Java API or JMX which allows to gather the data as json output. From Java code you can do get the service from the CamelContext as shown: MetricsMessageHistoryService service = context.hasService(MetricsMessageHistoryService.class); String json = service.dumpStatisticsAsJson(); And the JMX API the MBean is registered in the type=services tree with name=MetricsMessageHistoryService . 220.14. InstrumentedThreadPoolFactory Available as of Camel 2.18 This factory allows you to gather performance information about Camel Thread Pools by injecting a InstrumentedThreadPoolFactory which collects information from inside of Camel. See more details at Advanced configuration of CamelContext using Spring 220.15. See Also The camel-example-cdi-metrics example that illustrates the integration between Camel, Metrics and CDI. | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-metrics</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"metrics:[ meter | counter | histogram | timer | gauge ]:metricname[?options]",
"metrics:metricsType:metricsName",
"@Configuration public static class MyConfig extends SingleRouteCamelConfiguration { @Bean @Override public RouteBuilder route() { return new RouteBuilder() { @Override public void configure() throws Exception { // define Camel routes here } }; } @Bean(name = MetricsComponent.METRIC_REGISTRY_NAME) public MetricRegistry getMetricRegistry() { MetricRegistry registry = ...; return registry; } }",
"class MyBean extends RouteBuilder { @Override public void configure() { from(\"...\") // Register the 'my-meter' meter in the MetricRegistry below .to(\"metrics:meter:my-meter\"); } @Produces // If multiple MetricRegistry beans // @Named(MetricsComponent.METRIC_REGISTRY_NAME) MetricRegistry registry() { MetricRegistry registry = new MetricRegistry(); // return registry; } }",
"from(\"direct:in\") .setHeader(MetricsConstants.HEADER_METRIC_NAME, constant(\"new.name\")) .to(\"metrics:counter:name.not.used\") .to(\"direct:out\");",
"metrics:counter:metricname[?options]",
"// update counter simple.counter by 7 from(\"direct:in\") .to(\"metric:counter:simple.counter?increment=7\") .to(\"direct:out\");",
"// increment counter simple.counter by 1 from(\"direct:in\") .to(\"metric:counter:simple.counter\") .to(\"direct:out\");",
"// decrement counter simple.counter by 3 from(\"direct:in\") .to(\"metrics:counter:simple.counter?decrement=3\") .to(\"direct:out\");",
"// update counter simple.counter by 417 from(\"direct:in\") .setHeader(MetricsConstants.HEADER_COUNTER_INCREMENT, constant(417L)) .to(\"metrics:counter:simple.counter?increment=7\") .to(\"direct:out\");",
"// updates counter using simple language to evaluate body.length from(\"direct:in\") .setHeader(MetricsConstants.HEADER_COUNTER_INCREMENT, simple(\"USD{body.length}\")) .to(\"metrics:counter:body.length\") .to(\"mock:out\");",
"metrics:histogram:metricname[?options]",
"// adds value 9923 to simple.histogram from(\"direct:in\") .to(\"metric:histogram:simple.histogram?value=9923\") .to(\"direct:out\");",
"// nothing is added to simple.histogram; warning is logged from(\"direct:in\") .to(\"metric:histogram:simple.histogram\") .to(\"direct:out\");",
"// adds value 992 to simple.histogram from(\"direct:in\") .setHeader(MetricsConstants.HEADER_HISTOGRAM_VALUE, constant(992L)) .to(\"metrics:histogram:simple.histogram?value=700\") .to(\"direct:out\")",
"metrics:meter:metricname[?options]",
"// marks simple.meter without value from(\"direct:in\") .to(\"metric:simple.meter\") .to(\"direct:out\");",
"// marks simple.meter with value 81 from(\"direct:in\") .to(\"metric:meter:simple.meter?mark=81\") .to(\"direct:out\");",
"// updates meter simple.meter with value 345 from(\"direct:in\") .setHeader(MetricsConstants.HEADER_METER_MARK, constant(345L)) .to(\"metrics:meter:simple.meter?mark=123\") .to(\"direct:out\");",
"metrics:timer:metricname[?options]",
"// measure time taken by route \"calculate\" from(\"direct:in\") .to(\"metrics:timer:simple.timer?action=start\") .to(\"direct:calculate\") .to(\"metrics:timer:simple.timer?action=stop\");",
"// sets timer action using header from(\"direct:in\") .setHeader(MetricsConstants.HEADER_TIMER_ACTION, TimerAction.start) .to(\"metrics:timer:simple.timer\") .to(\"direct:out\");",
"metrics:gauge:metricname[?options]",
"// update gauge \"simple.gauge\" by a bean \"mySubjectBean\" from(\"direct:in\") .to(\"metrics:gauge:simple.gauge?subject=#mySubjectBean\") .to(\"direct:out\");",
"// update gauge simple.gauge by a String literal \"myUpdatedSubject\" from(\"direct:in\") .setHeader(MetricsConstants.HEADER_GAUGE_SUBJECT, constant(\"myUpdatedSubject\")) .to(\"metrics:counter:simple.gauge?subject=#mySubjectBean\") .to(\"direct:out\");",
"context.addRoutePolicyFactory(new MetricsRoutePolicyFactory());",
"<!-- use camel-metrics route policy to gather metrics for all routes --> <bean id=\"metricsRoutePolicyFactory\" class=\"org.apache.camel.component.metrics.routepolicy.MetricsRoutePolicyFactory\"/>",
"MetricRegistryService registryService = context.hasService(MetricsRegistryService.class); if (registryService != null) { MetricsRegistry registry = registryService.getMetricsRegistry(); }",
"context.setMessageHistoryFactory(new MetricsMessageHistoryFactory());",
"<!-- use camel-metrics message history to gather metrics for all messages being routed --> <bean id=\"metricsMessageHistoryFactory\" class=\"org.apache.camel.component.metrics.messagehistory.MetricsMessageHistoryFactory\"/>",
"MetricsMessageHistoryService service = context.hasService(MetricsMessageHistoryService.class); String json = service.dumpStatisticsAsJson();"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/metrics-component |
5.2. Generating SELinux Policy Modules: sepolicy generate | 5.2. Generating SELinux Policy Modules: sepolicy generate In previous versions of Red Hat Enterprise Linux, the sepolgen or selinux-polgengui utilities were used for generating an SELinux policy. These tools have been merged into the sepolicy suite. In Red Hat Enterprise Linux 7, the sepolicy generate command is used to generate an initial SELinux policy module template. Unlike sepolgen , it is not necessary to run sepolicy generate as the root user. This utility also creates an RPM spec file, which can be used to build an RPM package that installs the policy package file ( NAME .pp ) and the interface file ( NAME .if ) to the correct location, provides installation of the SELinux policy into the kernel, and fixes the labeling. The setup script continues to install the SELinux policy and set up the labeling. In addition, a manual page based on the installed policy is generated using the sepolicy manpage command. [7] Finally, sepolicy generate builds and compiles the SELinux policy and the manual page into an RPM package, ready to be installed on other systems. When sepolicy generate is executed, the following files are produced: NAME .te - type enforcing file This file defines all the types and rules for a particular domain. NAME .if - interface file This file defines the default file context for the system. It takes the file types created in the NAME.te file and associates file paths to the types. Utilities, such as restorecon and rpm , use these paths to write labels. NAME _selinux.spec - RPM spec file This file is an RPM spec file that installs SELinux policy and sets up the labeling. This file also installs the interface file and a man page describing the policy. You can use the sepolicy manpage -d NAME command to generate the man page. NAME .sh - helper shell script This script helps to compile, install, and fix the labeling on the system. It also generates a man page based on the installed policy, compiles, and builds an RPM package suitable to be installed on other systems. If it is possible to generate an SELinux policy module, sepolicy generate prints out all generated paths from the source domain to the target domain. See the sepolicy-generate (8) manual page for further information about sepolicy generate . [7] See Section 5.4, "Generating Manual Pages: sepolicy manpage " for more information about sepolicy manpage . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/security-enhanced_linux-the-sepolicy-suite-sepolicy_generate
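A short, hedged sketch of the sepolicy generate workflow described above follows; the daemon name mydaemon and its path are assumptions used purely for illustration:

```bash
# Generate the initial policy module template for a (hypothetical) init daemon;
# this does not require root and produces the files described above
# (mydaemon.te, mydaemon.if, mydaemon_selinux.spec, mydaemon.sh), among others
sepolicy generate --init -n mydaemon /usr/local/bin/mydaemon

# The helper script compiles the policy, installs it, fixes the labeling,
# generates a man page, and builds an RPM package; run it as root
sudo ./mydaemon.sh

# Review the generated manual page for the new policy
man mydaemon_selinux
```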
Chapter 1. About OpenShift Container Platform monitoring | Chapter 1. About OpenShift Container Platform monitoring 1.1. About OpenShift Container Platform monitoring OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. You also have the option to enable monitoring for user-defined projects . A cluster administrator can configure the monitoring stack with the supported configurations. OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. With the OpenShift Container Platform web console, you can access metrics and manage alerts . After installing OpenShift Container Platform, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. As a cluster administrator, you can find answers to common problems such as user metrics unavailability and high consumption of disk space by Prometheus in Troubleshooting monitoring issues . 1.2. Monitoring stack architecture The OpenShift Container Platform monitoring stack is based on the Prometheus open source project and its wider ecosystem. The monitoring stack includes default monitoring components and components for monitoring user-defined projects. 1.2.1. Understanding the monitoring stack The monitoring stack includes the following components: Default platform monitoring components . A set of platform monitoring components are installed in the openshift-monitoring project by default during an OpenShift Container Platform installation. This provides monitoring for core cluster components including Kubernetes services. The default monitoring stack also enables remote health monitoring for clusters. These components are illustrated in the Installed by default section in the following diagram. Components for monitoring user-defined projects . After optionally enabling monitoring for user-defined projects, additional monitoring components are installed in the openshift-user-workload-monitoring project. This provides monitoring for user-defined projects. These components are illustrated in the User section in the following diagram. 1.2.2. Default monitoring components By default, the OpenShift Container Platform 4.17 monitoring stack includes these components: Table 1.1. Default monitoring stack components Component Description Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys, manages, and automatically updates Prometheus and Alertmanager instances, Thanos Querier, Telemeter Client, and metrics targets. The CMO is deployed by the Cluster Version Operator (CVO). Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus instances and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. 
Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Metrics Server The Metrics Server component (MS in the preceding diagram) collects resource metrics and exposes them in the metrics.k8s.io Metrics API service for use by other tools and APIs, which frees the core platform Prometheus stack from handling this functionality. Note that with the OpenShift Container Platform 4.16 release, Metrics Server replaces Prometheus Adapter. Alertmanager The Alertmanager service handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. kube-state-metrics agent The kube-state-metrics exporter agent (KSM in the preceding diagram) converts Kubernetes objects to metrics that Prometheus can use. monitoring-plugin The monitoring-plugin dynamic plugin component deploys the monitoring pages in the Observe section of the OpenShift Container Platform web console. You can use Cluster Monitoring Operator config map settings to manage monitoring-plugin resources for the web console pages. openshift-state-metrics agent The openshift-state-metrics exporter (OSM in the preceding diagram) expands upon kube-state-metrics by adding metrics for OpenShift Container Platform-specific resources. node-exporter agent The node-exporter agent (NE in the preceding diagram) collects metrics about every node in a cluster. The node-exporter agent is deployed on every node. Thanos Querier Thanos Querier aggregates and optionally deduplicates core OpenShift Container Platform metrics and metrics for user-defined projects under a single, multi-tenant interface. Telemeter Client Telemeter Client sends a subsection of the data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. All of the components in the monitoring stack are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. Note All components of the monitoring stack use the TLS security profile settings that are centrally configured by a cluster administrator. If you configure a monitoring stack component that uses TLS security settings, the component uses the TLS security profile settings that already exist in the tlsSecurityProfile field in the global OpenShift Container Platform apiservers.config.openshift.io/cluster resource. 1.2.2.1. Default monitoring targets In addition to the components of the stack itself, the default monitoring stack monitors additional platform components. The following are examples of monitoring targets: CoreDNS etcd HAProxy Image registry Kubelets Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift Controller Manager Operator Lifecycle Manager (OLM) Note The exact list of targets can vary depending on your cluster capabilities and installed components. Each OpenShift Container Platform component is responsible for its monitoring configuration. For problems with the monitoring of an OpenShift Container Platform component, open a Jira issue against that component, not against the general monitoring component. Other OpenShift Container Platform framework components might be exposing metrics as well. For details, see their respective documentation. Additional resources Getting detailed information about a metrics target 1.2.3. 
Components for monitoring user-defined projects OpenShift Container Platform includes an optional enhancement to the monitoring stack that enables you to monitor services and pods in user-defined projects. This feature includes the following components: Table 1.2. Components for monitoring user-defined projects Component Description Prometheus Operator The Prometheus Operator (PO) in the openshift-user-workload-monitoring project creates, configures, and manages Prometheus and Thanos Ruler instances in the same project. Prometheus Prometheus is the monitoring system through which monitoring is provided for user-defined projects. Prometheus sends alerts to Alertmanager for processing. Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform , Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. Alertmanager The Alertmanager service handles alerts received from Prometheus and Thanos Ruler. Alertmanager is also responsible for sending user-defined alerts to external notification systems. Deploying this service is optional. Note The components in the preceding table are deployed after monitoring is enabled for user-defined projects. All of these components are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. 1.2.3.1. Monitoring targets for user-defined projects When monitoring is enabled for user-defined projects, you can monitor: Metrics provided through service endpoints in user-defined projects. Pods running in user-defined projects. 1.2.4. The monitoring stack in high-availability clusters By default, in multi-node clusters, the following components run in high-availability (HA) mode to prevent data loss and service interruption: Prometheus Alertmanager Thanos Ruler Thanos Querier Metrics Server Monitoring plugin The component is replicated across two pods, each running on a separate node. This means that the monitoring stack can tolerate the loss of one pod. Prometheus in HA mode Both replicas independently scrape the same targets and evaluate the same rules. The replicas do not communicate with each other. Therefore, data might differ between the pods. Alertmanager in HA mode The two replicas synchronize notification and silence states with each other. This ensures that each notification is sent at least once. If the replicas fail to communicate or if there is an issue on the receiving side, notifications are still sent, but they might be duplicated. Important Prometheus, Alertmanager, and Thanos Ruler are stateful components. To ensure high availability, you must configure them with persistent storage. Additional resources High-availability or single-node cluster detection and support Configuring persistent storage Configuring performance and scalability 1.2.5. Glossary of common terms for OpenShift Container Platform monitoring This glossary defines common terms that are used in OpenShift Container Platform architecture. Alertmanager Alertmanager handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. Alerting rules Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. 
Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys and manages Prometheus instances such as, the Thanos Querier, the Telemeter Client, and metrics targets to ensure that they are up to date. The CMO is deployed by the Cluster Version Operator (CVO). Cluster Version Operator The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container A container is a lightweight and executable image that includes software and all its dependencies. Containers virtualize the operating system. As a result, you can run containers anywhere from a data center to a public or private cloud as well as a developer's laptop. custom resource (CR) A CR is an extension of the Kubernetes API. You can create custom resources. etcd etcd is the key-value store for OpenShift Container Platform, which stores the state of all resource objects. Fluentd Fluentd is a log collector that resides on each OpenShift Container Platform node. It gathers application, infrastructure, and audit logs and forwards them to different outputs. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Kubelets Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. Kubernetes API server Kubernetes API server validates and configures data for the API objects. Kubernetes controller manager Kubernetes controller manager governs the state of the cluster. Kubernetes scheduler Kubernetes scheduler allocates pods to nodes. labels Labels are key-value pairs that you can use to organize and select subsets of objects such as a pod. Metrics Server The Metrics Server monitoring component collects resource metrics and exposes them in the metrics.k8s.io Metrics API service for use by other tools and APIs, which frees the core platform Prometheus stack from handling this functionality. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. Operator The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. Operator Lifecycle Manager (OLM) OLM helps you install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. Persistent volume claim (PVC) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node. 
Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Silences A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue. storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. Vector Vector is a log collector that deploys to each OpenShift Container Platform node. It collects log data from each node, transforms the data, and forwards it to configured outputs. web console A user interface (UI) to manage OpenShift Container Platform. 1.2.6. Additional resources About remote health monitoring Granting users permissions for monitoring for user-defined projects Configuring TLS security profiles 1.3. Understanding the monitoring stack - key concepts Get familiar with the OpenShift Container Platform monitoring concepts and terms. Learn about how you can improve performance and scale of your cluster, store and record data, manage metrics and alerts, and more. 1.3.1. About performance and scalability You can optimize the performance and scale of your clusters. You can configure the default monitoring stack by performing any of the following actions: Control the placement and distribution of monitoring components: Use node selectors to move components to specific nodes. Assign tolerations to enable moving components to tainted nodes. Use pod topology spread constraints. Set the body size limit for metrics scraping. Manage CPU and memory resources. Use metrics collection profiles. Additional resources Configuring performance and scalability for core platform monitoring Configuring performance and scalability for user workload monitoring 1.3.1.1. Using node selectors to move monitoring components By using the nodeSelector constraint with labeled nodes, you can move any of the monitoring stack components to specific nodes. By doing so, you can control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. How node selectors work with other constraints If you move monitoring components by using node selector constraints, be aware that other constraints to control pod scheduling might exist for a cluster: Topology spread constraints might be in place to control pod placement. 
Hard anti-affinity rules are in place for Prometheus, Alertmanager, and other monitoring components to ensure that multiple pods for these components are always spread across different nodes and are therefore always highly available. When scheduling pods onto nodes, the pod scheduler tries to satisfy all existing constraints when determining pod placement. That is, all constraints compound when the pod scheduler determines which pods will be placed on which nodes. Therefore, if you configure a node selector constraint but existing constraints cannot all be satisfied, the pod scheduler cannot match all constraints and will not schedule a pod for placement onto a node. To maintain resilience and high availability for monitoring components, ensure that enough nodes are available and match all constraints when you configure a node selector constraint to move a component. 1.3.1.2. About pod topology spread constraints for monitoring You can use pod topology spread constraints to control how the monitoring pods are spread across a network topology when OpenShift Container Platform pods are deployed in multiple availability zones. Pod topology spread constraints are suitable for controlling pod scheduling within hierarchical topologies in which nodes are spread across different infrastructure levels, such as regions and zones within those regions. Additionally, by being able to schedule pods in different zones, you can improve network latency in certain scenarios. You can configure pod topology spread constraints for all the pods deployed by the Cluster Monitoring Operator to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. 1.3.1.3. About specifying limits and requests for monitoring components You can configure resource limits and requests for the following core platform monitoring components: Alertmanager kube-state-metrics monitoring-plugin node-exporter openshift-state-metrics Prometheus Metrics Server Prometheus Operator and its admission webhook service Telemeter Client Thanos Querier You can configure resource limits and requests for the following components that monitor user-defined projects: Alertmanager Prometheus Thanos Ruler By defining the resource limits, you limit a container's resource usage, which prevents the container from exceeding the specified maximum values for CPU and memory resources. By defining the resource requests, you specify that a container can be scheduled only on a node that has enough CPU and memory resources available to match the requested resources. 1.3.1.4. About metrics collection profiles Important Metrics collection profile is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By default, Prometheus collects metrics exposed by all default metrics targets in OpenShift Container Platform components. 
However, you might want Prometheus to collect fewer metrics from a cluster in certain scenarios: If cluster administrators require only alert, telemetry, and console metrics and do not require other metrics to be available. If a cluster increases in size, and the increased size of the default metrics data collected now requires a significant increase in CPU and memory resources. You can use a metrics collection profile to collect either the default amount of metrics data or a minimal amount of metrics data. When you collect minimal metrics data, basic monitoring features such as alerting continue to work. At the same time, the CPU and memory resources required by Prometheus decrease. You can enable one of two metrics collection profiles: full : Prometheus collects metrics data exposed by all platform components. This setting is the default. minimal : Prometheus collects only the metrics data required for platform alerts, recording rules, telemetry, and console dashboards. 1.3.2. About storing and recording data You can store and record data to help you protect the data and use them for troubleshooting. You can configure the default monitoring stack by performing any of the following actions: Configure persistent storage: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. Modify the retention time and size for Prometheus and Thanos Ruler metrics data. Configure logging to help you troubleshoot issues with your cluster: Configure audit logs for Metrics Server. Set log levels for monitoring. Enable the query logging for Prometheus and Thanos Querier. Additional resources Storing and recording data for core platform monitoring Storing and recording data for user workload monitoring 1.3.2.1. Retention time and size for Prometheus metrics By default, Prometheus retains metrics data for the following durations: Core platform monitoring : 15 days Monitoring for user-defined projects : 24 hours You can modify the retention time for the Prometheus instance to change how soon the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit. Note the following behaviors of these data retention settings: The size-based retention policy applies to all data block directories in the /prometheus directory, including persistent blocks, write-ahead log (WAL) data, and m-mapped chunks. Data in the /wal and /head_chunks directories counts toward the retention size limit, but Prometheus never purges data from these directories based on size- or time-based retention policies. Thus, if you set a retention size limit lower than the maximum size set for the /wal and /head_chunks directories, you have configured the system not to retain any data blocks in the /prometheus data directories. The size-based retention policy is applied only when Prometheus cuts a new data block, which occurs every two hours after the WAL contains at least three hours of data. If you do not explicitly define values for either retention or retentionSize , retention time defaults to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. Retention size is not set. 
If you define values for both retention and retentionSize , both values apply. If any data blocks exceed the defined retention time or the defined size limit, Prometheus purges these data blocks. If you define a value for retentionSize and do not define retention , only the retentionSize value applies. If you do not define a value for retentionSize and only define a value for retention , only the retention value applies. If you set the retentionSize or retention value to 0 , the default settings apply. The default settings set retention time to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. By default, retention size is not set. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. 1.3.3. Understanding metrics In OpenShift Container Platform 4.17, cluster components are monitored by scraping metrics exposed through service endpoints. You can also configure metrics collection for user-defined projects. Metrics enable you to monitor how cluster components and your own workloads are performing. You can define the metrics that you want to provide for your own workloads by using Prometheus client libraries at the application level. In OpenShift Container Platform, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can list all available metrics for a service by running a curl query against http://<endpoint>/metrics . For instance, you can expose a route to the prometheus-example-app example application and then run the following to view all of its available metrics: USD curl http://<example_app_endpoint>/metrics Example output # HELP http_requests_total Count of all HTTP requests # TYPE http_requests_total counter http_requests_total{code="200",method="get"} 4 http_requests_total{code="404",method="get"} 2 # HELP version Version information about this binary # TYPE version gauge version{version="v0.1.0"} 1 Additional resources Configuring metrics for core platform monitoring Configuring metrics for user workload monitoring Accessing metrics as an administrator Accessing metrics as a developer 1.3.3.1. Controlling the impact of unbound metrics attributes in user-defined projects Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. 
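For instance, the following shell sketch (illustrative only; the TOKEN and HOST variables and the use of the Thanos Querier route are assumptions you should verify for your cluster) queries for the ten metric names that contribute the most time series, which can help reveal unbound labels:

```bash
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')

# Top ten metric names by number of time series
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://$HOST/api/v1/query" \
  --data-urlencode 'query=topk(10, count by (__name__) ({__name__!=""}))'
```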
Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects: Limit the number of samples that can be accepted per target scrape in user-defined projects Limit the number of scraped labels, the length of label names, and the length of label values Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped Note Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. 1.3.3.2. Adding cluster ID labels to metrics If you manage multiple OpenShift Container Platform clusters and use the remote write feature to send metrics data from these clusters to an external storage location, you can add cluster ID labels to identify the metrics data coming from different clusters. You can then query these labels to identify the source cluster for a metric and distinguish that data from similar metrics data sent by other clusters. This way, if you manage many clusters for multiple customers and send metrics data to a single centralized storage system, you can use cluster ID labels to query metrics for a particular cluster or customer. Creating and using cluster ID labels involves three general steps: Configuring the write relabel settings for remote write storage. Adding cluster ID labels to the metrics. Querying these labels to identify the source cluster or customer for a metric. 1.3.4. About monitoring dashboards OpenShift Container Platform provides a set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads. Additional resources Reviewing monitoring dashboards as a cluster administrator Reviewing monitoring dashboards as a developer 1.3.4.1. Monitoring dashboards in the Administrator perspective Use the Administrator perspective to access dashboards for the core OpenShift Container Platform components, including the following items: API performance etcd Kubernetes compute resources Kubernetes network resources Prometheus USE method dashboards relating to cluster and node performance Node performance metrics Figure 1.1. Example dashboard in the Administrator perspective 1.3.4.2. Monitoring dashboards in the Developer perspective In the Developer perspective, you can access only the Kubernetes compute resources dashboards: Figure 1.2. Example dashboard in the Developer perspective 1.3.5. Managing alerts In the OpenShift Container Platform, the Alerting UI enables you to manage alerts, silences, and alerting rules. Alerting rules . Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Alerts . An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances are apparent within an OpenShift Container Platform cluster. Silences . A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the issue. 
Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the cluster-admin role, you can access all alerts, silences, and alerting rules. Additional resources Configuring alerts and notifications for core platform monitoring Configuring alerts and notifications for user workload monitoring Managing alerts as an Administrator Managing alerts as a Developer 1.3.5.1. Managing silences You can create a silence for an alert in the OpenShift Container Platform web console in both the Administrator and Developer perspectives. After you create a silence, you will not receive notifications about an alert when the alert fires. Creating silences is useful in scenarios where you have received an initial alert notification, and you do not want to receive further notifications during the time in which you resolve the underlying issue causing the alert to fire. When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires. After you create silences, you can view, edit, and expire them. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. 1.3.5.2. Managing alerting rules for core platform monitoring The OpenShift Container Platform monitoring includes a large set of default alerting rules for platform metrics. As a cluster administrator, you can customize this set of rules in two ways: Modify the settings for existing platform alerting rules by adjusting thresholds or by adding and modifying labels. For example, you can change the severity label for an alert from warning to critical to help you route and triage issues flagged by an alert. Define and add new custom alerting rules by constructing a query expression based on core platform metrics in the openshift-monitoring namespace. Core platform alerting rule considerations New alerting rules must be based on the default OpenShift Container Platform monitoring metrics. You must create the AlertingRule and AlertRelabelConfig objects in the openshift-monitoring namespace. You can only add and modify alerting rules. You cannot create new recording rules or modify existing recording rules. If you modify existing platform alerting rules by using an AlertRelabelConfig object, your modifications are not reflected in the Prometheus alerts API. Therefore, any dropped alerts still appear in the OpenShift Container Platform web console even though they are no longer forwarded to Alertmanager. Additionally, any modifications to alerts, such as a changed severity label, do not appear in the web console. 1.3.5.3. Tips for optimizing alerting rules for core platform monitoring If you customize core platform alerting rules to meet your organization's specific needs, follow these guidelines to help ensure that the customized rules are efficient and effective. Minimize the number of new rules . Create only rules that are essential to your specific requirements. By minimizing the number of rules, you create a more manageable and focused alerting system in your monitoring environment. Focus on symptoms rather than causes . Create rules that notify users of symptoms instead of underlying causes. 
This approach ensures that users are promptly notified of a relevant symptom so that they can investigate the root cause after an alert has triggered. This tactic also significantly reduces the overall number of rules you need to create. Plan and assess your needs before implementing changes . First, decide what symptoms are important and what actions you want users to take if these symptoms occur. Then, assess existing rules and decide if you can modify any of them to meet your needs instead of creating entirely new rules for each symptom. By modifying existing rules and creating new ones judiciously, you help to streamline your alerting system. Provide clear alert messaging . When you create alert messages, describe the symptom, possible causes, and recommended actions. Include unambiguous, concise explanations along with troubleshooting steps or links to more information. Doing so helps users quickly assess the situation and respond appropriately. Include severity levels . Assign severity levels to your rules to indicate how a user needs to react when a symptom occurs and triggers an alert. For example, classifying an alert as Critical signals that an individual or a critical response team needs to respond immediately. By defining severity levels, you help users know how to respond to an alert and help ensure that the most urgent issues receive prompt attention. 1.3.5.4. About creating alerting rules for user-defined projects If you create alerting rules for a user-defined project, consider the following key behaviors and important limitations when you define the new rules: A user-defined alerting rule can include metrics exposed by its own project in addition to the default metrics from core platform monitoring. You cannot include metrics from another user-defined project. For example, an alerting rule for the ns1 user-defined project can use metrics exposed by the ns1 project in addition to core platform metrics, such as CPU and memory metrics. However, the rule cannot include metrics from a different ns2 user-defined project. To reduce latency and to minimize the load on core platform monitoring components, you can add the openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus label to a rule. This label forces only the Prometheus instance deployed in the openshift-user-workload-monitoring project to evaluate the alerting rule and prevents the Thanos Ruler instance from doing so. Important If an alerting rule has this label, your alerting rule can use only those metrics exposed by your user-defined project. Alerting rules you create based on default platform metrics might not trigger alerts. 1.3.5.5. Managing alerting rules for user-defined projects In OpenShift Container Platform, you can view, edit, and remove alerting rules in user-defined projects. Alerting rule considerations The default alerting rules are used specifically for the OpenShift Container Platform cluster. Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both. Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing. 1.3.5.6. Optimizing alerting for user-defined projects You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules: Minimize the number of alerting rules that you create for your project . Create alerting rules that notify you of conditions that impact you. 
It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you. Create alerting rules for symptoms instead of causes . Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed. Plan before you write your alerting rules . Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom. Provide clear alert messaging . State the symptom and recommended actions in the alert message. Include severity levels in your alerting rules . The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team. 1.3.5.7. Searching and filtering alerts, silences, and alerting rules You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options. 1.3.5.7.1. Understanding alert filters In the Administrator perspective, the Alerts page in the Alerting UI provides details about alerts relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown. You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option: State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Severity filters: Critical . The condition that triggered the alert could have a critical impact. The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team. Warning . The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review. Info . The alert is provided for informational purposes only. None . The alert has no defined severity. You can also create custom severity definitions for alerts relating to user-defined projects. Source filters: Platform . Platform-level alerts relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. 1.3.5.7.2. Understanding silence filters In the Administrator perspective, the Silences page in the Alerting UI provides details about silences applied to alerts in default OpenShift Container Platform and user-defined projects. 
The page includes a summary of the state of each silence and the time at which a silence ends. You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option: State filters: Active . The silence is active and the alert will be muted until the silence is expired. Pending . The silence has been scheduled and it is not yet active. Expired . The silence has expired and notifications will be sent if the conditions for an alert are true. 1.3.5.7.3. Understanding alerting rule filters In the Administrator perspective, the Alerting rules page in the Alerting UI provides details about alerting rules relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule. You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option: Alert state filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Not Firing . The alert is not firing. Severity filters: Critical . The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team. Warning . The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review. Info . The alerting rule provides informational alerts only. None . The alerting rule has no defined severity. You can also create custom severity definitions for alerting rules relating to user-defined projects. Source filters: Platform . Platform-level alerting rules relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. 1.3.5.7.4. Searching and filtering alerts, silences, and alerting rules in the Developer perspective In the Developer perspective, the Alerts page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert. In this view, you can filter by alert state and severity. By default, all alerts in the selected project are displayed if you have permission to access the project. These filters are the same as those described for the Administrator perspective. 1.3.6. Understanding alert routing for user-defined projects As a cluster administrator, you can enable alert routing for user-defined projects. 
With this feature, you can allow users with the alert-routing-edit cluster role to configure alert notification routing and receivers for user-defined projects. These notifications are routed by the default Alertmanager instance or, if enabled, an optional Alertmanager instance dedicated to user-defined monitoring. Users can then create and configure user-defined alert routing by creating or editing the AlertmanagerConfig objects for their user-defined projects without the help of an administrator. After a user has defined alert routing for a user-defined project, user-defined alert notifications are routed as follows: To the alertmanager-main pods in the openshift-monitoring namespace if using the default platform Alertmanager instance. To the alertmanager-user-workload pods in the openshift-user-workload-monitoring namespace if you have enabled a separate instance of Alertmanager for user-defined projects. Note Review the following limitations of alert routing for user-defined projects: For user-defined alerting rules, user-defined routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace. When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration. Additional resources Enabling alert routing for user-defined projects 1.3.7. Sending notifications to external systems In OpenShift Container Platform 4.17, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Microsoft Teams Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. Additional resources Configuring alert notifications for core platform monitoring Configuring alert notifications for user workload monitoring | [
"curl http://<example_app_endpoint>/metrics",
"HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring/about-openshift-container-platform-monitoring |
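As a concrete illustration of the retention settings described above, the following is a minimal sketch of how the retention and retentionSize fields could be set for the default platform Prometheus through the cluster-monitoring-config config map; the 24h and 10GB values are placeholders chosen for the example, not recommendations:

oc -n openshift-monitoring apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Settings for the Prometheus instance that monitors core platform components
    prometheusK8s:
      retention: 24h        # time-based retention
      retentionSize: 10GB   # size-based retention
EOF

After the config map is applied, the monitoring stack picks up the new retention settings.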
Migrating from version 3 to 4 | Migrating from version 3 to 4 OpenShift Container Platform 4.17 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/index |
Chapter 6. Mapping API environments in 3scale API Management | Chapter 6. Mapping API environments in 3scale API Management An API provider gives access to the APIs managed through the 3scale Admin Portal. You then deploy the API backends in many environments. API backend environments include the following: Different environments used for development, quality assurance (QA), staging, and production. Different environments used for teams or departments that manage their own set of API backends. A Red Hat 3scale API Management product represents a single API or subset of an API, but it is also used to map and manage different API backend environments. To find out about mapping API environments for your 3scale product, see the following sections: Product per environment 3scale API Management On-premises instances 3scale API Management mixed approach 3scale API Management with APIcast gateways 6.1. Product per environment This method uses a separate 3scale Product for each API backend environment. In each product, configure a production gateway and a staging gateway, so the changes to the gateway configuration can be tested safely and promoted to the production configuration as you would with your API backends. Configure the product for the API backend environment as follows: Create a backend with a base URL for the API backend for the environment. Add the backend to the product for the environment with a backend path / . Development environment Create development backend Name: Dev Private Base URL: URL of the API backend Create Dev product Production Public Base URL: https://dev-api-backend.yourdomain.com Staging Public Base URL: https://dev-api-backend.yourdomain.com Add Dev Backend with a backend path / QA environment Create QA backend Name: QA Private Base URL: URL of the API backend Create QA product Production Public Base URL: https://qa-api-backend.yourdomain.com Staging Public Base URL: https://qa-api-backend.yourdomain.com Add QA Backend with a backend path / Production environment Create production backend Name: Prod Private Base URL: URL of the API backend Create Prod product Production Public Base URL: https://prod-api-backend.yourdomain.com Staging Public Base URL: https://prod-api-backend.yourdomain.com Add production Backend with a backend path / Additional resources First steps with 3scale API Management . 6.2. 3scale API Management On-premises instances For 3scale On-premises instances, there are multiple ways to set up 3scale to manage API back-end environments. A separate 3scale instance for each API back-end environment A single 3scale instance that uses the multitenancy feature 6.2.1. Separating 3scale API Management instances per environment In this approach, a separate 3scale instance is deployed for each API back-end environment. The benefit of this architecture is that each environment will be isolated from one another, therefore there are no shared databases or other resources. For example, any load testing being done in one environment will not impact the resources in other environments. Note This separation of installations has benefits as described above, however, it would require more operational resources and maintenance. These additional resources would be required on the OpenShift administration layer and not necessarily on the 3scale layer. 6.2.2. Separating 3scale API Management tenants per environment In this approach a single 3scale instance is used but the multitenancy feature is used to support multiple API back ends. 
There are two options: Create a 1-to-1 mapping between environments and 3scale products within a single tenant. Create a 1-to-1 mapping between environments and tenants with one or more products per tenant as required. There would be three tenants corresponding to API back-end environments - dev-tenant, qa-tenant, prod-tenant. The benefit of this approach is that it allows for a logical separation of environments but uses shared physical resources. Note Shared physical resources will ultimately need to be taken into consideration when analyzing the best strategy for mapping API environments to a single installation with multiple tenants. 6.3. 3scale API Management mixed approach The approaches described in 3scale API Management On-premises instances can be combined. For example: A separate 3scale instance for production. A separate 3scale instance with separate tenant for non-production environments in dev and qa. 6.4. 3scale API Management with APIcast gateways For 3scale On-premises instances, there are two alternatives to set up 3scale to manage API backend environments: Each 3scale installation comes with two built-in APIcast gateways, for staging and production. Deploy additional APIcast gateways externally to the OpenShift cluster where 3scale is running. 6.4.1. APIcast built-in default gateways When APIcast built-in gateways are used, the API back end configured using the above approaches described in 3scale API Management with APIcast gateways will be handled automatically. When a tenant is added by a 3scale Master Admin, a route is created for the tenant in production and staging built-in APIcast gateways. See Understanding multitenancy subdomains <API_NAME>-<TENANT_NAME>-apicast-staging.<WILDCARD_DOMAIN> <API_NAME>-<TENANT_NAME>-apicast-production.<WIDLCARD_DOMAIN> Therefore, each API back-end environment mapped to a different tenant would get its own route. For example: Dev <API_NAME>-dev-apicast-staging.<WILDCARD_DOMAIN> QA <API_NAME>-qa-apicast-staging.<WILDCARD_DOMAIN> Prod <API_NAME>-prod-apicast-staging.<WILDCARD_DOMAIN> 6.4.2. Additional APIcast gateways Additional APIcast gateways are those deployed on a different OpenShift cluster than the one on which 3scale instance is running. There is more than one way to set up and use additional APIcast gateways. The value of environment variable THREESCALE_PORTAL_ENDPOINT used when starting APIcast depends on how the additional APIcast gateways are set up. A separate APIcast gateway can be used for each API back-end environment. For example: The THREESCALE_PORTAL_ENDPOINT is used by APIcast to download the configuration. Each tenant that maps to an API backend environment uses a separate APIcast gateway. The THREESCALE_PORTAL_ENDPOINT is set to the Admin Portal for the tenant containing all the product configurations specific to that API backend environment. A single APIcast gateway can be used with multiple API back-end environments. In this case, THREESCALE_PORTAL_ENDPOINT is set to the Master Admin Portal . Additional resources API provider Product | [
"Production Product => Production Product APIcast gateway => Production Product API upstream Staging Product => Staging Product APIcast gateway => Staging Product API upstream",
"DEV_APICAST -> DEV_TENANT ; DEV_APICAST started with THREESCALE_PORTAL_ENDPOINT = admin portal for DEV_TENANT QA_APICAST -> QA_TENANT ; QA_APICAST started with THREESCALE_PORTAL_ENDPOINT = admin portal for QA_APICAST PROD_APICAST -> PROD_TENANT ; PROD_APICAST started with THREESCALE_PORTAL_ENDPOINT = admin portal for PROD_APICAST"
] | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/operating_red_hat_3scale_api_management/mapping-api-environments-threescale |
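To make the THREESCALE_PORTAL_ENDPOINT usage above more concrete, the following is a minimal sketch of starting one self-managed APIcast gateway per environment with podman; the image tag, access token, and tenant domain are placeholders and depend on your installation:

# APIcast for the dev tenant, pulling its configuration from the dev tenant's Admin Portal
podman run -d --name apicast-dev -p 8080:8080 \
  -e THREESCALE_PORTAL_ENDPOINT=https://<ACCESS_TOKEN>@dev-tenant-admin.example.com \
  -e THREESCALE_DEPLOYMENT_ENV=staging \
  registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.15

Repeat the same command with the QA and production tenant Admin Portal endpoints to run QA_APICAST and PROD_APICAST.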
Chapter 9. Feature gates | Chapter 9. Feature gates Streams for Apache Kafka operators use feature gates to enable or disable specific features and functions. Enabling a feature gate alters the behavior of the associated operator, introducing the corresponding feature to your Streams for Apache Kafka deployment. The purpose of feature gates is to facilitate the trial and testing of a feature before it is fully adopted. The state (enabled or disabled) of a feature gate may vary by default, depending on its maturity level. As a feature gate graduates and reaches General Availability (GA), it transitions to an enabled state by default and becomes a permanent part of the Streams for Apache Kafka deployment. A feature gate at the GA stage cannot be disabled. The supported feature gates are applicable to all Streams for Apache Kafka operators. While a particular feature gate might be used by one operator and ignored by the others, it can still be configured in all operators. When deploying the User Operator and Topic Operator within the context of the`Kafka` custom resource, the Cluster Operator automatically propagates the feature gates configuration to them. When the User Operator and Topic Operator are deployed standalone, without a Cluster Operator available to configure the feature gates, they must be directly configured within their deployments. 9.1. Graduated feature gates (GA) Graduated feature gates have reached General Availability (GA) and are permanently enabled features. 9.1.1. ControlPlaneListener feature gate The ControlPlaneListener feature gate separates listeners for data replication and coordination: Connections between the Kafka controller and brokers use an internal control plane listener on port 9090. Replication of data between brokers, as well as internal connections from Streams for Apache Kafka operators, Cruise Control, or the Kafka Exporter use a replication listener on port 9091. Important With the ControlPlaneListener feature gate permanently enabled, direct upgrades or downgrades between Streams for Apache Kafka 1.7 and earlier and Streams for Apache Kafka 2.3 and newer are not possible. You must first upgrade or downgrade through one of the Streams for Apache Kafka versions in-between, disable the ControlPlaneListener feature gate, and then downgrade or upgrade (with the feature gate enabled) to the target version. 9.1.2. ServiceAccountPatching feature gate The ServiceAccountPatching feature gate ensures that the Cluster Operator always reconciles service accounts and updates them when needed. For example, when you change service account labels or annotations using the template property of a custom resource, the operator automatically updates them on the existing service account resources. 9.1.3. UseStrimziPodSets feature gate The UseStrimziPodSets feature gate introduced the StrimziPodSet custom resource for managing Kafka and ZooKeeper pods, replacing the use of OpenShift StatefulSet resources. Important With the UseStrimziPodSets feature gate permanently enabled, direct downgrades from Streams for Apache Kafka 2.5 and newer to Streams for Apache Kafka 2.0 or earlier are not possible. You must first downgrade through one of the Streams for Apache Kafka versions in-between, disable the UseStrimziPodSets feature gate, and then downgrade to Streams for Apache Kafka 2.0 or earlier. 9.1.4. 
StableConnectIdentities feature gate The StableConnectIdentities feature gate introduced the StrimziPodSet custom resource for managing Kafka Connect and Kafka MirrorMaker 2 pods, replacing the use of OpenShift Deployment resources. StrimziPodSet resources give the pods stable names and stable addresses, which do not change during rolling upgrades, replacing the use of OpenShift Deployment resources. Important With the StableConnectIdentities feature gate permanently enabled, direct downgrades from Streams for Apache Kafka 2.7 and newer to Streams for Apache Kafka 2.3 or earlier are not possible. You must first downgrade through one of the Streams for Apache Kafka versions in-between, disable the StableConnectIdentities feature gate, and then downgrade to Streams for Apache Kafka 2.3 or earlier. 9.1.5. KafkaNodePools feature gate The KafkaNodePools feature gate introduced a new KafkaNodePool custom resource that enables the configuration of different pools of Apache Kafka nodes. A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. Each pool has its own unique configuration, which includes mandatory settings such as the number of replicas, storage configuration, and a list of assigned roles. You can assign the controller role, broker role, or both roles to all nodes in the pool using the .spec.roles property. When used with a ZooKeeper-based Apache Kafka cluster, it must be set to the broker role. When used with a KRaft-based Apache Kafka cluster, it can be set to broker , controller , or both. In addition, a node pool can have its own configuration of resource requests and limits, Java JVM options, and resource templates. Configuration options not set in the KafkaNodePool resource are inherited from the Kafka custom resource. The KafkaNodePool resources use a strimzi.io/cluster label to indicate to which Kafka cluster they belong. The label must be set to the name of the Kafka custom resource. The Kafka resource configuration must also include the strimzi.io/node-pools: enabled annotation, which is required when using node pools. Examples of the KafkaNodePool resources can be found in the example configuration files provided by Streams for Apache Kafka. Downgrading from KafkaNodePools If your cluster already uses KafkaNodePool custom resources, and you wish to downgrade to an older version of Streams for Apache Kafka that does not support them or with the KafkaNodePools feature gate disabled, you must first migrate from KafkaNodePool custom resources to managing Kafka nodes using only Kafka custom resources. For more information, see the instructions for reversing a migration to node pools . 9.1.6. UnidirectionalTopicOperator feature gate The UnidirectionalTopicOperator feature gate introduced a unidirectional topic management mode for creating Kafka topics using the KafkaTopic resource. Unidirectional mode is compatible with using KRaft for cluster management. With unidirectional mode, you create Kafka topics using the KafkaTopic resource, which are then managed by the Topic Operator. Any configuration changes to a topic outside the KafkaTopic resource are reverted. For more information on topic management, see Section 11.1, "Topic management" . 9.1.7. UseKRaft feature gate The UseKRaft feature gate introduced the KRaft (Kafka Raft metadata) mode for running Apache Kafka clusters without ZooKeeper. ZooKeeper and KRaft are mechanisms used to manage metadata and coordinate operations in Kafka clusters. 
KRaft mode eliminates the need for an external coordination service like ZooKeeper. In KRaft mode, Kafka nodes take on the roles of brokers, controllers, or both. They collectively manage the metadata, which is replicated across partitions. Controllers are responsible for coordinating operations and maintaining the cluster's state. For more information on using KRraft, see Chapter 2, Using Kafka in KRaft mode . 9.2. Stable feature gates (Beta) Stable feature gates have reached a beta level of maturity, and are generally enabled by default for all users. Stable feature gates are production-ready, but they can still be disabled. 9.2.1. ContinueReconciliationOnManualRollingUpdateFailure feature gate The ContinueReconciliationOnManualRollingUpdateFailure feature gate has a default state of enabled . The ContinueReconciliationOnManualRollingUpdateFailure feature gate allows the Cluster Operator to continue a reconciliation if the manual rolling update of the operands fails. It applies to the following operands that support manual rolling updates using the strimzi.io/manual-rolling-update annotation: ZooKeeper Kafka Kafka Connect Kafka MirrorMaker 2 Continuing the reconciliation after a manual rolling update failure allows the operator to recover from various situations that might prevent the update from succeeding. For example, a missing Persistent Volume Claim (PVC) or Persistent Volume (PV) might cause the manual rolling update to fail. However, the PVCs and PVs are created only in a later stage of the reconciliation. By continuing the reconciliation after this failure, the process can recreate the missing PVC or PV and recover. The ContinueReconciliationOnManualRollingUpdateFailure feature gate is used by the Cluster Operator. It is ignored by the User and Topic Operators. Disabling the ContinueReconciliationOnManualRollingUpdateFailure feature gate To disable the ContinueReconciliationOnManualRollingUpdateFailure feature gate, specify -ContinueReconciliationOnManualRollingUpdateFailure in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration. 9.3. Early access feature gates (Alpha) Early access feature gates have not yet reached the beta stage, and are disabled by default. An early access feature gate provides an opportunity for assessment before its functionality is permanently incorporated into Streams for Apache Kafka. Currently, there are no alpha level feature gates. 9.4. Enabling feature gates To modify a feature gate's default state, use the STRIMZI_FEATURE_GATES environment variable in the operator's configuration. You can modify multiple feature gates using this single environment variable. Specify a comma-separated list of feature gate names and prefixes. A + prefix enables the feature gate and a - prefix disables it. Example feature gate configuration that enables FeatureGate1 and disables FeatureGate2 env: - name: STRIMZI_FEATURE_GATES value: +FeatureGate1,-FeatureGate2 9.5. Feature gate releases Feature gates have three stages of maturity: Alpha - typically disabled by default Beta - typically enabled by default General Availability (GA) - typically always enabled Alpha stage features might be experimental or unstable, subject to change, or not sufficiently tested for production use. Beta stage features are well tested and their functionality is not likely to change. GA stage features are stable and should not change in the future. Alpha and beta stage features are removed if they do not prove to be useful. 
The ControlPlaneListener feature gate moved to GA stage in Streams for Apache Kafka 2.3. It is now permanently enabled and cannot be disabled. The ServiceAccountPatching feature gate moved to GA stage in Streams for Apache Kafka 2.3. It is now permanently enabled and cannot be disabled. The UseStrimziPodSets feature gate moved to GA stage in Streams for Apache Kafka 2.5 and the support for StatefulSets is completely removed. It is now permanently enabled and cannot be disabled. The StableConnectIdentities feature gate moved to GA stage in Streams for Apache Kafka 2.7. It is now permanently enabled and cannot be disabled. The KafkaNodePools feature gate moved to GA stage in Streams for Apache Kafka 2.8. It is now permanently enabled and cannot be disabled. To use KafkaNodePool resources, you still need to use the strimzi.io/node-pools: enabled annotation on the Kafka custom resources. The UnidirectionalTopicOperator feature gate moved to GA stage in Streams for Apache Kafka 2.8. It is now permanently enabled and cannot be disabled. The UseKRaft feature gate moved to GA stage in Streams for Apache Kafka 2.8. It is now permanently enabled and cannot be disabled. To use KRaft (ZooKeeper-less Apache Kafka), you still need to use the strimzi.io/kraft: enabled annotation on the Kafka custom resources or migrate from an existing ZooKeeper-based cluster. The ContinueReconciliationOnManualRollingUpdateFailure feature was introduced in Streams for Apache Kafka 2.8 and moved to beta stage in Streams for Apache Kafka 0.44.0. It is now enabled by default, but can be disabled if needed. Note Feature gates might be removed when they reach GA. This means that the feature was incorporated into the Streams for Apache Kafka core features and can no longer be disabled. Table 9.1. Feature gates and the Streams for Apache Kafka versions when they moved to alpha, beta, or GA Feature gate Alpha Beta GA ControlPlaneListener 1.8 2.0 2.3 ServiceAccountPatching 1.8 2.0 2.3 UseStrimziPodSets 2.1 2.3 2.5 UseKRaft 2.2 2.7 2.8 StableConnectIdentities 2.4 2.6 2.7 KafkaNodePools 2.5 2.7 2.8 UnidirectionalTopicOperator 2.5 2.7 2.8 ContinueReconciliationOnManualRollingUpdateFailure 2.8 2.9 - If a feature gate is enabled, you may need to disable it before upgrading or downgrading from a specific Streams for Apache Kafka version (or first upgrade / downgrade to a version of Streams for Apache Kafka where it can be disabled). The following table shows which feature gates you need to disable when upgrading or downgrading Streams for Apache Kafka versions. Table 9.2. Feature gates to disable when upgrading or downgrading Streams for Apache Kafka Disable Feature gate Upgrading from Streams for Apache Kafka version Downgrading to Streams for Apache Kafka version ControlPlaneListener 1.7 and earlier 1.7 and earlier UseStrimziPodSets - 2.0 and earlier StableConnectIdentities - 2.3 and earlier | [
"env: - name: STRIMZI_FEATURE_GATES value: +FeatureGate1,-FeatureGate2"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/ref-operator-cluster-feature-gates-str |
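As a sketch of the STRIMZI_FEATURE_GATES configuration shown above, the following disables the ContinueReconciliationOnManualRollingUpdateFailure feature gate on a Cluster Operator deployment running on OpenShift; the deployment name and namespace are assumptions and depend on how Streams for Apache Kafka was installed:

# Set the feature gates environment variable on the Cluster Operator deployment
oc set env deployment/strimzi-cluster-operator \
  -n my-kafka-operator-namespace \
  STRIMZI_FEATURE_GATES="-ContinueReconciliationOnManualRollingUpdateFailure"

Changing the environment variable triggers a new rollout of the operator deployment.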
Using Designate for DNS-as-a-Service | Using Designate for DNS-as-a-Service Red Hat OpenStack Platform 17.0 Information about how to manage a domain name system (DNS) using the DNS service in Red Hat OpenStack Platform OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_designate_for_dns-as-a-service/index |
7.10. augeas | 7.10. augeas 7.10.1. RHBA-2015:1256 - augeas bug fix and enhancement update Updated augeas packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. Augeas is a utility for editing configuration. Augeas parses configuration files in their native formats and transforms them into a tree. Configuration changes are made by manipulating this tree and saving it back into native configuration files. Augeas also uses "lenses" as basic building blocks for establishing the mapping from files into the Augeas tree and back. Bug Fixes BZ# 1112388 Previously, some cgroup controller types used in the /etc/cgconfig.conf file were not recognized. As a consequence, parsing error occurred in Augeas and an error message was returned. With this update, the Augeas module can parse files containing these controller names as expected. BZ# 1121263 Entries in the /etc/services file containing colons in the service name prevented Augeas from parsing the file. This update makes sure that the "service_name" field in the services.aug file is able to support the colon character, and the aforementioned entries can now be parsed successfully. BZ# 1129508 When entries in /etc/rsyslog.conf were configured for remote logging over Transmission Control Protocol (TCP), Augeas was unable to parse the file. The underlying source code has been fixed, and files containing this configuration are now parsed successfully. BZ# 1144652 By default, the /etc/sysconfig/iptables.save file was parsed by the wrong module, which led to a parsing failure and an error reported by Augeas. The wrong module has been substituted with a correct one, and /etc/sysconfig/iptables.save is now parsed correctly by default. BZ# 1175854 Previously, the Augeas utility did not correctly parse the "ssh" and "fence_kdump_*" parameters in the /etc/kdump.conf file. As a consequence, using Augeas to edit these parameters in kdump configuration failed. With this update, Augeas has been updated to parse "ssh" and "fence_kdump_*" as intended, and the described problem no longer occurs. BZ# 1186318 Previously, the aug_match API returned paths of files and nodes with special characters unescaped, unsuitable for use in further API calls. Consequently, specially constructed file names could cause programs built on Augeas to function incorrectly, and implementing escaping in such programs was impossible. With this update, Augeas escapes paths returned from aug_match correctly, and paths returned from aug_match can be used safely and reliably in further API calls. BZ# 1203597 Prior to this update, Augeas was unable to parse the /etc/krb5.conf configuration files containing values with curly brackets ("{}"). To fix this bug, Augeas lens (parser) has been fixed to handle these characters in krb5.conf setting values, and Augeas can now parse these krb5.conf files as expected. BZ# 1209885 Previously. Augeas was unable to parse the .properties (Java-style) files containing a multi-line value that begins with a blank line. Augeas lens (parser) has been fixed to accept an empty starting line, thus fixing this bug. Enhancement BZ# 1160261 A lens for the /etc/shadow file format has been added to Augeas to parse the shadow password file. Users of augeas are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-augeas |
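As a brief illustration of the tree-based editing model described above, the following augtool session prints part of the parsed tree for one of the files mentioned in the bug fixes and changes a value through a lens; the node paths are illustrative and depend on the lenses installed on the system:

# Inspect and edit configuration through Augeas lenses
augtool <<'EOF'
print /files/etc/services
set /files/etc/ssh/sshd_config/PermitRootLogin no
save
EOF

If the save succeeds, Augeas writes the change back to /etc/ssh/sshd_config in its native format.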
16.7. Assigning GPU Devices | 16.7. Assigning GPU Devices To assign a GPU to a guest, use one of the following methods: GPU PCI Device Assignment - Using this method, it is possible to remove a GPU device from the host and assign it to a single guest. NVIDIA vGPU Assignment - This method makes it possible to create multiple mediated devices from a physical GPU, and assign these devices as virtual GPUs to multiple guests. This is only supported on selected NVIDIA GPUs, and only one mediated device can be assigned to a single guest. 16.7.1. GPU PCI Device Assignment Red Hat Enterprise Linux 7 supports PCI device assignment of the following PCIe-based GPU devices as non-VGA graphics devices: NVIDIA Quadro K-Series, M-Series, P-Series, and later architectures (models 2000 series or later) NVIDIA Tesla K-Series, M-Series, and later architectures Note The number of GPUs that can be attached to a VM is limited by the maximum number of assigned PCI devices, which in RHEL 7 is currently 32. However, attaching multiple GPUs to a virtual machine is likely to cause problems with memory-mapped I/O (MMIO) on the guest, which may result in the GPUs not being available to the VM. To work around these problems, set a larger 64-bit MMIO space and configure the vCPU physical address bits to make the extended 64-bit MMIO space addressable. To assign a GPU to a guest virtual machine, you must enable the I/O Memory Management Unit (IOMMU) on the host machine, identify the GPU device by using the lspci command, detach the device from the host, attach it to the guest, and configure Xorg on the guest - as described in the following procedures: Procedure 16.13. Enable IOMMU support in the host machine kernel Edit the kernel command line For an Intel VT-d system, IOMMU is activated by adding the intel_iommu=on and iommu=pt parameters to the kernel command line. For an AMD-Vi system, only the iommu=pt parameter is needed. To enable this option, edit or add the GRUB_CMDLINE_LINUX line to the /etc/sysconfig/grub configuration file as follows: Note For further information on IOMMU, see Appendix E, Working with IOMMU Groups . Regenerate the boot loader configuration For the changes to the kernel command line to apply, regenerate the boot loader configuration using the grub2-mkconfig command: Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Reboot the host For the changes to take effect, reboot the host machine: Procedure 16.14. Excluding the GPU device from binding to the host physical machine driver For GPU assignment, it is recommended to exclude the device from binding to host drivers, as these drivers often do not support dynamic unbinding of the device. Identify the PCI bus address To identify the PCI bus address and IDs of the device, run the following lspci command. In this example, a VGA controller such as an NVIDIA Quadro or GRID card is used: The resulting search reveals that the PCI bus address of this device is 0000:02:00.0 and the PCI IDs for the device are 10de:11fa. Prevent the native host machine driver from using the GPU device To prevent the native host machine driver from using the GPU device, you can use a PCI ID with the pci-stub driver. To do this, append the pci-stub.ids option, with the PCI IDs as its value, to the GRUB_CMDLINE_LINUX line located in the /etc/sysconfig/grub configuration file, for example as follows: To add additional PCI IDs for pci-stub, separate them with a comma; a sketch of the resulting line is shown after this step.
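A minimal sketch of the edited /etc/sysconfig/grub line for the Intel example above; the existing options are placeholders that must be kept as they appear in your own configuration, and the pci-stub.ids value must match the PCI IDs of your device:

# /etc/sysconfig/grub (excerpt) - assumed existing options plus the IOMMU and pci-stub settings
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rhgb quiet intel_iommu=on iommu=pt pci-stub.ids=10de:11fa"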
Regenerate the boot loader configuration Regenerate the boot loader configuration using the grub2-mkconfig to include this option: Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Reboot the host machine In order for the changes to take effect, reboot the host machine: Procedure 16.15. Optional: Editing the GPU IOMMU configuration Prior to attaching the GPU device, editing its IOMMU configuration may be needed for the GPU to work properly on the guest. Display the XML information of the GPU To display the settings of the GPU in XML form, you first need to convert its PCI bus address to libvirt-compatible format by appending pci_ and converting delimiters to underscores. In this example, the GPU PCI device identified with the 0000:02:00.0 bus address (as obtained in the procedure ) becomes pci_0000_02_00_0 . Use the libvirt address of the device with the virsh nodedev-dumpxml to display its XML configuration: <device> <name>pci_0000_02_00_0</name> <path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path> <parent>pci_0000_00_03_0</parent> <driver> <name>pci-stub</name> </driver> <capability type='pci'> <domain>0</domain> <bus>2</bus> <slot>0</slot> <function>0</function> <product id='0x11fa'>GK106GL [Quadro K4000]</product> <vendor id='0x10de'>NVIDIA Corporation</vendor> <!-- pay attention to the following lines --> <iommuGroup number='13'> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='8' width='16'/> <link validity='sta' speed='2.5' width='16'/> </pci-express> </capability> </device> Note the <iommuGroup> element of the XML. The iommuGroup indicates a set of devices that are considered isolated from other devices due to IOMMU capabilities and PCI bus topologies. All of the endpoint devices within the iommuGroup (meaning devices that are not PCIe root ports, bridges, or switch ports) need to be unbound from the native host drivers in order to be assigned to a guest. In the example above, the group is composed of the GPU device (0000:02:00.0) as well as the companion audio device (0000:02:00.1). For more information, see Appendix E, Working with IOMMU Groups . Adjust IOMMU settings In this example, assignment of NVIDIA audio functions is not supported due to hardware issues with legacy interrupt support. In addition, the GPU audio function is generally not useful without the GPU itself. Therefore, in order to assign the GPU to a guest, the audio function must first be detached from native host drivers. This can be done using one of the following: Detect the PCI ID for the device and append it to the pci-stub.ids option in the /etc/sysconfig/grub file, as detailed in Procedure 16.14, "Excluding the GPU device from binding to the host physical machine driver" Use the virsh nodedev-detach command, for example as follows: Procedure 16.16. Attaching the GPU The GPU can be attached to the guest using any of the following methods: Using the Virtual Machine Manager interface. For details, see Section 16.1.2, "Assigning a PCI Device with virt-manager" . 
Creating an XML configuration fragment for the GPU and attaching it with the virsh attach-device : Create an XML for the device, similar to the following: <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </source> </hostdev> Save this to a file and run virsh attach-device [domain] [file] --persistent to include the XML in the guest configuration. Note that the assigned GPU is added in addition to the existing emulated graphics device in the guest machine. The assigned GPU is handled as a secondary graphics device in the virtual machine. Assignment as a primary graphics device is not supported and emulated graphics devices in the guest's XML should not be removed. Editing the guest XML configuration using the virsh edit command and adding the appropriate XML segment manually. Procedure 16.17. Modifying the Xorg configuration on the guest The GPU's PCI bus address on the guest will be different than on the host. To enable the guest to use the GPU properly, configure the guest's Xorg display server to use the assigned GPU address: In the guest, use the lspci command to determine the PCI bus address of the GPU: In this example, the bus address is 00:09.0. In the /etc/X11/xorg.conf file on the guest, add a BusID option with the detected address adjusted as follows: Important If the bus address detected in Step 1 is hexadecimal, you need to convert the values between delimiters to the decimal system. For example, 00:0a.0 should be converted into PCI:0:10:0. Note When using an assigned NVIDIA GPU in the guest, only the NVIDIA drivers are supported. Other drivers may not work and may generate errors. For a Red Hat Enterprise Linux 7 guest, the nouveau driver can be blacklisted using the option modprobe.blacklist=nouveau on the kernel command line during install. For information on other guest virtual machines, see the operating system's specific documentation. Depending on the guest operating system, with the NVIDIA drivers loaded, the guest may support using both the emulated graphics and assigned graphics simultaneously or may disable the emulated graphics. Note that access to the assigned graphics framebuffer is not provided by applications such as virt-manager . If the assigned GPU is not connected to a physical display, guest-based remoting solutions may be necessary to access the GPU desktop. As with all PCI device assignment, migration of a guest with an assigned GPU is not supported and each GPU is owned exclusively by a single guest. Depending on the guest operating system, hot plug support of GPUs may be available. 16.7.2. NVIDIA vGPU Assignment The NVIDIA vGPU feature makes it possible to divide a physical GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple guests as virtual GPUs. As a result, these guests share the performance of a single physical GPU. Important This feature is only available on a limited set of NVIDIA GPUs. For an up-to-date list of these devices, see the NVIDIA GPU Software Documentation . 16.7.2.1. NVIDIA vGPU Setup To set up the vGPU feature, you first need to obtain NVIDIA vGPU drivers for your GPU device, then create mediated devices, and assign them to the intended guest machines: Obtain the NVIDIA vGPU drivers and install them on your system. For instructions, see the NVIDIA documentation .
If the NVIDIA software installer did not create the /etc/modprobe.d/nvidia-installer-disable-nouveau.conf file, create a .conf file (of any name) in the /etc/modprobe.d/ directory. Add the following lines in the file: blacklist nouveau options nouveau modeset=0 Regenerate the initial ramdisk for the current kernel, then reboot: If you need to use a prior supported kernel version with mediated devices, regenerate the initial ramdisk for all installed kernel versions: Check that the nvidia_vgpu_vfio module has been loaded by the kernel and that the nvidia-vgpu-mgr.service service is running. Write a device UUID to /sys/class/mdev_bus/ pci_dev /mdev_supported_types/ type-id /create , where pci_dev is the PCI address of the host GPU, and type-id is an ID of the host GPU type. The following example shows how to create a mediated device of nvidia-63 vGPU type on an NVIDIA Tesla P4 card: For type-id values for specific devices, see section 1.3.1. Virtual GPU Types in Virtual GPU software documentation . Note that only Q-series NVIDIA vGPUs, such as GRID P4-2Q , are supported as mediated device GPU types on Linux guests. Add the following lines to the <devices/> sections in XML configurations of guests that you want to share the vGPU resources. Use the UUID value generated by the uuidgen command in the step. Each UUID can only be assigned to one guest at a time. <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'> <source> <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/> </source> </hostdev> Important For the vGPU mediated devices to work properly on the assigned guests, NVIDIA vGPU guest software licensing needs to be set up for the guests. For further information and instructions, see the NVIDIA virtual GPU software documentation . 16.7.2.2. Setting up and using the VNC console for video streaming with NVIDIA vGPU As a Technology Preview , the Virtual Network Computing (VNC) console can be used with GPU-based mediated devices, including NVIDIA vGPU, in Red Hat Enterprise Linux 7.7 and later. As a result, you can use VNC to display the accelerated graphical output provided by an NVIDIA vGPU device. Warning This feature is currently only provided as a Technology Preview and is not supported by Red Hat. Therefore, using the procedure below in a production environment is heavily discouraged. To configure vGPU output rendering in a VNC console on your virtual machine, do the following: Install NVIDIA vGPU drivers and configure NVIDIA vGPU on your system as described in Section 16.7.2.1, "NVIDIA vGPU Setup" . Ensure the mediated device's XML configuration includes the display='on' parameter. For example: <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'> <source> <address uuid='ba26a3e2-8e1e-4f39-9de7-b26bd210268a'/> </source> </hostdev> Optionally, set the VM's video model type as none . For example: <video> <model type='none'/> </video> If this is not specified, you receive two different display outputs - one from an emulated Cirrus or QXL card and one from NVIDIA vGPU. Also note that using model type='none' currently makes it impossible to see the boot graphical output until the drivers are initialized. As a result, the first graphical output displayed is the login screen. Ensure the XML configuration of the VM's graphics type is vnc . For example: <graphics type='vnc' port='-1' autoport='yes'> <listen type='address'/> </graphics> Start the virtual machine. Connect to the virtual machine using the VNC viewer remote desktop client. 
Note If the VM is set up with an emulated VGA as the primary video device and vGPU as the secondary device, use the ctrl+alt+2 keyboard shortcut to switch to the vGPU display. 16.7.2.3. Removing NVIDIA vGPU Devices To remove a mediated vGPU device, use the following command when the device is inactive, and replace uuid with the UUID of the device, for example 30820a6f-b1a5-4503-91ca-0c10ba58692a . Note that attempting to remove a vGPU device that is currently in use by a guest triggers the following error: 16.7.2.4. Querying NVIDIA vGPU Capabilities To obtain additional information about the mediated devices on your system, such as how many mediated devices of a given type can be created, use the virsh nodedev-list --cap mdev_types and virsh nodedev-dumpxml commands. For example, the following displays available vGPU types on a Tesla P4 card: USD virsh nodedev-list --cap mdev_types pci_0000_01_00_0 USD virsh nodedev-dumpxml pci_0000_01_00_0 <...> <capability type='mdev_types'> <type id='nvidia-70'> <name>GRID P4-8A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>1</availableInstances> </type> <type id='nvidia-69'> <name>GRID P4-4A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>2</availableInstances> </type> <type id='nvidia-67'> <name>GRID P4-1A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>8</availableInstances> </type> <type id='nvidia-65'> <name>GRID P4-4Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>2</availableInstances> </type> <type id='nvidia-63'> <name>GRID P4-1Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>8</availableInstances> </type> <type id='nvidia-71'> <name>GRID P4-1B</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>8</availableInstances> </type> <type id='nvidia-68'> <name>GRID P4-2A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>4</availableInstances> </type> <type id='nvidia-66'> <name>GRID P4-8Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>1</availableInstances> </type> <type id='nvidia-64'> <name>GRID P4-2Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>4</availableInstances> </type> </capability> </...> 16.7.2.5. Remote Desktop Streaming Services for NVIDIA vGPU The following remote desktop streaming services have been successfully tested for use with the NVIDIA vGPU feature on Red Hat Enterprise Linux 7: HP-RGS Mechdyne TGX - It is currently not possible to use Mechdyne TGX with Windows Server 2016 guests. NICE DCV - When using this streaming service, Red Hat recommends using fixed resolution settings, as using dynamic resolution in some cases results in a black screen. 16.7.2.6. Setting up the VNC console for video streaming with NVIDIA vGPU Introduction As a Technology Preview , the Virtual Network Computing (VNC) console can be used with GPU-based mediated devices, including NVIDIA vGPU, in Red Hat Enterprise Linux 8. As a result, you can use VNC to display the accelerated graphical output provided by an NVIDIA vGPU device. Important Due to being a Technology Preview, this feature is not supported by Red Hat. Therefore, using the procedure below in a production environment is heavily discouraged. Configuration To configure vGPU output rendering in a VNC console on your virtual machine, do the following: Install NVIDIA vGPU drivers and configure NVIDIA vGPU on your host as described in Section 16.7.2, "NVIDIA vGPU Assignment" . Ensure the mediated device's XML configuration includes the display='on' parameter. 
For example: <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'> <source> <address uuid='ba26a3e2-8e1e-4f39-9de7-b26bd210268a'/> </source> </hostdev> Optionally, set the VM's video model type as none . For example: <video> <model type='none'/> </video> Ensure the XML configuration of the VM's graphics type is spice or vnc . An example for spice : <graphics type='spice' autoport='yes'> <listen type='address'/> <image compression='off'/> </graphics> An example for vnc : <graphics type='vnc' port='-1' autoport='yes'> <listen type='address'/> </graphics> Start the virtual machine. Connect to the virtual machine using a client appropriate to the graphics protocol you configured in the previous steps. For VNC, use the VNC viewer remote desktop client. If the VM is set up with an emulated VGA as the primary video device and vGPU as the secondary device, use the ctrl+alt+2 keyboard shortcut to switch to the vGPU display. For SPICE, use the virt-viewer application. | [
"GRUB_CMDLINE_LINUX=\"rd.lvm.lv=vg_VolGroup00/LogVol01 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_VolGroup_1/root vconsole.keymap=us USD([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) rhgb quiet intel_iommu=on iommu=pt\"",
"grub2-mkconfig -o /etc/grub2.cfg",
"reboot",
"lspci -Dnn | grep VGA 0000:02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106GL [Quadro K4000] [10de:11fa] (rev a1)",
"GRUB_CMDLINE_LINUX=\"rd.lvm.lv=vg_VolGroup00/LogVol01 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_VolGroup_1/root vconsole.keymap=us USD([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) rhgb quiet intel_iommu=on iommu=pt pci-stub.ids=10de:11fa\"",
"grub2-mkconfig -o /etc/grub2.cfg",
"reboot",
"virsh nodedev-dumpxml pci_0000_02_00_0",
"<device> <name>pci_0000_02_00_0</name> <path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path> <parent>pci_0000_00_03_0</parent> <driver> <name>pci-stub</name> </driver> <capability type='pci'> <domain>0</domain> <bus>2</bus> <slot>0</slot> <function>0</function> <product id='0x11fa'>GK106GL [Quadro K4000]</product> <vendor id='0x10de'>NVIDIA Corporation</vendor> <!-- pay attention to the following lines --> <iommuGroup number='13'> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='8' width='16'/> <link validity='sta' speed='2.5' width='16'/> </pci-express> </capability> </device>",
"virsh nodedev-detach pci_0000_02_00_1 Device pci_0000_02_00_1 detached",
"<hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </source> </hostdev>",
"lspci | grep VGA 00:02.0 VGA compatible controller: Device 1234:111 00:09.0 VGA compatible controller: NVIDIA Corporation GK106GL [Quadro K4000] (rev a1)",
"Section \"Device\" Identifier \"Device0\" Driver \"nvidia\" VendorName \"NVIDIA Corporation\" BusID \"PCI:0:9:0\" EndSection",
"blacklist nouveau options nouveau modeset=0",
"dracut --force reboot",
"dracut --regenerate-all --force reboot",
"lsmod | grep nvidia_vgpu_vfio nvidia_vgpu_vfio 45011 0 nvidia 14333621 10 nvidia_vgpu_vfio mdev 20414 2 vfio_mdev,nvidia_vgpu_vfio vfio 32695 3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1 systemctl status nvidia-vgpu-mgr.service nvidia-vgpu-mgr.service - NVIDIA vGPU Manager Daemon Loaded: loaded (/usr/lib/systemd/system/nvidia-vgpu-mgr.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-03-16 10:17:36 CET; 5h 8min ago Main PID: 1553 (nvidia-vgpu-mgr) [...]",
"uuidgen 30820a6f-b1a5-4503-91ca-0c10ba58692a echo \"30820a6f-b1a5-4503-91ca-0c10ba58692a\" > /sys/class/mdev_bus/0000:01:00.0/mdev_supported_types/nvidia-63/create",
"<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'> <source> <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/> </source> </hostdev>",
"<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'> <source> <address uuid='ba26a3e2-8e1e-4f39-9de7-b26bd210268a'/> </source> </hostdev>",
"<video> <model type='none'/> </video>",
"<graphics type='vnc' port='-1' autoport='yes'> <listen type='address'/> </graphics>",
"echo 1 > /sys/bus/mdev/devices/ uuid /remove",
"echo: write error: Device or resource busy",
"virsh nodedev-list --cap mdev_types pci_0000_01_00_0 virsh nodedev-dumpxml pci_0000_01_00_0 <...> <capability type='mdev_types'> <type id='nvidia-70'> <name>GRID P4-8A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>1</availableInstances> </type> <type id='nvidia-69'> <name>GRID P4-4A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>2</availableInstances> </type> <type id='nvidia-67'> <name>GRID P4-1A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>8</availableInstances> </type> <type id='nvidia-65'> <name>GRID P4-4Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>2</availableInstances> </type> <type id='nvidia-63'> <name>GRID P4-1Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>8</availableInstances> </type> <type id='nvidia-71'> <name>GRID P4-1B</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>8</availableInstances> </type> <type id='nvidia-68'> <name>GRID P4-2A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>4</availableInstances> </type> <type id='nvidia-66'> <name>GRID P4-8Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>1</availableInstances> </type> <type id='nvidia-64'> <name>GRID P4-2Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>4</availableInstances> </type> </capability> </...>",
"<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'> <source> <address uuid='ba26a3e2-8e1e-4f39-9de7-b26bd210268a'/> </source> </hostdev>",
"<video> <model type='none'/> </video>",
"<graphics type='spice' autoport='yes'> <listen type='address'/> <image compression='off'/> </graphics>",
"<graphics type='vnc' port='-1' autoport='yes'> <listen type='address'/> </graphics>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-device-gpu |
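The following minimal sketch consolidates the mediated-device creation, verification, and removal steps described in the section above. The PCI address 0000:01:00.0 and the nvidia-63 type ID are the example values used earlier and are assumptions; substitute the values reported for your own card:
# Check how many more devices of this vGPU type the card can still host, then create one.
pci_dev=0000:01:00.0
type_id=nvidia-63
cat /sys/class/mdev_bus/$pci_dev/mdev_supported_types/$type_id/available_instances
uuid=$(uuidgen)
echo $uuid > /sys/class/mdev_bus/$pci_dev/mdev_supported_types/$type_id/create
ls /sys/bus/mdev/devices/$uuid # confirms the mediated device now exists
echo 1 > /sys/bus/mdev/devices/$uuid/remove # removes it again while it is not assigned to a running guest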
Chapter 1. Provisioning APIs | Chapter 1. Provisioning APIs 1.1. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 1.2. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 1.3. DataImage [metal3.io/v1alpha1] Description DataImage is the Schema for the dataimages API. Type object 1.4. FirmwareSchema [metal3.io/v1alpha1] Description FirmwareSchema is the Schema for the firmwareschemas API. Type object 1.5. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API. Type object 1.6. HostFirmwareComponents [metal3.io/v1alpha1] Description HostFirmwareComponents is the Schema for the hostfirmwarecomponents API. Type object 1.7. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API. Type object 1.8. Metal3Remediation [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3Remediation is the Schema for the metal3remediations API. Type object 1.9. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3RemediationTemplate is the Schema for the metal3remediationtemplates API. Type object 1.10. PreprovisioningImage [metal3.io/v1alpha1] Description PreprovisioningImage is the Schema for the preprovisioningimages API. Type object 1.11. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/provisioning-apis |
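A few hedged command-line examples for inspecting these resources on a running cluster follow; the openshift-machine-api namespace and the exact resource names are typical defaults and are assumptions, not values confirmed by this reference:
oc get crd baremetalhosts.metal3.io provisionings.metal3.io # confirm the CRDs are installed
oc get baremetalhosts -n openshift-machine-api # BareMetalHost objects are usually created in this namespace
oc explain baremetalhost.spec # show the documented spec fields for the BareMetalHost API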
Chapter 1. Installing a cluster on any platform | Chapter 1. Installing a cluster on any platform In OpenShift Container Platform version 4.16, you can install a cluster on any infrastructure that you provision, including virtualization and cloud environments. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 1.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 1.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 1.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 1.3.2. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 1.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 1.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 1.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. 
Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 1.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 1.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 1.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 1.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 1.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. 
If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 1.3.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. 
DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 1.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 1.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. 
Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
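For example, on a RHEL-based load balancer or helper host that uses firewalld, the front-end ports described in the load balancing section could be opened with a sketch such as the following; the use of firewalld, the default zone, and the port list are assumptions to adapt to your environment:
firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=443/tcp --add-port=80/tcp
firewall-cmd --reload
ss -nltp | grep -E ':(6443|22623|443|80)[[:space:]]' # after HAProxy starts, verify it is listening on the expected ports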
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 1.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 1.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 1.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 1.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 1.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 1.9.1. Sample install-config.yaml file for other platforms You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. 
If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 1.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. 
In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 1.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . 
This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 1.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. 
coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 1.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
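For convenience, you can script the digest step for all of the Ignition config files at once. The following Bash sketch is illustrative only and is not part of the documented procedure; it assumes the same <installation_directory> placeholder that is used in the commands above:
# Illustrative helper: capture the SHA512 digest of each Ignition config so that it
# can be passed to coreos-installer later as --ignition-hash=sha512-<digest>.
# Replace <installation_directory> with the directory that contains your .ign files.
for node_type in bootstrap master worker; do
  digest=$(sha512sum "<installation_directory>/${node_type}.ign" | awk '{print $1}')
  echo "${node_type}: sha512-${digest}"
done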
Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. 
Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 1.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
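For convenience, you can check all three Ignition config URLs in one pass. The following Bash sketch is illustrative only and is not part of the documented procedure; it assumes the same <HTTP_server> placeholder that is used in the command above:
# Illustrative helper: confirm that each Ignition config returns HTTP 200 from the
# HTTP server before you network boot any machines.
# Replace <HTTP_server> with the address of your HTTP server.
for ign in bootstrap.ign master.ign worker.ign; do
  status=$(curl -sk -o /dev/null -w '%{http_code}' "http://<HTTP_server>/${ign}")
  echo "${ign}: HTTP ${status}"
done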
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. 
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. 
Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 1.11.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 1.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. 
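The following sketch illustrates the procedure above end to end: it assigns a static address from the live shell with nmcli and then runs coreos-installer with --copy-network so that the configuration is carried over to the installed system. The connection profile name, addresses, and Ignition URL are example placeholders for your environment, not values required by the procedure:
# From the live ISO shell, modify the active connection profile. 'Wired connection 1'
# is an example name; list the profiles on your system first with 'nmcli con show'.
sudo nmcli con mod 'Wired connection 1' ipv4.method manual \
  ipv4.addresses 10.10.10.2/24 ipv4.gateway 10.10.10.254 ipv4.dns 4.4.4.41
sudo nmcli con up 'Wired connection 1'
# Install RHCOS and copy the NetworkManager profiles into the installed system.
sudo coreos-installer install --copy-network \
  --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>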
Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 1.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 1.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. 
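If you want to confirm how an existing node is laid out before deciding on a custom partition scheme, you can check which filesystems back these two paths on a running cluster. This optional check is not part of the procedure that follows, and <node_name> is a placeholder:
# Optional check on a running node: show the filesystems that back the kubelet state
# directory (nodefs) and container storage (imagefs). With the default partition
# scheme, both paths report the root filesystem.
oc debug node/<node_name> -- chroot /host df -h /var/lib/kubelet /var/lib/containers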
The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The files in the <installation_directory>/manifests and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. Next steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 1.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number.
Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 1.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 1.11.3.4. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 1.11.3.4.1. 
Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . 
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond, and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command ( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 1.11.3.4.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 1.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the specified target device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname.
--network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. 
--ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 1.11.3.4.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 1.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. 
For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 1.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 1.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 1.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
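The client CSRs for newly added machines can take a few minutes to appear after the nodes boot. If you want to watch for them while the machines join the cluster, you can poll the CSR list; as a minimal illustration (the five-second interval is an arbitrary choice, not part of the documented procedure):

watch -n5 oc get csr

New requests are listed with the Pending condition, as described in the procedure that follows.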
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 1.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. 1.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 1.15.2. 
Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 1.15.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 1.15.3.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: Then, change the line to 1.15.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. 
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 1.15.3.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 1.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
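In addition to checking the individual cluster Operators as shown in the procedure below, the ClusterVersion resource summarizes overall installation progress in a single status line. As an illustrative supplementary check (not part of the documented procedure):

oc get clusterversion

When the AVAILABLE column reports True for the target version, the Cluster Version Operator has finished deploying the cluster.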
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 1.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 1.18. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_any_platform/installing-platform-agnostic |
Storage APIs | Storage APIs OpenShift Container Platform 4.16 Reference guide for storage APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/storage_apis/index |
Chapter 14. Using Kerberos (GSSAPI) authentication | Chapter 14. Using Kerberos (GSSAPI) authentication AMQ Streams supports the use of the Kerberos (GSSAPI) authentication protocol for secure single sign-on access to your Kafka cluster. GSSAPI is an API wrapper for Kerberos functionality, insulating applications from underlying implementation changes. Kerberos is a network authentication system that allows clients and servers to authenticate to each other by using symmetric encryption and a trusted third party, the Kerberos Key Distribution Centre (KDC). 14.1. Setting up AMQ Streams to use Kerberos (GSSAPI) authentication This procedure shows how to configure AMQ Streams so that Kafka clients can access Kafka and ZooKeeper using Kerberos (GSSAPI) authentication. The procedure assumes that a Kerberos krb5 resource server has been set up on a Red Hat Enterprise Linux host. The procedure shows, with examples, how to configure: Service principals Kafka brokers to use the Kerberos login ZooKeeper to use Kerberos login Producer and consumer clients to access Kafka using Kerberos authentication The instructions describe Kerberos set up for a single ZooKeeper and Kafka installation on a single host, with additional configuration for a producer and consumer client. Prerequisites To be able to configure Kafka and ZooKeeper to authenticate and authorize Kerberos credentials, you will need: Access to a Kerberos server A Kerberos client on each Kafka broker host For more information on the steps to set up a Kerberos server, and clients on broker hosts, see the example Kerberos on RHEL set up configuration . How you deploy Kerberos depends on your operating system. Red Hat recommends using Identity Management (IdM) when setting up Kerberos on Red Hat Enterprise Linux. Users of an Oracle or IBM JDK must install a Java Cryptography Extension (JCE). Oracle JCE IBM JCE Add service principals for authentication From your Kerberos server, create service principals (users) for ZooKeeper, Kafka brokers, and Kafka producer and consumer clients. Service principals must take the form SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-REALM . Create the service principals, and keytabs that store the principal keys, through the Kerberos KDC. For example: zookeeper/[email protected] kafka/[email protected] producer1/[email protected] consumer1/[email protected] The ZooKeeper service principal must have the same hostname as the zookeeper.connect configuration in the Kafka config/server.properties file: zookeeper.connect= node1.example.redhat.com :2181 If the hostname is not the same, localhost is used and authentication will fail. Create a directory on the host and add the keytab files: For example: /opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab Ensure the kafka user can access the directory: chown kafka:kafka -R /opt/kafka/krb5 Configure ZooKeeper to use a Kerberos Login Configure ZooKeeper to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for zookeeper . 
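The service principals and keytabs referenced in the following steps were created on the KDC as described above. The exact commands depend on your Kerberos implementation; as an illustration only, with an MIT Kerberos KDC the ZooKeeper principal and keytab might be created with kadmin.local and then copied to the broker host (adapt the principal name, keytab path, and realm to your environment):

kadmin.local -q "addprinc -randkey zookeeper/[email protected]"
kadmin.local -q "ktadd -k /opt/kafka/krb5/zookeeper-node1.keytab zookeeper/[email protected]"

The same pattern applies to the kafka, producer1, and consumer1 principals.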
Create or modify the opt/kafka/config/jaas.conf file to support ZooKeeper client and server operations: Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" 4 principal="zookeeper/[email protected]"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; 1 Set to true to get the principal key from the keytab. 2 Set to true to store the principal key. 3 Set to true to obtain the Ticket Granting Ticket (TGT) from the ticket cache. 4 The keyTab property points to the location of the keytab file copied from the Kerberos KDC. The location and file must be readable by the kafka user. 5 The principal property is configured to match the fully-qualified principal name created on the KDC host, which follows the format SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-NAME . Edit opt/kafka/config/zookeeper.properties to use the updated JAAS configuration: # ... requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20 1 Controls the frequency for login renewal in milliseconds, which can be adjusted to suit ticket renewal intervals. Default is one hour. 2 Dictates whether the hostname is used as part of the login principal name. If using a single keytab for all nodes in the cluster, this is set to true . However, it is recommended to generate a separate keytab and fully-qualified principal for each broker host for troubleshooting. 3 Controls whether the realm name is stripped from the principal name for Kerberos negotiations. It is recommended that this setting is set as false . 4 Enables SASL authentication mechanisms for the ZooKeeper server and client. 5 The RequireSasl properties controls whether SASL authentication is required for quorum events, such as master elections. 6 The loginContext properties identify the name of the login context in the JAAS configuration used for authentication configuration of the specified component. The loginContext names correspond to the names of the relevant sections in the opt/kafka/config/jaas.conf file. 7 Controls the naming convention to be used to form the principal name used for identification. The placeholder _HOST is automatically resolved to the hostnames defined by the server.1 properties at runtime. 
Start ZooKeeper with JVM parameters to specify the Kerberos login configuration: su - kafka export EXTRA_ARGS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties If you are not using the default service name ( zookeeper ), add the name using the -Dzookeeper.sasl.client.username= NAME parameter. Note If you are using the /etc/krb5.conf location, you do not need to specify -Djava.security.krb5.conf=/etc/krb5.conf when starting ZooKeeper, Kafka, or the Kafka producer and consumer. Configure the Kafka broker server to use a Kerberos login Configure Kafka to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for kafka . Modify the opt/kafka/config/jaas.conf file with the following elements: KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; Configure each broker in the Kafka cluster by modifying the listener configuration in the config/server.properties file so the listeners use the SASL/GSSAPI login. Add the SASL protocol to the map of security protocols for the listener, and remove any unwanted protocols. For example: # ... broker.id=0 # ... listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION # ... listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 # .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5 ... 1 Two listeners are configured: a secure listener for general-purpose communications with clients (supporting TLS for communications), and a replication listener for inter-broker communications. 2 For TLS-enabled listeners, the protocol name is SASL_PLAINTEXT. For non-TLS-enabled connectors, the protocol name is SASL_PLAINTEXT. If SSL is not required, you can remove the ssl.* properties. 3 SASL mechanism for Kerberos authentication is GSSAPI . 4 Kerberos authentication for inter-broker communication. 5 The name of the service used for authentication requests is specified to distinguish it from other services that may also be using the same Kerberos configuration. Start the Kafka broker, with JVM parameters to specify the Kerberos login configuration: su - kafka export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties If the broker and ZooKeeper cluster were previously configured and working with a non-Kerberos-based authentication system, it is possible to start the ZooKeeper and broker cluster and check for configuration errors in the logs. After starting the broker and Zookeeper instances, the cluster is now configured for Kerberos authentication. Configure Kafka producer and consumer clients to use Kerberos authentication Configure Kafka producer and consumer clients to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for producer1 and consumer1 . 
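Before configuring the client properties, it can be useful to confirm that the client host can obtain a Kerberos ticket for its principal at all. As an illustrative check only (the keytab path follows the example keytab listing earlier in this procedure; adjust it to the keytab you actually distributed):

kinit -kt /opt/kafka/krb5/kafka-producer1.keytab producer1/[email protected]
klist

If kinit succeeds and klist shows a ticket-granting ticket for the principal, the keytab and krb5.conf configuration are usable by the Kafka client.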
Add the Kerberos configuration to the producer or consumer configuration file. For example: /opt/kafka/config/producer.properties # ... sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ 4 useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/producer1.keytab" \ principal="producer1/[email protected]"; # ... 1 Configuration for Kerberos (GSSAPI) authentication. 2 Kerberos uses the SASL plaintext (username/password) security protocol. 3 The service principal (user) for Kafka that was configured in the Kerberos KDC. 4 Configuration for the JAAS using the same properties defined in jaas.conf . /opt/kafka/config/consumer.properties # ... sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/consumer1.keytab" \ principal="consumer1/[email protected]"; # ... Run the clients to verify that you can send and receive messages from the Kafka brokers. Producer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Consumer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Additional resources Kerberos man pages: krb5.conf(5), kinit(1), klist(1), and kdestroy(1) Example Kerberos server on RHEL set up configuration Example client application to authenticate with a Kafka cluster using Kerberos tickets | [
"zookeeper.connect= node1.example.redhat.com :2181",
"/opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab",
"chown kafka:kafka -R /opt/kafka/krb5",
"Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" 4 principal=\"zookeeper/[email protected]\"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; };",
"requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20",
"su - kafka export EXTRA_ARGS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; };",
"broker.id=0 listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5",
"su - kafka export KAFKA_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \\ 4 useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/producer1.keytab\" principal=\"producer1/[email protected]\";",
"sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/consumer1.keytab\" principal=\"consumer1/[email protected]\";",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_rhel/assembly-kerberos_str |
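A quick sanity check of the client credentials can save debugging time before the console clients above are run. The following sketch is not part of the Red Hat procedure; it reuses the keytab path shown earlier, and the principal and realm names are placeholders that must be replaced with the values created in your own KDC (see the kinit(1) and klist(1) man pages referenced above).
kinit -kt /opt/kafka/krb5/producer1.keytab producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM   # obtain a ticket with the producer keytab
klist       # verify that a ticket for the producer principal is now in the cache
kdestroy    # discard the test ticket; the Kafka client reads the keytab itself through JAAS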
Appendix A. Using the Load Balancer Add-On with the High Availability Add-On | Appendix A. Using the Load Balancer Add-On with the High Availability Add-On You can use the Load Balancer Add-On with the High Availability Add-On to deploy a high-availability e-commerce site that provides load balancing, data integrity, and application availability. The configuration in Figure A.1, "Load Balancer Add-On with a High Availability Add-On" represents an e-commerce site used for online merchandise ordering through a URL. Client requests to the URL pass through the firewall to the active LVS load-balancing router, which then forwards the requests to one of the Web servers. The High Availability Add-On nodes serve dynamic data to the Web servers, which forward the data to the requesting client. Figure A.1. Load Balancer Add-On with a High Availability Add-On Serving dynamic Web content with Load Balancer Add-On requires a three-tier configuration (as shown in Figure A.1, "Load Balancer Add-On with a High Availability Add-On" ). This combination of Load Balancer Add-On and High Availability Add-On allows for the configuration of a high-integrity, no-single-point-of-failure e-commerce site. The High Availability Add-On can run a high-availability instance of a database or a set of databases that are network-accessible to the Web servers. A three-tier configuration is required to provide dynamic content. While a two-tier Load Balancer Add-On configuration is suitable if the Web servers serve only static Web content (consisting of small amounts of infrequently changing data), a two-tier configuration is not suitable if the Web servers serve dynamic content. Dynamic content could include product inventory, purchase orders, or customer databases, which must be consistent on all the Web servers to ensure that customers have access to up-to-date and accurate information. Each tier provides the following functions: First tier - LVS router performing load-balancing to distribute Web requests. Second tier - A set of Web servers to serve the requests. Third tier - A High Availability Add-On to serve data to the Web servers. In a Load Balancer Add-On configuration like the one in Figure A.1, "Load Balancer Add-On with a High Availability Add-On" , client systems issue requests on the World Wide Web. For security reasons, these requests enter a Web site through a firewall, which can be a Linux system serving in that capacity or a dedicated firewall device. For redundancy, you can configure firewall devices in a failover configuration. Behind the firewall is an LVS router that provides load balancing, which can be configured in an active-standby mode. The active load-balancing router forwards the requests to the set of Web servers. Each Web server can independently process an HTTP request from a client and send the response back to the client. The Load Balancer Add-On enables you to expand a Web site's capacity by adding Web servers behind the LVS router; the LVS router performs load balancing across a wider set of Web servers. In addition, if a Web server fails, it can be removed; Load Balancer Add-On continues to perform load balancing across a smaller set of Web servers. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/ch-clumanager-piranha-VSA |
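The appendix above describes the three-tier layout without giving commands. As a rough illustration only, the first tier could be expressed with the standard ipvsadm tool roughly as follows; the virtual IP address, real-server addresses, scheduler, and forwarding method are all hypothetical, and in practice these rules are normally generated and maintained by the Load Balancer Add-On's own configuration tools rather than entered by hand.
ipvsadm -A -t 192.0.2.10:80 -s wlc            # define the virtual HTTP service on the active LVS router
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11 -m   # add the first real Web server (NAT forwarding)
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12 -m   # add the second real Web server
ipvsadm -L -n                                 # list the virtual service and its real servers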
Chapter 4. DataImage [metal3.io/v1alpha1] | Chapter 4. DataImage [metal3.io/v1alpha1] Description DataImage is the Schema for the dataimages API. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DataImageSpec defines the desired state of DataImage. status object DataImageStatus defines the observed state of DataImage. 4.1.1. .spec Description DataImageSpec defines the desired state of DataImage. Type object Required url Property Type Description url string Url is the address of the dataImage that we want to attach to a BareMetalHost 4.1.2. .status Description DataImageStatus defines the observed state of DataImage. Type object Property Type Description attachedImage object Currently attached DataImage error object Error count and message when attaching/detaching lastReconciled string Time of last reconciliation 4.1.3. .status.attachedImage Description Currently attached DataImage Type object Required url Property Type Description url string 4.1.4. .status.error Description Error count and message when attaching/detaching Type object Required count message Property Type Description count integer message string 4.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/dataimages GET : list objects of kind DataImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages DELETE : delete collection of DataImage GET : list objects of kind DataImage POST : create a DataImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages/{name} DELETE : delete a DataImage GET : read the specified DataImage PATCH : partially update the specified DataImage PUT : replace the specified DataImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages/{name}/status GET : read status of the specified DataImage PATCH : partially update status of the specified DataImage PUT : replace status of the specified DataImage 4.2.1. /apis/metal3.io/v1alpha1/dataimages HTTP method GET Description list objects of kind DataImage Table 4.1. HTTP responses HTTP code Reponse body 200 - OK DataImageList schema 401 - Unauthorized Empty 4.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages HTTP method DELETE Description delete collection of DataImage Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DataImage Table 4.3. HTTP responses HTTP code Reponse body 200 - OK DataImageList schema 401 - Unauthorized Empty HTTP method POST Description create a DataImage Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body DataImage schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK DataImage schema 201 - Created DataImage schema 202 - Accepted DataImage schema 401 - Unauthorized Empty 4.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the DataImage HTTP method DELETE Description delete a DataImage Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DataImage Table 4.10. HTTP responses HTTP code Reponse body 200 - OK DataImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DataImage Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. 
HTTP responses HTTP code Reponse body 200 - OK DataImage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DataImage Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body DataImage schema Table 4.15. HTTP responses HTTP code Reponse body 200 - OK DataImage schema 201 - Created DataImage schema 401 - Unauthorized Empty 4.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the DataImage HTTP method GET Description read status of the specified DataImage Table 4.17. HTTP responses HTTP code Reponse body 200 - OK DataImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DataImage Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Reponse body 200 - OK DataImage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DataImage Table 4.20. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body DataImage schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK DataImage schema 201 - Created DataImage schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/provisioning_apis/dataimage-metal3-io-v1alpha1 |
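For illustration, a DataImage object matching the schema above could be created and inspected as follows. This is only a sketch; the resource name, namespace, and image URL are placeholders and are not taken from this reference.
cat <<'EOF' | oc apply -f -
apiVersion: metal3.io/v1alpha1
kind: DataImage
metadata:
  name: worker-0
  namespace: openshift-machine-api
spec:
  url: http://images.example.com/config.iso    # required: address of the dataImage to attach to a BareMetalHost
EOF
oc get dataimages.metal3.io worker-0 -n openshift-machine-api -o yaml   # check status.attachedImage, status.error, and status.lastReconciled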
Chapter 9. Other notable changes | Chapter 9. Other notable changes 9.1. Javascript engine available by default on the classpath In the previous version, when Keycloak was used on Java 17 with Javascript providers (Script authenticator, Javascript authorization policy or Script protocol mappers for OIDC and SAML clients), it was necessary to copy the JavaScript engine to the distribution. This is no longer needed because the Nashorn JavaScript engine is available in the Red Hat build of Keycloak server by default. When you deploy script providers, it is recommended to not copy Nashorn's script engine and its dependencies into the Red Hat build of Keycloak distribution. 9.2. Renamed Keycloak Admin client artifacts After the upgrade to Jakarta EE, artifacts for Keycloak Admin clients were renamed to more descriptive names with consideration for long-term maintainability. However, two separate Keycloak Admin clients still exist: one with Jakarta EE and the other with Java EE support. The org.keycloak:keycloak-admin-client-jakarta artifact is no longer released. The default artifact for the Keycloak Admin client with Jakarta EE support is org.keycloak:keycloak-admin-client (since version 22.0.0). The new artifact with Java EE support is org.keycloak:keycloak-admin-client-jee . 9.2.1. Jakarta EE support Before migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jakarta</artifactId> <version>18.0.0.redhat-00001</version> </dependency> After migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>22.0.0.redhat-00001</version> </dependency> 9.2.2. Java EE support Before migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>18.0.0.redhat-00001</version> </dependency> After migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jee</artifactId> <version>22.0.0.redhat-00001</version> </dependency> 9.3. Never expires option removed from client advanced settings combos The option Never expires is now removed from all the combos of the Advanced Settings client tab. This option was misleading because the different lifespans or idle timeouts were never infinite, but limited by the general user session or realm values. Therefore, this option is removed in favor of the other two remaining options: Inherits from the realm settings (the client uses general realm timeouts) and Expires in (the value is overridden for the client). Internally, Never expires was represented by -1 . Now that value is shown with a warning in the Admin Console and cannot be set directly by the administrator. 9.4. New email rules and limits validation Red Hat build of Keycloak has new rules on email creation that allow ASCII characters in the email. Also, a new limit of 64 characters now exists on the local email part (the part before the @). To preserve backwards compatibility, a new parameter, --spi-user-profile-declarative-user-profile-max-email-local-part-length , is added to set the maximum email local part length. The default value is 64. kc.sh start --spi-user-profile-declarative-user-profile-max-email-local-part-length=100 | [
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jakarta</artifactId> <version>18.0.0.redhat-00001</version> </dependency>",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>22.0.0.redhat-00001</version> </dependency>",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>18.0.0.redhat-00001</version> </dependency>",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jee</artifactId> <version>22.0.0.redhat-00001</version> </dependency>",
"kc.sh start --spi-user-profile-declarative-user-profile-max-email-local-part-length=100"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/migration_guide/other-changes |
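After updating the dependency as shown in the examples above, it can be useful to confirm which admin client artifact the build actually resolves. The following check is only a suggestion; it assumes a Maven project and that the Maven Dependency Plugin is available.
mvn -q dependency:list | grep keycloak-admin-client
# expect org.keycloak:keycloak-admin-client:jar:22.0.0.redhat-00001 for Jakarta EE support,
# or org.keycloak:keycloak-admin-client-jee for Java EE support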
Chapter 3. The LLDB debugger | Chapter 3. The LLDB debugger The LLDB debugger is a command-line tool for debugging C and C++ programs. Use LLDB to inspect memory within the code being debugged, control the execution state of the code, and detect the execution of particular sections of code. LLVM Toolset is distributed with LLDB {comp-ver-rhel-7}. 3.1. Prerequisites LLVM Toolset is installed. For more information, see Installing LLVM Toolset . Your compiler is configured to create debug information. For instructions on configuring the Clang compiler, see Controlling Debug Information in the Clang Compiler User's Manual. For instructions on configuring the GCC compiler, see Preparing a Program for Debugging in the Red Hat Developer Toolset User Guide. 3.2. Starting a debugging session Use LLDB to start an interactive debugging session. Procedure To run LLDB on a program you want to debug, use the following command: On Red Hat Enterprise Linux 8: Replace < binary_file > with the name of your compiled program. You have started your LLDB debugging session in interactive mode. Your command-line terminal now displays the default prompt (lldb) . On Red Hat Enterprise Linux 9: Replace < binary_file > with the name of your compiled program. You have started your LLDB debugging session in interactive mode. Your command-line terminal now displays the default prompt (lldb) . To quit the debugging session and return to the shell prompt, run the following command: 3.3. Executing your program during a debugging session Use LLDB to execute your program during your debugging session. The execution of your program stops when the first breakpoint is reached, when an error occurs, or when the program terminates. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . Procedure To execute the program you are debugging, run: To execute the program you are debugging using a specific argument, run: Replace < argument > with the command-line argument you want to use. 3.4. Using breakpoints Use breakpoints to pause the execution of your program at a set point in your source code. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . Procedure To set a new breakpoint on a specific line, run the following command: Replace < source_file_name > with the name of your source file and < line_number > with the line number you want to set your breakpoint at. To set a breakpoint on a specific function, run the following command: Replace < function_name > with the name of the function you want to set your breakpoint at. To display a list of currently set breakpoints, run the following command: To delete a breakpoint, run: Replace < source_file_name > with the name of your source file and < line_number > with line number of the breakpoint you want to delete. To resume the execution of your program after it reached a breakpoint, run: To skip a specific number of breakpoints, run the following command: Replace < breakpoints_to_skip > with the number of breakpoints you want to skip. Note To skip a loop, set the < breakpoints_to_skip > to match the loop iteration count. 3.5. Stepping through code You can use LLDB to step through the code of your program to execute only one line of code after the line pointer. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . 
Procedure To step through one line of code: Set your line pointer to the line you want to execute. Run the following command: To step through a specific number of lines of code: Set your line pointer to the line you want to execute. Run the following command: Replace < number > with the number of lines you want to execute. 3.6. Listing source code Before you execute the program you are debugging, the LLDB debugger automatically displays the first 10 lines of source code. Each time the execution of the program is stopped, LLDB displays the line of source code on which it stopped as well as its surrounding lines. You can use LLDB to manually trigger the display of source code during your debugging session. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . Procedure To list the first 10 lines of the source code of the program you are debugging, run: To display the source code from a specific line, run: Replace < source_file_name > with the name of your source file and < line_number > with the number of the line you want to display. 3.7. Displaying current program data The LLDB debugger provides data on variables of any complexity, any valid expressions, and function call return values. You can use LLDB to display data relevant to the program state. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . Procedure To display the current value of a certain variable, expression, or return value, run: Replace < data_name > with data you want to display. 3.8. Additional resources For more information on the LLDB debugger, see the official LLDB documentation LLDB Tutorial . For a list of GDB commands and their LLDB equivalents, see the GDB to LLDB Command Map . | [
"lldb < binary_file_name >",
"lldb < binary_file >",
"(lldb) quit",
"(lldb) run",
"(lldb) run < argument >",
"(lldb) breakpoint set --file < source_file_name> --line < line_number >",
"(lldb) breakpoint set --name < function_name >",
"(lldb) breakpoint list",
"(lldb) breakpoint clear -f < source_file_name > -l < line_number >",
"(lldb) continue",
"(lldb) continue -i < breakpoints_to_skip >",
"(lldb) step",
"(lldb) step -c < number >",
"(lldb) list",
"(lldb) list < source_file_name >:< line_number >",
"(lldb) print < data_name >"
] | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_18.1.8_toolset/assembly_the-lldb-debugger_using-llvm-toolset |
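The individual commands described in this chapter can be combined into one short session. The program name, source file, line number, and variable below are placeholders used only for illustration.
lldb ./sample                                   # start an interactive session on the compiled program
(lldb) breakpoint set --file sample.c --line 10 # pause execution at line 10 of sample.c
(lldb) run                                      # execute until the breakpoint is reached
(lldb) print counter                            # display the current value of a variable
(lldb) step -c 3                                # step through the next three lines of code
(lldb) continue                                 # resume execution
(lldb) quit                                     # end the debugging session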
Chapter 11. Viewing threads | Chapter 11. Viewing threads You can view and monitor the state of threads. Procedure Click the Runtime tab and then the Threads subtab. The Threads page lists active threads and stack trace details for each thread. By default, the thread list shows all threads in descending ID order. To sort the list by increasing ID, click the ID column label. Optionally, filter the list by thread state (for example, Blocked ) or by thread name. To drill down to detailed information for a specific thread, such as the lock class name and full stack trace for that thread, in the Actions column, click More . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_springboot_standalone/fuse-console-view-threads-all_springboot |
Chapter 6. Configuring PID limits | Chapter 6. Configuring PID limits A process identifier (PID) is a unique identifier assigned by the Linux kernel to each process or thread currently running on a system. The number of processes that can run simultaneously on a system is limited to 4,194,304 by the Linux kernel. This number might also be affected by limited access to other system resources such as memory, CPU, and disk space. In Red Hat OpenShift Service on AWS 4.11 and later, by default, a pod can have a maximum of 4,096 PIDs. If your workload requires more than that, you can increase the allowed maximum number of PIDs by configuring a KubeletConfig object. Red Hat OpenShift Service on AWS clusters running versions earlier than 4.11 use a default PID limit of 1024 . 6.1. Understanding process ID limits In Red Hat OpenShift Service on AWS, consider these two supported limits for process ID (PID) usage before you schedule work on your cluster: Maximum number of PIDs per pod. The default value is 4,096 in Red Hat OpenShift Service on AWS 4.11 and later. This value is controlled by the podPidsLimit parameter set on the node. Maximum number of PIDs per node. The default value depends on node resources . In Red Hat OpenShift Service on AWS, this value is controlled by the --system-reserved parameter, which reserves PIDs on each node based on the total resources of the node. When a pod exceeds the allowed maximum number of PIDs per pod, the pod might stop functioning correctly and might be evicted from the node. See the Kubernetes documentation for eviction signals and thresholds for more information. When a node exceeds the allowed maximum number of PIDs per node, the node can become unstable because new processes cannot have PIDs assigned. If existing processes cannot complete without creating additional processes, the entire node can become unusable and require reboot. This situation can result in data loss, depending on the processes and applications being run. Customer administrators and Red Hat Site Reliability Engineering are notified when this threshold is reached, and a Worker node is experiencing PIDPressure warning will appear in the cluster logs. 6.2. Risks of setting higher process ID limits for Red Hat OpenShift Service on AWS pods The podPidsLimit parameter for a pod controls the maximum number of processes and threads that can run simultaneously in that pod. You can increase the value for podPidsLimit from the default of 4,096 to a maximum of 16,384. Changing this value might incur downtime for applications, because changing the podPidsLimit requires rebooting the affected node. If you are running a large number of pods per node, and you have a high podPidsLimit value on your nodes, you risk exceeding the PID maximum for the node. To find the maximum number of pods that you can run simultaneously on a single node without exceeding the PID maximum for the node, divide 3,650,000 by your podPidsLimit value. For example, if your podPidsLimit value is 16,384, and you expect the pods to use close to that number of process IDs, you can safely run 222 pods on a single node. Note Memory, CPU, and available storage can also limit the maximum number of pods that can run simultaneously, even when the podPidsLimit value is set appropriately. For more information, see "Planning your environment" and "Limits and scalability". Additional resources Instance types Planning your environment Limits and scalability 6.3. 
Setting a higher process ID limit on an existing Red Hat OpenShift Service on AWS cluster You can set a higher podPidsLimit on an existing Red Hat OpenShift Service on AWS (ROSA) cluster by creating or editing a KubeletConfig object that changes the --pod-pids-limit parameter. Important Changing the podPidsLimit on an existing cluster will trigger non-control plane nodes in the cluster to reboot one at a time. Make this change outside of peak usage hours for your cluster and avoid upgrading or hibernating your cluster until all nodes have rebooted. Prerequisites You have a Red Hat OpenShift Service on AWS cluster. You have installed the ROSA CLI ( rosa ). You have installed the OpenShift CLI ( oc ). You have logged in to your Red Hat account by using the ROSA CLI. Procedure Create or edit the KubeletConfig object to change the PID limit. If this is the first time you are changing the default PID limit, create the KubeletConfig object and set the --pod-pids-limit value by running the following command: USD rosa create kubeletconfig -c <cluster_name> --name <kubeletconfig_name> --pod-pids-limit=<value> Note The --name parameter is optional on ROSA Classic clusters, because only one KubeletConfig object is supported per ROSA Classic cluster. For example, the following command sets a maximum of 16,384 PIDs per pod for cluster my-cluster : USD rosa create kubeletconfig -c my-cluster --name set-high-pids --pod-pids-limit=16384 If you previously created a KubeletConfig object, edit the existing KubeletConfig object and set the --pod-pids-limit value by running the following command: USD rosa edit kubeletconfig -c <cluster_name> --name <kubeletconfig_name> --pod-pids-limit=<value> A cluster-wide rolling reboot of worker nodes is triggered. Verify that all of the worker nodes rebooted by running the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... True False False 4 4 4 0 4h42m Verification When each node in the cluster has rebooted, you can verify that the new setting is in place. Check the Pod Pids limit in the KubeletConfig object: USD rosa describe kubeletconfig --cluster=<cluster_name> The new PIDs limit appears in the output, as shown in the following example: Example output Pod Pids Limit: 16384 6.4. Removing custom configuration from a cluster You can remove custom configuration from your cluster by removing the KubeletConfig object that contains the configuration details. Prerequisites You have an existing Red Hat OpenShift Service on AWS cluster. You have installed the ROSA CLI (rosa). You have logged in to your Red Hat account by using the ROSA CLI. Procedure Remove custom configuration from the cluster by deleting the relevant custom KubeletConfig object: USD rosa delete kubeletconfig --cluster <cluster_name> --name <kubeletconfig_name> Verification steps Confirm that the custom KubeletConfig object is not listed for the cluster: USD rosa describe kubeletconfig --name <cluster_name> | [
"rosa create kubeletconfig -c <cluster_name> --name <kubeletconfig_name> --pod-pids-limit=<value>",
"rosa create kubeletconfig -c my-cluster --name set-high-pids --pod-pids-limit=16384",
"rosa edit kubeletconfig -c <cluster_name> --name <kubeletconfig_name> --pod-pids-limit=<value>",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... True False False 4 4 4 0 4h42m",
"rosa describe kubeletconfig --cluster=<cluster_name>",
"Pod Pids Limit: 16384",
"rosa delete kubeletconfig --cluster <cluster_name> --name <kubeletconfig_name>",
"rosa describe kubeletconfig --name <cluster_name>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/cluster_administration/rosa-configuring-pid-limits |
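Besides rosa describe kubeletconfig, one way to spot-check the value the kubelet is actually applying is to read a node's configz endpoint through the API server proxy. This is only an optional cross-check; it assumes sufficient (cluster-admin level) access, that jq is installed, and that the worker nodes carry the default node-role label.
NODE=$(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}')
oc get --raw "/api/v1/nodes/${NODE}/proxy/configz" | jq '.kubeletconfig.podPidsLimit'
# with the example above, the expected output is 16384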
Chapter 27. Deployment and Tools | Chapter 27. Deployment and Tools systemd component, BZ# 978955 When attempting to start, stop, or restart a service or unit using the systemctl [start|stop|restart] NAME command, no message is displayed to inform the user whether the action has been successful. systemd component, BZ#968401 The /etc/rc.d/rc.local file does not have executable permissions in Red Hat Enterprise Linux 7. If commands are added to the /etc/rc.d/rc.local file, the file has to be made executable afterwards. By default, /etc/rc.d/rc.local does not have executable permissions because if these permissions are detected, the system has to wait until the network is "up" before the boot process can be finished. flightrecorder component, BZ#1049701 The flightrecorder package is currently not included in Red Hat Enterprise Linux 7. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/known-issues-deployment_and_tools |
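For the rc.local note above, restoring the executable permission is a single command, shown here only as a convenience.
chmod +x /etc/rc.d/rc.local   # make the file executable after adding commands to it
ls -l /etc/rc.d/rc.local      # confirm the execute bits are set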
1.4.2. Additional Differences Between GFS and GFS2 | 1.4.2. Additional Differences Between GFS and GFS2 This section summarizes the additional differences in GFS and GFS2 administration that are not described in Section 1.4.1, "GFS2 Command Names" . Context-Dependent Path Names GFS2 file systems do not provide support for context-dependent path names, which allow you to create symbolic links that point to variable destination files or directories. For this functionality in GFS2, you can use the bind option of the mount command. For information on bind mounts and context-dependent pathnames in GFS2, see Section 4.12, "Bind Mounts and Context-Dependent Path Names" . gfs2.ko Module The kernel module that implements the GFS file system is gfs.ko . The kernel module that implements the GFS2 file system is gfs2.ko . Enabling Quota Enforcement in GFS2 In GFS2 file systems, quota enforcement is disabled by default and must be explicitly enabled. For information on enabling and disabling quota enforcement, see Section 4.5, "GFS2 Quota Management" . Data Journaling GFS2 file systems support the use of the chattr command to set and clear the j flag on a file or directory. Setting the +j flag on a file enables data journaling on that file. Setting the +j flag on a directory means "inherit jdata", which indicates that all files and directories subsequently created in that directory are journaled. Using the chattr command is the preferred way to enable and disable data journaling on a file. Adding Journals Dynamically In GFS file systems, journals are embedded metadata that exists outside of the file system, making it necessary to extend the size of the logical volume that contains the file system before adding journals. In GFS2 file systems, journals are plain (though hidden) files. This means that for GFS2 file systems, journals can be dynamically added as additional servers mount a file system, as long as space remains on the file system for the additional journals. For information on adding journals to a GFS2 file system, see Section 4.7, "Adding Journals to a File System" . atime_quantum parameter removed The GFS2 file system does not support the atime_quantum tunable parameter, which can be used by the GFS file system to specify how often atime updates occur. In its place GFS2 supports the relatime and noatime mount options. The relatime mount option is recommended to achieve similar behavior to setting the atime_quantum parameter in GFS. The data= option of the mount command When mounting GFS2 file systems, you can specify the data=ordered or data=writeback option of the mount . When data=ordered is set, the user data modified by a transaction is flushed to the disk before the transaction is committed to disk. This should prevent the user from seeing uninitialized blocks in a file after a crash. When data=writeback is set, the user data is written to the disk at any time after it is dirtied. This does not provide the same consistency guarantee as ordered mode, but it should be slightly faster for some workloads. The default is ordered mode. The gfs2_tool command The gfs2_tool command supports a different set of options for GFS2 than the gfs_tool command supports for GFS: The gfs2_tool command supports a journals parameter that prints out information about the currently configured journals, including how many journals the file system contains. The gfs2_tool command does not support the counters flag, which the gfs_tool command uses to display GFS statistics. 
The gfs2_tool command does not support the inherit_jdata flag. To flag a directory as "inherit jdata", you can set the jdata flag on the directory or you can use the chattr command to set the +j flag on the directory. Using the chattr command is the preferred way to enable and disable data journaling on a file. Note As of the Red Hat Enterprise Linux 6.2 release, GFS2 supports the tunegfs2 command, which replaces some of the features of the gfs2_tool command. For further information, refer to the tunegfs2 (8) man page. The settune and gettune functions of the gfs2_tool command have been replaced by command line options of the mount command, which allows them to be set by means of the fstab file when required. The gfs2_edit command The gfs2_edit command supports a different set of options for GFS2 than the gfs_edit command supports for GFS. For information on the specific options each version of the command supports, see the gfs2_edit and gfs_edit man pages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/additional-diffs-gfs2 |
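As a brief illustration of the data journaling behavior described above, the j flag can be set and checked with chattr and lsattr; the mount point and file names are placeholders.
chattr +j /mnt/gfs2/logs                       # directory: files and directories created here afterwards inherit jdata (are journaled)
touch /mnt/gfs2/orders.db                      # create a new, empty file
chattr +j /mnt/gfs2/orders.db                  # enable data journaling on that file
lsattr -d /mnt/gfs2/logs /mnt/gfs2/orders.db   # confirm that the j flag is set on both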
14.4. Configuration Files for the Tomcat Engine and Web Services | 14.4. Configuration Files for the Tomcat Engine and Web Services All of the user and administrative (administrators, agents, and auditors) services for the subsystems are accessed over web protocols. This section discusses the two major sets of configuration files that apply to all Red Hat Certificate System subsystems (CA, KRA, OCSP, TKS, and TPS): /var/lib/pki/ instance_name /conf/server.xml provides the configuration for the Tomcat engine. /usr/share/pki/ subsystem_type /webapps/WEB-INF/web.xml provides the configuration for the web services offered by this instance. 14.4.1. Tomcatjss Note The later subsections include important configuration information on required changes to parameter values. Ensure they are followed for strict compliance. The following configuration in the server.xml file found in the example pki-tomcat/conf directory can be used to explain how Tomcatjss fits into the entire Certificate System ecosystem. Portions of the Connector entry for the secret port are shown below. In the server.xml configuration file for the Tomcat engine, there is this Connector configuration element that contains the pointer to the tomcatjss implementation, which can be plugged into the sslImplementation property of this Connector object. Each key parameter element is explained in the subsections below. 14.4.1.1. TLS Cipher Configuration The TLS ciphers configured in the server.xml file provide system-wide defaults when Red Hat Certificate system is acting as a client and as a server. This includes when acting as a server (for example, when serving HTTPS connections from Tomcat) and as a client (for example, when communicating with the LDAP server or when communicating with another Certificate System instance). The configuration for server TLS ciphers is in the Red Hat Certificate System instance-specific /var/lib/pki/ instance_name /conf/server.xml file. The following parameters control the ciphers offered: strictCiphers , when set to true , ensures that only ciphers with a + sign in the sslRangeCiphers are enabled. Do not change the default value ( true ). sslVersionRangeStream and sslVersionRangeDatagram sets the TLS version the server supports. The following are the defaults of the parameters: Do not change the default value of the parameters. sslRangeCiphers sets which ciphers are enabled and disabled. Ciphers with a + sign are enabled, ciphers with a - sign disabled. Set RSA ciphers as below: Set EC ciphers as below: For a list of allowed ciphers, see Section 3.1, "TLS, ECC, and RSA" . If you install Certificate System with either LunaSA or nCipher Hardware Security Module (HSM) on systems with FIPS mode enabled for RSA, disable the following ciphers, as they are unsupported on HSMs in FIPS mode: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 14.4.1.1.1. Client TLS cipher Configuration The Red Hat Certificate System also allows for cipher configuration on a system when it is acting as a client to another CS system. Ciphers in a list are separated by commas. On CA (for the communication of the CA with the KRA): For example: On TPS (for communication with the CA, the KRA and the TKS): For example: 14.4.1.2. 
Enabling Automatic Revocation Checking on the CA The CA can be configured to check the revocation status of any certificate (including agent, administrator, and enrollment) the server receives during authentication of an SSL/TLS client. This means that when the CA receives any client authentication request, it automatically checks an OCSP. (For information about setting up an OCSP responder, see Using the Online Certificate Status Protocol (OCSP) Responder in the Red Hat Certificate System Administration Guide .) As part of revocation checking, the CA has the ability to cache client authentication so that it keeps a list of verified certificates. This allows the CA to check its cached results before checking its internal database or an OCSP, which improves the overall operation performance. Automatic revocation checking is enabled in the revocationChecking.enabled parameter. The revocation status results are only valid for a certain, specified period of time ( revocationChecking.validityInterval ). If the CA has no way to re-verify a certificate status that is in cache, then there is a grace interval ( revocationChecking.unknownStateInterval ) where the previously-cached status is still considered valid, even if it is outside the validity interval. Note The cached certificates are kept in a buffer ( revocationChecking.bufferSize ). If the buffer setting is missing or is set to zero, then no buffer is kept, which means that the results of revocation checks are not cached. In that case, all revocation checks are performed directly against the CA's internal database. Note The subsystem CS.cfg configuration file includes a parameter, jss.ocspcheck.enable , which sets whether a Certificate Manager should use an OCSP to verify the revocation status of the certificate it receives as a part of SSL/TLS client or server authentication. Changing the value of this parameter to true means the Certificate Manager reads the Authority Information Access extension in the certificate and verifies the revocation status of the certificate from the OCSP responder specified in the extension. Stop the subsystem instance. Open the CS.cfg file. Edit the revocation-related parameters. revocationChecking.ca . Sets which service is providing the OCSP response, a CA or an OCSP responder. revocationChecking.enabled . Sets revocation checking. true enables checking; false disables checking. By default, the feature is enabled. revocationChecking.bufferSize . Sets the total number of last-checked certificates the server should maintain in its cache. For example, if the buffer size is 2, the server retains the last two certificates checked in its cache. By default, the server caches the last 50 certificates. revocationChecking.unknownStateInterval . Sets the grace period during which a previously cached status is still assumed to be valid when the CA has no means (no access to the information required) to re-verify the certificate status. The default interval is 0 seconds. revocationChecking.validityInterval . Sets how long the cached certificates are considered valid. Be judicious when choosing the interval. For example, if the validity period is 60 seconds, the server discards the certificates in its cache every minute and attempts to retrieve them from their source. The Certificate Manager uses its internal database to retrieve and verify the revocation status of the certificates. The default validity period is 120 seconds (2 minutes). Start the Certificate System instance.
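Before restarting the instance, the edited values can be spot-checked directly in the configuration file. This is only an optional verification step; the instance name in the path is a placeholder, as in the examples above.
grep '^auths\.revocationChecking' /var/lib/pki/instance_name/ca/conf/CS.cfg
# auths.revocationChecking.bufferSize=50
# auths.revocationChecking.ca=ca
# auths.revocationChecking.enabled=true
# auths.revocationChecking.unknownStateInterval=0
# auths.revocationChecking.validityInterval=120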
14.4.1.3. Enabling Certificate Revocation Checking for Subsystems The Certificate System subsystems do not have OCSP checking enabled, by default, to validate subsystem certificates. This means it is possible for someone to log into the administrative or agent interfaces with a revoked certificate. OCSP checking can be enabled for all subsystems by editing the server.xml file. The agent interface and the admin interface are configured separately, so both sections in the configuration should be edited. Note If the subsystem has been configured to use an SSL/TLS connection with its internal database, then the SSL/TLS server certificate of the LDAP internal database must be recognized by the OCSP responder. If the OCSP responder does not recognize the LDAP server certificate, then the subsystem will not start properly. This configuration is covered in the Red Hat Certificate System 10 Planning, Installation, and Deployment Guide , since subsystem-LDAP SSL/TLS server connections are configured as part of the subsystem setup. Get the name of the OCSP signing certificate for the OCSP or CA which will be used to check certificate status. For example: Open the server.xml file for the subsystem. For example: If the OCSP signing certificate is not present in the instance's security database, import it: There are three critical parameters to enable OCSP checking: enableOCSP , which must be set to true to enable OCSP checking. This is a global setting; if it is set for one interface, then it applies to every interface for the instance. However, it must be set on the first interface listed in the server.xml file, which is usually the agent interface. Any setting on another interface is ignored. ocspResponderURL , which gives the URL of the OCSP responder to send the OCSP requests. For an OCSP Manager, this can be another OCSP service in another OCSP or in a CA. For other subsystems, this always points to an external OCSP service in an OCSP or a CA. ocspResponderCertNickname , which gives the signing certificate to use to sign the response; for a CA OCSP service, this is the CA's OCSP signing certificate, and for an OCSP responder, it is an OCSP signing certificate. Other parameters can be used to define the OCSP communication. All of the OCSP checking parameters are listed in Table 14.10, "OCSP Parameters for server.xml" . There are two different sections in the file for the agent and administrator interfaces. The OCSP parameters need to be added to both sections to enable and configure OCSP checking. For example: Example 14.3. OCSP Settings for an Agent Interface If the given OCSP service is not the CA, then the OCSP service's signing certificate must be imported into the subsystem's NSS database. This can be done in the console or using certutil ; both options are covered in Installing Certificates in the Certificate System Database in the Red Hat Certificate System Administration Guide . Restart the subsystem. Table 14.10. OCSP Parameters for server.xml Parameter Description enableOCSP Enables (or disables) OCSP checking for the subsystem. ocspResponderURL Sets the URL where the OCSP requests are sent. For an OCSP Manager, this can be another OCSP service in another OCSP or in a CA. For a TKS or KRA, this always points to an external OCSP service in an OCSP or a CA. ocspResponderCertNickname Sets the nickname of the signing certificate for the responder, either the OCSP signing certificate or the CA's OCSP signing certificate. 
The certificate must be imported into the subsystem's NSS database and have the appropriate trust settings set. ocspCacheSize Sets the maximum number of cache entries. ocspMinCacheEntryDuration Sets minimum seconds before another fetch attempt can be made. For example, if this is set to 120, then the validity of a certificate cannot be checked again until at least 2 minutes after the last validity check. ocspMaxCacheEntryDuration Sets the maximum number of seconds to wait before making the fetch attempt. This prevents having too large a window between validity checks. ocspTimeout Sets the timeout period, in seconds, for the OCSP request. 14.4.1.4. Adding an AIA Extension to an Enrollment Profile To set the AIA URL in the profile when using an external OCSP, add the correct URL to the certificate profile. For example: 14.4.2. Session Timeout When a user connects to PKI server through a client application, the server will create a session to keep track of the user. As long as the user remains active, the user can execute multiple operations over the same session without having to re-authenticate. Session timeout determines how long the server will wait since the last operation before terminating the session due to inactivity. Once the session is terminated, the user will be required to re-authenticate to continue accessing the server, and the server will create a new session. There are two types of timeouts: TLS session timeout HTTP session timeout Due to differences in the way clients work, the clients will be affected differently by these timeouts. Note Certain clients have their own timeout configuration. For example, Firefox has a keep-alive timeout setting. For details, see http://kb.mozillazine.org/Network.http.keep-alive.timeout . If the value is different from the server's setting for TLS Session Timeout or HTTP Session Timeout, different behavior can be observed. 14.4.2.1. TLS Session Timeout A TLS session is a secure communication channel over a TLS connection established through TLS handshake protocol. PKI server generates audit events for TLS session activities. The server generates an ACCESS_SESSION_ESTABLISH audit event with Outcome=Success when the connection is created. If the connection fails to be created, the server will generate an ACCESS_SESSION_ESTABLISH audit event with Outcome=Failure . When the connection is closed, the server will generate an ACCESS_SESSION_TERMINATED audit event. TLS session timeout (that is TLS connection timeout) is configured in the keepAliveTimeout parameter in the Secure <Connector> element in the /etc/pki/<instance>/server.xml file: By default the timeout value is set to 300000 milliseconds (that is 5 minutes). To change this value, edit the /etc/pki/<instance>/server.xml file and then restart the server. Note Note that this value will affect all TLS connections to the server. A large value may improve the efficiency of the clients since they can reuse existing connections that have not expired. However, it may also increase the number of connections that the server has to support simultaneously since it takes longer for abandoned connections to expire. 14.4.2.2. HTTP Session Timeout An HTTP session is a mechanism to track a user across multiple HTTP requests using HTTP cookies. PKI server does not generate audit events for the HTTP sessions. Note For the purpose of auditing consistency, set the <session-timeout> value in this section to match the keepAliveTimeout value in Section 14.4.2.1, "TLS Session Timeout" . 
For example, if keepAliveTimeout was set to 300000 (5 minutes), then set <session-timeout> to 5 so that both timeouts expire after the same period. The HTTP session timeout can be configured in the <session-timeout> element in the /etc/pki/<instance>/web.xml file: By default the timeout value is set to 30 minutes. To change the value, edit the /etc/pki/<instance>/web.xml file and then restart the server. Note Note that this value affects all sessions in all web applications on the server. A large value may improve the experience of the users since they will not be required to re-authenticate or view the access banner again so often. However, it may also increase the security risk since it takes longer for abandoned HTTP sessions to expire. 14.4.2.3. Session Timeout for PKI Web UI PKI Web UI is an interactive web-based client that runs in a browser. Currently it only supports client certificate authentication. When the Web UI is opened, the browser may create multiple TLS connections to a server. These connections are associated with a single HTTP session. To configure a timeout for the Web UI, see Section 14.4.2.2, "HTTP Session Timeout" . The TLS session timeout is normally irrelevant since the browser caches the client certificate so it can recreate the TLS session automatically. When the HTTP session expires, the Web UI does not provide any immediate indication. However, the Web UI will display an access banner (if enabled) before a user executes an operation. 14.4.2.4. Session Timeout for PKI Console PKI Console is an interactive standalone graphical UI client. It supports username/password and client certificate authentication. When the console is started, it will create a single TLS connection to the server. The console will display an access banner (if enabled) before opening the graphical interface. Unlike the Web UI, the console does not maintain an HTTP session with the server. To configure a timeout for the console, see Section 14.4.2.1, "TLS Session Timeout" . The HTTP session timeout is irrelevant since the console does not use an HTTP session. When the TLS session expires, the TLS connection will close, and the console will exit immediately. If the user wants to continue, the user will need to restart the console. 14.4.2.5. Session Timeout for PKI CLI PKI CLI is a command-line client that executes a series of operations. It supports username/password and client certificate authentication. When the CLI is started, it will create a single TLS connection to the server and an HTTP session. The CLI will display an access banner (if enabled) before executing operations. Both timeouts are generally irrelevant to PKI CLI since the operations are executed in sequence without delay and the CLI exits immediately upon completion. However, if the CLI waits for user input, is slow, or becomes unresponsive, the TLS session or the HTTP session may expire and the remaining operations fail. If such a delay is expected, see Section 14.4.2.1, "TLS Session Timeout" and Section 14.4.2.2, "HTTP Session Timeout" to accommodate the expected delay. | [
"<Connector name=\"Secure\" Info about the socket itself port=\"8443\" protocol=\"org.apache.coyote.http11.Http11Protocol\" SSLEnabled=\"true\" sslProtocol=\"SSL\" scheme=\"https\" secure=\"true\" connectionTimeout=\"80000\" maxHttpHeaderSize=\"8192\" acceptCount=\"100\" maxThreads=\"150\" minSpareThreads=\"25\" enableLookups=\"false\" disableUploadTimeout=\"true\" Points to our tomcat jss implementation sslImplementationName=\"org.apache.tomcat.util.net.jss.JSSImplementation\" OCSP responder configuration can be enabled here enableOCSP=\"true\" ocspCacheSize=\"1000\" ocspMinCacheEntryDuration=\"60\" ocspMaxCacheEntryDuration=\"120\" ocspTimeout=\"10\" A collection of cipher related settings that make sure connections are secure. strictCiphers=\"true\" The \"clientAuth\" parameter configures the client authentication scheme for this server socket. If you set \"clientAuth\" to \"want\", the client authentication certificate is optional. Alternatively, set the parameter to \"required\" to configure that the certificate is is mandatory. clientAuth=\"want\" sslVersionRangeStream=\"tls1_1:tls1_2\" sslVersionRangeDatagram=\"tls1_1:tls1_2\" sslRangeCiphers=\"+TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,+TLS_DHE_RSA_WITH_AES_128_CBC_SHA,+TLS_DHE_RSA_WITH_AES_256_CBC_SHA,+TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,+TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,+TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,+TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,+TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,+TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,+TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 serverCertNickFile=\"/var/lib/pki/pki-tomcat/conf/serverCertNick.conf\" passwordFile=\"/var/lib/pki/pki-tomcat/conf/password.conf\" passwordClass=\"org.apache.tomcat.util.net.jss.PlainPasswordFile\" certdbDir=\"/var/lib/pki/pki-tomcat/alias\" />",
"strictCiphers=\"true\"",
"sslVersionRangeStream=\"tls1_1:tls1_2\" sslVersionRangeDatagram=\"tls1_1:tls1_2\"",
"sslRangeCiphers=\"+TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,+TLS_DHE_RSA_WITH_AES_128_CBC_SHA,+TLS_DHE_RSA_WITH_AES_256_CBC_SHA,+TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,+TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,+TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,+TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,+TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,+TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,+TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"sslRangeCiphers=\"+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,+TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"ca.connector.KRA.clientCiphers= your selected cipher list",
"ca.connector.KRA.clientCiphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA",
"tps.connector. ca id .clientCiphers= your selected cipher list tps.connector. kra id .clientCiphers= your selected cipher list tps.connector. tks id .clientCiphers= your selected cipher list",
"tps.connector.ca1.clientCiphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA",
"pki-server stop instance_name",
"vim /var/lib/pki/ instance-name /ca/conf/CS.cfg",
"auths.revocationChecking.bufferSize=50 auths.revocationChecking.ca=ca auths.revocationChecking.enabled=true auths.revocationChecking.unknownStateInterval=0 auths.revocationChecking.validityInterval=120",
"pki-server start instance_name",
"certutil -L -d /etc/pki/ instance-name /alias Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Certificate Authority - Example Domain CT,c, ocspSigningCert cert-pki-ocsp CTu,Cu,Cu subsystemCert cert-pki-ocsp u,u,u Server-Cert cert-pki-ocsp u,u,u auditSigningCert cert-pki-ocsp u,u,Pu",
"vim /etc/pki/ instance-name /server.xml",
"certutil -d /etc/pki/ instance-name /alias -A -n \"ocspSigningCert cert-pki-ca\" -t \"C,,\" -a -i ocspCert.b64",
"<Connector name=\"Agent\" port=\"8443\" maxHttpHeaderSize=\"8192\" maxThreads=\"150\" minSpareThreads=\"25\" maxSpareThreads=\"75\" enableLookups=\"false\" disableUploadTimeout=\"true\" acceptCount=\"100\" scheme=\"https\" secure=\"true\" clientAuth=\"true\" sslProtocol=\"SSL\" sslOptions=\"ssl2=true,ssl3=true,tls=true\" ssl3Ciphers=\"-SSL3_FORTEZZA_DMS_WITH_NULL_SHA, ...\" tls3Ciphers=\"-SSL3_FORTEZZA_DMS_WITH_NULL_SHA, ...\" SSLImplementation=\"org.apache.tomcat.util.net.jss.JSSImplementation\" enableOCSP=\"true\" ocspResponderURL=\"http://server.example.com:8443/ca/ocsp\" ocspResponderCertNickname=\"ocspSigningCert cert-pki-ca 102409a\" ocspCacheSize=\"1000\" ocspMinCacheEntryDuration=\"60\" ocspMaxCacheEntryDuration=\"120\" ocspTimeout=\"10\" debug=\"true\" serverCertNickFile=\"/etc/pki/ instance-name /serverCertNick.conf\" passwordFile=\"/etc/pki/ instance-name /password.conf\" passwordClass=\"org.apache.tomcat.util.net.jss.PlainPasswordFile\" certdbDir=\"/etc/pki/ instance-name /alias\"/>",
"pki-server restart instance_name",
"policyset.cmcUserCertSet.5.default.params.authInfoAccessADLocation_0= http://example.com:8080 /ocsp/ee/ocsp",
"<Server> <Service> <Connector name=\"Secure\" keepAliveTimeout=\"300000\" /> </Service> </Server>",
"<web-app> <session-config> <session-timeout>30</session-timeout> </session-config> </web-app>"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/web-services-configuration-files |
About OpenShift Pipelines | About OpenShift Pipelines Red Hat OpenShift Pipelines 1.18 Introduction to OpenShift Pipelines Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/about_openshift_pipelines/index |
Chapter 1. Troubleshooting | Chapter 1. Troubleshooting Before using the Troubleshooting guide, you can run the oc adm must-gather command to gather details, logs, and take steps in debugging issues. For more details, see Running the must-gather command to troubleshoot . Additionally, check your role-based access. See Role-based access control for details. 1.1. Documented troubleshooting View the list of troubleshooting topics for Red Hat Advanced Cluster Management for Kubernetes: Installation To view the main documentation for the installing tasks, see Installing and upgrading . Troubleshooting installation status stuck in installing or pending Troubleshooting reinstallation failure Troubleshooting ocm-controller errors after Red Hat Advanced Cluster Management upgrade Backup and restore To view the main documentation for backup and restore, see Backup and restore . Troubleshooting restore status finishes with errors Cluster management To view the main documentation about managing your clusters, see The multicluster engine operator cluster lifecycle overview . Troubleshooting an offline cluster Troubleshooting a managed cluster import failure Troubleshooting cluster with pending import status Troubleshooting imported clusters offline after certificate change Troubleshooting cluster status changing from offline to available Troubleshooting cluster creation on VMware vSphere Troubleshooting cluster in console with pending or failed status Troubleshooting OpenShift Container Platform version 3.11 cluster import failure Troubleshooting Klusterlet with degraded conditions Troubleshooting Object storage channel secret Namespace remains after deleting a cluster Auto-import-secret-exists error when importing a cluster Troubleshooting the cinder Container Storage Interface (CSI) driver for VolSync Troubleshooting cluster curator automatic template failure to deploy multicluster global hub To view the main documentation about the multicluster global hub, see multicluster global hub . Troubleshooting with the must-gather command Troubleshooting by accessing the PostgreSQL database Troubleshooting by using the database dump and restore Application management To view the main documentation about application management, see Managing applications . Troubleshooting application Kubernetes deployment version Troubleshooting local cluster not selected Governance Troubleshooting multiline YAML parsing To view the security guide, see Security overview . Console observability Console observability includes Search, along with header and navigation function. To view the observability guide, see Observability in the console . Troubleshooting grafana Troubleshooting observability Troubleshooting OpenShift monitoring services Troubleshooting metrics-collector Troubleshooting PostgreSQL shared memory error Troubleshooting a block error for Thanos compactor Submariner networking and service discovery This section lists the Submariner troubleshooting procedures that can occur when using Submariner with Red Hat Advanced Cluster Management or multicluster engine operator. For general Submariner troubleshooting information, see Troubleshooting in the Submariner documentation. To view the main documentation for the Submariner networking service and service discovery, see Submariner multicluster networking and service discovery . Troubleshooting Submariner not connecting after installation - general information Troubleshooting Submariner add-on status is degraded 1.2. 
Running the must-gather command to troubleshoot To get started with troubleshooting, learn about the troubleshooting scenarios for users to run the must-gather command to debug the issues, then see the procedures to start using the command. Required access: Cluster administrator 1.2.1. Must-gather scenarios Scenario one: Use the Documented troubleshooting section to see if a solution to your problem is documented. The guide is organized by the major functions of the product. With this scenario, you check the guide to see if your solution is in the documentation. For instance, for trouble with creating a cluster, you might find a solution in the Manage cluster section. Scenario two: If your problem is not documented with steps to resolve, run the must-gather command and use the output to debug the issue. Scenario three: If you cannot debug the issue using your output from the must-gather command, then share your output with Red Hat Support. 1.2.2. Must-gather procedure See the following procedure to start using the must-gather command: Learn about the must-gather command and install the prerequisites that you need at Gathering data about your cluster in the Red Hat OpenShift Container Platform documentation. Log in to your cluster. Add the Red Hat Advanced Cluster Management for Kubernetes image that is used for gathering data and the directory. Run the following command, where you insert the image and the directory for the output: For the usual use case, you should run the must-gather while you are logged into your hub cluster. Note: If you want to check your managed clusters, find the gather-managed.log file that is located in the cluster-scoped-resources directory: Check for managed clusters that are not set True for the JOINED and AVAILABLE column. You can run the must-gather command on those clusters that are not connected with True status. Go to your specified directory to see your output, which is organized in the following levels: Two peer levels: cluster-scoped-resources and namespace resources. Sub-level for each: API group for the custom resource definitions for both cluster-scope and namespace-scoped resources. Next level for each: YAML file sorted by kind . 1.2.3. Must-gather in a disconnected environment Complete the following steps to run the must-gather command in a disconnected environment: In a disconnected environment, mirror the Red Hat operator catalog images into their mirror registry. For more information, see Install in disconnected network environments . Run the following commands to collect all of the information, replacing <2.x> with the supported version for both <acm-must-gather> , for example 2.10 , and <multicluster-engine/must-gather> , for example 2.5 . If you experience issues with one of the currently supported releases, or the product documentation, go to Red Hat Support where you can troubleshoot further, view Knowledgebase articles, connect with the Support Team, or open a case. You must log in with your Red Hat credentials. 1.2.4. Must-gather for a hosted cluster If you experience issues with hosted control plane clusters, you can run the must-gather command to gather information to help you with troubleshooting. 1.2.4.1. About the must-gather command for hosted clusters The command generates output for the managed cluster and the hosted cluster. Data from the multicluster engine operator hub cluster: Cluster-scoped resources: These resources are node definitions of the management cluster.
The hypershift-dump compressed file: This file is useful if you need to share the content with other people. Namespaced resources: These resources include all of the objects from the relevant namespaces, such as config maps, services, events, and logs. Network logs: These logs include the OVN northbound and southbound databases and the status for each one. Hosted clusters: This level of output involves all of the resources inside of the hosted cluster. Data from the hosted cluster: Cluster-scoped resources: These resources include all of the cluster-wide objects, such as nodes and CRDs. Namespaced resources: These resources include all of the objects from the relevant namespaces, such as config maps, services, events, and logs. Although the output does not contain any secret objects from the cluster, it can contain references to the names of the secrets. 1.2.4.2. Prerequisites To gather information by running the must-gather command, you must meet the following prerequisites: You must ensure that the kubeconfig file is loaded and is pointing to the multicluster engine operator hub cluster. You must have cluster-admin access to the multicluster engine operator hub cluster. You must have the name value for the HostedCluster resource and the namespace where the custom resource is deployed. 1.2.4.3. Entering the must-gather command for hosted clusters Enter the following command to collect information about the hosted cluster. In the command, the hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE parameter is optional; if you do not include it, the command runs as though the hosted cluster is in the default namespace, which is clusters . To save the results of the command to a compressed file, include the --dest-dir=NAME parameter, replacing NAME with the name of the directory where you want to save the results: 1.2.4.4. Entering the must-gather command in a disconnected environment Complete the following steps to run the must-gather command in a disconnected environment: In a disconnected environment, mirror the Red Hat operator catalog images into their mirror registry. For more information, see Install in disconnected network environments . Run the following command to extract logs, which reference the image from their mirror registry: 1.2.4.5. Additional resources For more information about troubleshooting hosted control planes, see Troubleshooting hosted control planes in the OpenShift Container Platform documentation. 1.3. Troubleshooting installation status stuck in installing or pending When installing Red Hat Advanced Cluster Management, the MultiClusterHub remains in Installing phase, or multiple pods maintain a Pending status. 1.3.1. Symptom: Stuck in Pending status More than ten minutes passed since you installed MultiClusterHub and one or more components from the status.components field of the MultiClusterHub resource report ProgressDeadlineExceeded . Resource constraints on the cluster might be the issue. Check the pods in the namespace where Multiclusterhub was installed. You might see Pending with a status similar to the following: In this case, the worker nodes resources are not sufficient in the cluster to run the product. 1.3.2. Resolving the problem: Adjust worker node sizing If you have this problem, then your cluster needs to be updated with either larger or more worker nodes. See Sizing your cluster for guidelines on sizing your cluster. 1.4. Troubleshooting reinstallation failure When reinstalling Red Hat Advanced Cluster Management for Kubernetes, the pods do not start. 
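Before you start debugging, it can help to confirm which pods are failing and why. This is only a minimal sketch; the open-cluster-management namespace is an assumption, so substitute the namespace where the product is installed:
# List the pods and their states in the installation namespace (namespace is an assumption)
oc get pods -n open-cluster-management
# Show the events for a pod that is stuck in Pending or CrashLoopBackOff
oc describe pod <pod-name> -n open-cluster-management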
1.4.1. Symptom: Reinstallation failure If your pods do not start after you install Red Hat Advanced Cluster Management, it is likely that Red Hat Advanced Cluster Management was previously installed, and not all of the pieces were removed before you attempted this installation. In this case, the pods do not start after completing the installation process. 1.4.2. Resolving the problem: Reinstallation failure If you have this problem, complete the following steps: Run the uninstallation process to remove the current components by following the steps in Uninstalling . Install the Helm CLI binary version 3.2.0, or later, by following the instructions at Installing Helm . Ensure that your Red Hat OpenShift Container Platform CLI is configured to run oc commands. See Getting started with the OpenShift CLI in the OpenShift Container Platform documentation for more information about how to configure the oc commands. Copy the following script into a file: Replace <namespace> in the script with the name of the namespace where Red Hat Advanced Cluster Management was installed. Ensure that you specify the correct namespace, as the namespace is cleaned out and deleted. Run the script to remove the artifacts from the installation. Run the installation. See Installing while connected online . 1.5. Troubleshooting ocm-controller errors after Red Hat Advanced Cluster Management upgrade After you upgrade from 2.7.x to 2.8.x and then to 2.9.0, the ocm-controller of the multicluster-engine namespace crashes. 1.5.1. Symptom: Troubleshooting ocm-controller errors after Red Hat Advanced Cluster Management upgrade After you attempt to list ManagedClusterSet and ManagedClusterSetBinding custom resource definitions, the following error message appears: Error from server: request to convert CR from an invalid group/version: cluster.open-cluster-management.io/v1beta1 The message indicates that the migration of ManagedClusterSets and ManagedClusterSetBindings custom resource definitions from v1beta1 to v1beta2 failed. 1.5.2. Resolving the problem: Troubleshooting ocm-controller errors after Red Hat Advanced Cluster Management upgrade To resolve this error, you must initiate the API migration manually. Complete the following steps: Revert the cluster-manager to a previous release. Pause the multiclusterengine with the following command: oc annotate mce multiclusterengine pause=true Run the following commands to replace the image of the cluster-manager deployment with a previous version: oc patch deployment cluster-manager -n multicluster-engine -p \ '{"spec":{"template":{"spec":{"containers":[{"name":"registration-operator","image":"registry.redhat.io/multicluster-engine/registration-operator-rhel8@sha256:35999c3a1022d908b6fe30aa9b85878e666392dbbd685e9f3edcb83e3336d19f"}]}}}}' export ORIGIN_REGISTRATION_IMAGE=$(oc get clustermanager cluster-manager -o jsonpath='{.spec.registrationImagePullSpec}') Replace the registration image reference in the ClusterManager resource with a previous version.
Run the following command: oc patch clustermanager cluster-manager --type='json' -p='[{"op": "replace", "path": "/spec/registrationImagePullSpec", "value": "registry.redhat.io/multicluster-engine/registration-rhel8@sha256:a3c22aa4326859d75986bf24322068f0aff2103cccc06e1001faaf79b9390515"}]' Run the following commands to revert the ManagedClusterSets and ManagedClusterSetBindings custom resource definitions to a previous release: oc annotate crds managedclustersets.cluster.open-cluster-management.io operator.open-cluster-management.io/version- oc annotate crds managedclustersetbindings.cluster.open-cluster-management.io operator.open-cluster-management.io/version- Restart the cluster-manager and wait for the custom resource definitions to be recreated. Run the following commands: oc -n multicluster-engine delete pods -l app=cluster-manager oc wait crds managedclustersets.cluster.open-cluster-management.io --for=jsonpath="{.metadata.annotations['operator\.open-cluster-management\.io/version']}"="2.3.3" --timeout=120s oc wait crds managedclustersetbindings.cluster.open-cluster-management.io --for=jsonpath="{.metadata.annotations['operator\.open-cluster-management\.io/version']}"="2.3.3" --timeout=120s Start the storage version migration with the following commands: oc patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' -p='[{"op":"replace", "path":"/spec/resource/version", "value":"v1beta1"}]' oc patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' --subresource status -p='[{"op":"remove", "path":"/status/conditions"}]' oc patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' -p='[{"op":"replace", "path":"/spec/resource/version", "value":"v1beta1"}]' oc patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' --subresource status -p='[{"op":"remove", "path":"/status/conditions"}]' Run the following command to wait for the migration to complete: oc wait storageversionmigration managedclustersets.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s oc wait storageversionmigration managedclustersetbindings.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s Restore the cluster-manager back to Red Hat Advanced Cluster Management 2.11. It might take several minutes. Run the following command: oc annotate mce multiclusterengine pause- oc patch clustermanager cluster-manager --type='json' -p='[{"op": "replace", "path": "/spec/registrationImagePullSpec", "value": "'$ORIGIN_REGISTRATION_IMAGE'"}]' 1.5.2.1. Verification To verify that Red Hat Advanced Cluster Management is recovered, run the following commands: oc get managedclusterset oc get managedclustersetbinding -A After running the commands, the ManagedClusterSets and ManagedClusterSetBindings resources are listed without error messages. 1.6. Troubleshooting an offline cluster There are a few common causes for a cluster showing an offline status. 1.6.1. Symptom: Cluster status is offline After you complete the procedure for creating a cluster, you cannot access it from the Red Hat Advanced Cluster Management console, and it shows a status of offline . 1.6.2. Resolving the problem: Cluster status is offline Determine if the managed cluster is available. You can check this in the Clusters area of the Red Hat Advanced Cluster Management console. If it is not available, try restarting the managed cluster.
If the managed cluster status is still offline, complete the following steps: Run the oc get managedcluster <cluster_name> -o yaml command on the hub cluster. Replace <cluster_name> with the name of your cluster. Find the status.conditions section. Check the messages for type: ManagedClusterConditionAvailable and resolve any problems. 1.7. Troubleshooting a managed cluster import failure If your cluster import fails, there are a few steps that you can take to determine why the cluster import failed. 1.7.1. Symptom: Imported cluster not available After you complete the procedure for importing a cluster, you cannot access it from the Red Hat Advanced Cluster Management for Kubernetes console. 1.7.2. Resolving the problem: Imported cluster not available There can be a few reasons why an imported cluster is not available after an attempt to import it. If the cluster import fails, complete the following steps, until you find the reason for the failed import: On the Red Hat Advanced Cluster Management hub cluster, run the following command to ensure that the Red Hat Advanced Cluster Management import controller is running. You should see two pods that are running. If either of the pods is not running, run the following command to view the log to determine the reason: On the Red Hat Advanced Cluster Management hub cluster, run the following command to determine if the managed cluster import secret was generated successfully by the Red Hat Advanced Cluster Management import controller: If the import secret does not exist, run the following command to view the log entries for the import controller and determine why it was not created: On the Red Hat Advanced Cluster Management hub cluster, if your managed cluster is local-cluster , provisioned by Hive, or has an auto-import secret, run the following command to check the import status of the managed cluster. If the condition ManagedClusterImportSucceeded is not true , the result of the command indicates the reason for the failure. Check the Klusterlet status of the managed cluster for a degraded condition. See Troubleshooting Klusterlet with degraded conditions to find the reason that the Klusterlet is degraded. 1.8. Troubleshooting cluster with pending import status If you receive Pending import continually on the console of your cluster, follow the procedure to troubleshoot the problem. 1.8.1. Symptom: Cluster with pending import status After importing a cluster by using the Red Hat Advanced Cluster Management console, the cluster appears in the console with a status of Pending import . 1.8.2. Identifying the problem: Cluster with pending import status Run the following command on the managed cluster to view the Kubernetes pod names that are having the issue: Run the following command on the managed cluster to find the log entry for the error: Replace registration_agent_pod with the pod name that you identified in step 1. Search the returned results for text that indicates there was a networking connectivity problem. Examples include: no such host . 1.8.3. Resolving the problem: Cluster with pending import status Retrieve the port number that is having the problem by entering the following command on the hub cluster: Ensure that the hostname from the managed cluster can be resolved, and that outbound connectivity to the host and port is occurring. If the communication cannot be established by the managed cluster, the cluster import is not complete. The cluster status for the managed cluster is Pending import .
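A rough way to test that resolution and connectivity from the managed cluster, assuming you have already identified the hub hostname and port from the error message (the hostname and port here are placeholders):
# Confirm that the hub hostname resolves from the managed cluster
nslookup <hub-api-hostname>
# Confirm that the port is reachable; any HTTP response means the route is open
curl -kv https://<hub-api-hostname>:<port>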
1.9. Troubleshooting cluster with already exists error If you are unable to import an OpenShift Container Platform cluster into Red Hat Advanced Cluster Management MultiClusterHub and receive an AlreadyExists error, follow the procedure to troubleshoot the problem. 1.9.1. Symptom: Already exists error log when importing OpenShift Container Platform cluster An error log shows up when importing an OpenShift Container Platform cluster into Red Hat Advanced Cluster Management MultiClusterHub : 1.9.2. Identifying the problem: Already exists when importing OpenShift Container Platform cluster Check if there are any Red Hat Advanced Cluster Management-related resources on the cluster that you want to import to the new Red Hat Advanced Cluster Management MultiClusterHub by running the following commands: 1.9.3. Resolving the problem: Already exists when importing OpenShift Container Platform cluster Remove the klusterlet custom resource by using the following command: oc get klusterlet | grep klusterlet | awk '{print $1}' | xargs oc patch klusterlet --type=merge -p '{"metadata":{"finalizers": []}}' Run the following commands to remove pre-existing resources: 1.10. Troubleshooting cluster creation on VMware vSphere If you experience a problem when creating a Red Hat OpenShift Container Platform cluster on VMware vSphere, see the following troubleshooting information to see if one of them addresses your problem. Note: Sometimes when the cluster creation process fails on VMware vSphere, the link is not enabled for you to view the logs. If this happens, you can identify the problem by viewing the log of the hive-controllers pod. The hive-controllers log is in the hive namespace. 1.10.1. Managed cluster creation fails with certificate IP SAN error 1.10.1.1. Symptom: Managed cluster creation fails with certificate IP SAN error After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails with an error message that indicates a certificate IP SAN error. 1.10.1.2. Identifying the problem: Managed cluster creation fails with certificate IP SAN error The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.10.1.3. Resolving the problem: Managed cluster creation fails with certificate IP SAN error Use the VMware vCenter server fully-qualified host name instead of the IP address in the credential. You can also update the VMware vCenter CA certificate to contain the IP SAN. 1.10.2. Managed cluster creation fails with unknown certificate authority 1.10.2.1. Symptom: Managed cluster creation fails with unknown certificate authority After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because the certificate is signed by an unknown authority. 1.10.2.2. Identifying the problem: Managed cluster creation fails with unknown certificate authority The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.10.2.3. Resolving the problem: Managed cluster creation fails with unknown certificate authority Ensure you entered the correct certificate from the certificate authority when creating the credential. 1.10.3. Managed cluster creation fails with expired certificate 1.10.3.1. Symptom: Managed cluster creation fails with expired certificate After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because the certificate is expired or is not yet valid.
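If you want to confirm the validity dates of the certificate that the vCenter endpoint presents, one hedged way to check is with openssl; the hostname and port are assumptions for illustration:
# Print the notBefore and notAfter dates of the certificate served by vCenter
echo | openssl s_client -connect vcenter.example.com:443 2>/dev/null | openssl x509 -noout -dates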
1.10.3.2. Identifying the problem: Managed cluster creation fails with expired certificate The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.10.3.3. Resolving the problem: Managed cluster creation fails with expired certificate Ensure that the time on your ESXi hosts is synchronized. 1.10.4. Managed cluster creation fails with insufficient privilege for tagging 1.10.4.1. Symptom: Managed cluster creation fails with insufficient privilege for tagging After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is insufficient privilege to use tagging. 1.10.4.2. Identifying the problem: Managed cluster creation fails with insufficient privilege for tagging The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.10.4.3. Resolving the problem: Managed cluster creation fails with insufficient privilege for tagging Ensure that your VMware vCenter required account privileges are correct. See the VMware vCenter documentation about the required account privileges for more information. 1.10.5. Managed cluster creation fails with invalid dnsVIP 1.10.5.1. Symptom: Managed cluster creation fails with invalid dnsVIP After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an invalid dnsVIP. 1.10.5.2. Identifying the problem: Managed cluster creation fails with invalid dnsVIP If you see the following message when trying to deploy a new managed cluster with VMware vSphere, it is because you have an older OpenShift Container Platform release image that does not support VMware Installer Provisioned Infrastructure (IPI): 1.10.5.3. Resolving the problem: Managed cluster creation fails with invalid dnsVIP Select a release image from a later version of OpenShift Container Platform that supports VMware Installer Provisioned Infrastructure. 1.10.6. Managed cluster creation fails with incorrect network type 1.10.6.1. Symptom: Managed cluster creation fails with incorrect network type After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an incorrect network type specified. 1.10.6.2. Identifying the problem: Managed cluster creation fails with incorrect network type If you see the following message when trying to deploy a new managed cluster with VMware vSphere, it is because you have an older OpenShift Container Platform image that does not support VMware Installer Provisioned Infrastructure (IPI): 1.10.6.3. Resolving the problem: Managed cluster creation fails with incorrect network type Select a valid VMware vSphere network type for the specified VMware cluster. 1.10.7. Managed cluster creation fails with an error processing disk changes 1.10.7.1. Symptom: Adding the VMware vSphere managed cluster fails due to an error processing disk changes After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an error when processing disk changes. 1.10.7.2. Identifying the problem: Adding the VMware vSphere managed cluster fails due to an error processing disk changes A message similar to the following is displayed in the logs: 1.10.7.3. Resolving the problem: Adding the VMware vSphere managed cluster fails due to an error processing disk changes Use the VMware vSphere client to give the user All privileges for Profile-driven Storage Privileges . 1.11.
Managed cluster creation fails on Red Hat OpenStack Platform with unknown authority error If you experience a problem when creating a Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform, see the following troubleshooting information to see if one of them addresses your problem. 1.11.1. Symptom: Managed cluster creation fails with unknown authority error After creating a new Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform using self-signed certificates, the cluster fails with an error message that indicates an unknown authority error. 1.11.2. Identifying the problem: Managed cluster creation fails with unknown authority error The deployment of the managed cluster fails and returns the following error message: x509: certificate signed by unknown authority 1.11.3. Resolving the problem: Managed cluster creation fails with unknown authority error Verify that the following files are configured correctly: The clouds.yaml file must specify the path to the ca.crt file in the cacert parameter. The cacert parameter is passed to the OpenShift installer when generating the ignition shim. See the following example: clouds: openstack: cacert: "/etc/pki/ca-trust/source/anchors/ca.crt" The certificatesSecretRef parameter must reference a secret with a file name matching the ca.crt file. See the following example: spec: baseDomain: dev09.red-chesterfield.com clusterName: txue-osspoke platform: openstack: cloud: openstack credentialsSecretRef: name: txue-osspoke-openstack-creds certificatesSecretRef: name: txue-osspoke-openstack-certificatebundle To create a secret with a matching file name, run the following command: The size of the ca.crt file must be less than 63 thousand bytes. 1.12. Troubleshooting OpenShift Container Platform version 3.11 cluster import failure 1.12.1. Symptom: OpenShift Container Platform version 3.11 cluster import failure After you attempt to import a Red Hat OpenShift Container Platform version 3.11 cluster, the import fails with a log message that resembles the following content: 1.12.2. Identifying the problem: OpenShift Container Platform version 3.11 cluster import failure This often occurs because the installed version of the kubectl command-line tool is 1.11, or earlier. Run the following command to see which version of the kubectl command-line tool you are running: If the returned data lists version 1.11, or earlier, complete one of the fixes in Resolving the problem: OpenShift Container Platform version 3.11 cluster import failure . 1.12.3. Resolving the problem: OpenShift Container Platform version 3.11 cluster import failure You can resolve this issue by completing one of the following procedures: Install the latest version of the kubectl command-line tool. Download the latest version of the kubectl tool from Install and Set Up kubectl in the Kubernetes documentation. Import the cluster again after upgrading your kubectl tool. Run a file that contains the import command. Start the procedure in Importing a managed cluster with the CLI . When you create the command to import your cluster, copy that command into a YAML file named import.yaml . Run the following command to import the cluster again from the file: 1.13. Troubleshooting imported clusters offline after certificate change Installing a custom apiserver certificate is supported, but one or more clusters that were imported before you changed the certificate information are in offline status.
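A quick way to look for the certificate errors that this section describes is to search the agent logs on the affected managed cluster for x509 messages; the deployment names below are typical but may differ in your environment:
# Search the registration agent and work agent logs for certificate errors
oc -n open-cluster-management-agent logs deploy/klusterlet-registration-agent | grep -i x509
oc -n open-cluster-management-agent logs deploy/klusterlet-work-agent | grep -i x509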
1.13.1. Symptom: Clusters offline after certificate change After you complete the procedure for updating a certificate secret, one or more of your clusters that were online now display offline status in the console. 1.13.2. Identifying the problem: Clusters offline after certificate change After updating the information for a custom API server certificate, clusters that were imported and running before the new certificate are now in an offline state. The errors that indicate that the certificate is the problem are found in the logs for the pods in the open-cluster-management-agent namespace of the offline managed cluster. The following examples are similar to the errors that are displayed in the logs: See the following work-agent log: See the following registration-agent log: 1.13.3. Resolving the problem: Clusters offline after certificate change If your managed cluster is the local-cluster , or your managed cluster was created by using Red Hat Advanced Cluster Management for Kubernetes, you must wait 10 minutes or longer to reimport your managed cluster. To reimport your managed cluster immediately, you can delete your managed cluster import secret on the hub cluster and reimport it by using Red Hat Advanced Cluster Management. Run the following command: Replace <cluster_name> with the name of the managed cluster that you want to import. If you want to reimport a managed cluster that was imported by using Red Hat Advanced Cluster Management, complete the following steps to import the managed cluster again: On the hub cluster, recreate the managed cluster import secret by running the following command: Replace <cluster_name> with the name of the managed cluster that you want to import. On the hub cluster, expose the managed cluster import secret to a YAML file by running the following command: Replace <cluster_name> with the name of the managed cluster that you want to import. On the managed cluster, apply the import.yaml file by running the following command: Note: The steps do not detach the managed cluster from the hub cluster. The steps update the required manifests with current settings on the managed cluster, including the new certificate information. 1.14. Namespace remains after deleting a cluster When you remove a managed cluster, the namespace is normally removed as part of the cluster removal process. In rare cases, the namespace remains with some artifacts in it. In that case, you must manually remove the namespace. 1.14.1. Symptom: Namespace remains after deleting a cluster After removing a managed cluster, the namespace is not removed. 1.14.2. Resolving the problem: Namespace remains after deleting a cluster Complete the following steps to remove the namespace manually: Run the following command to produce a list of the resources that remain in the <cluster_name> namespace: Replace cluster_name with the name of the namespace for the cluster that you attempted to remove. Delete each identified resource on the list that does not have a status of Delete by entering the following command to edit the list: Replace resource_kind with the kind of the resource. Replace resource_name with the name of the resource. Replace namespace with the name of the namespace of the resource. Locate the finalizer attribute in the metadata. Delete the non-Kubernetes finalizers by using the vi editor dd command. Save the list and exit the vi editor by entering the :wq command.
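As an alternative to editing the resource in vi, you can clear the finalizers with a patch; a hedged sketch, with the kind, name, and namespace as placeholders:
# Remove all finalizers from the resource so that the delete can complete
oc patch <resource_kind> <resource_name> -n <namespace> --type=merge -p '{"metadata":{"finalizers":[]}}'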
Delete the namespace by entering the following command: Replace cluster-name with the name of the namespace that you are trying to delete. 1.15. Auto-import-secret-exists error when importing a cluster Your cluster import fails with an error message that reads: auto import secret exists. 1.15.1. Symptom: Auto import secret exists error when importing a cluster When importing a hive cluster for management, an auto-import-secret already exists error is displayed. 1.15.2. Resolving the problem: Auto-import-secret-exists error when importing a cluster This problem occurs when you attempt to import a cluster that was previously managed by Red Hat Advanced Cluster Management. When this happens, the secrets conflict when you try to reimport the cluster. To work around this problem, complete the following steps: To manually delete the existing auto-import-secret , run the following command on the hub cluster: Replace cluster-namespace with the namespace of your cluster. Import your cluster again by using the procedure in Cluster import introduction . 1.16. Troubleshooting the cinder Container Storage Interface (CSI) driver for VolSync If you use VolSync or use a default setting in a cinder Container Storage Interface (CSI) driver, you might encounter errors for the PVC that is in use. 1.16.1. Symptom: Volumesnapshot error state You can configure a VolSync ReplicationSource or ReplicationDestination to use snapshots. Also, you can configure the storageclass and volumesnapshotclass in the ReplicationSource and ReplicationDestination . There is a parameter on the cinder volumesnapshotclass called force-create with a default value of false . This force-create parameter on the volumesnapshotclass means cinder does not allow the volumesnapshot to be taken of a PVC in use. As a result, the volumesnapshot is in an error state. 1.16.2. Resolving the problem: Setting the parameter to true Create a new volumesnapshotclass for the cinder CSI driver. Change the parameter, force-create , to true . See the following sample YAML: apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Delete driver: cinder.csi.openstack.org kind: VolumeSnapshotClass metadata: annotations: snapshot.storage.kubernetes.io/is-default-class: 'true' name: standard-csi parameters: force-create: 'true' 1.17. Troubleshooting with the must-gather command 1.17.1. Symptom: Errors with multicluster global hub You might experience various errors with multicluster global hub. You can run the must-gather command for troubleshooting issues with multicluster global hub. 1.17.2. Resolving the problem: Running the must-gather command for debugging Run the must-gather command to gather details, logs, and take steps in debugging issues. This debugging information is also useful when you open a support request. The oc adm must-gather CLI command collects the information from your cluster that is often needed for debugging issues, including: Resource definitions Service logs 1.17.2.1. Prerequisites You must meet the following prerequisites to run the must-gather command: Access to the global hub and managed hub clusters as a user with the cluster-admin role. The OpenShift Container Platform CLI (oc) installed. 1.17.2.2. Running the must-gather command Complete the following procedure to collect information by using the must-gather command: Learn about the must-gather command and install the prerequisites that you need by reading the Gathering data about your cluster in the OpenShift Container Platform documentation. Log in to your global hub cluster.
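After you log in, you can confirm that your user has the required cluster-admin access before gathering data; for example:
# Returns "yes" if the current user can act on all resources in all namespaces
oc auth can-i '*' '*' --all-namespaces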
For the typical use case, run the following command while you are logged into your global hub cluster: If you want to check your managed hub clusters, run the must-gather command on those clusters. Optional: If you want to save the results in the SOMENAME directory, you can run the following command instead of the one in the previous step: You can specify a different name for the directory. Note: The command includes the required additions to create a gzipped tarball file. The following information is collected from the must-gather command: Two peer levels: cluster-scoped-resources and namespaces resources. Sub-level for each: API group for the custom resource definitions for both cluster-scope and namespace-scoped resources. Next level for each: YAML file sorted by kind. For the global hub cluster, you can check the PostgresCluster and Kafka in the namespaces resources. For the global hub cluster, you can check the multicluster global hub related pods and logs in pods of namespaces resources. For the managed hub cluster, you can check the multicluster global hub agent pods and logs in pods of namespaces resources. 1.18. Troubleshooting by accessing the PostgreSQL database 1.18.1. Symptom: Errors with multicluster global hub You might experience various errors with multicluster global hub. You can access the provisioned PostgreSQL database to view messages that might be helpful for troubleshooting issues with multicluster global hub. 1.18.2. Resolving the problem: Accessing the PostgreSQL database Using the ClusterIP service LoadBalancer Expose the service type to LoadBalancer provisioned by default: Run the following command to get your credentials: Expose the service type to LoadBalancer provisioned by crunchy operator: Run the following command to get your credentials: 1.19. Troubleshooting by using the database dump and restore In a production environment, back up your PostgreSQL database regularly as a database management task. The backup can also be used for debugging the multicluster global hub. 1.19.1. Symptom: Errors with multicluster global hub You might experience various errors with multicluster global hub. You can use the database dump and restore for troubleshooting issues with multicluster global hub. 1.19.2. Resolving the problem: Dumping the output of the database for debugging Sometimes you need to dump the output in the multicluster global hub database to debug a problem. The PostgreSQL database provides the pg_dump command line tool to dump the contents of the database. To dump data from the localhost database server, run the following command: To dump the multicluster global hub database located on a remote server with compressed format, use the command-line options to control the connection details, as shown in the following example: 1.19.3. Resolving the problem: Restore database from dump To restore a PostgreSQL database, you can use the psql or pg_restore command line tools. The psql tool is used to restore plain text files created by pg_dump : The pg_restore tool is used to restore a PostgreSQL database from an archive created by pg_dump in one of the non-plain-text formats (custom, tar, or directory):
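The following is a minimal sketch of the dump and restore flow; the database name, host, user, and file names are assumptions used only for illustration:
# Dump the local database to a plain-text SQL file
pg_dump hoh > /tmp/hoh.sql
# Dump a remote database in the compressed tar format
pg_dump -h my-hub-host -p 5432 -U postgres -F t hoh > /tmp/hoh.tar
# Restore the plain-text dump with psql
psql -h localhost -p 5432 -U postgres -d hoh -f /tmp/hoh.sql
# Restore the tar archive with pg_restore
pg_restore -h localhost -p 5432 -U postgres -d hoh /tmp/hoh.tar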
1.20. Troubleshooting cluster status changing from offline to available The status of the managed cluster alternates between offline and available without any manual change to the environment or cluster. 1.20.1. Symptom: Cluster status changing from offline to available When the network that connects the managed cluster to the hub cluster is unstable, the status of the managed cluster that is reported by the hub cluster cycles between offline and available . The connection between the hub cluster and managed cluster is maintained through a lease that is validated at the leaseDurationSeconds interval value. If the lease is not validated within five consecutive attempts of the leaseDurationSeconds value, then the cluster is marked offline . For example, the cluster is marked offline after five minutes with a leaseDurationSeconds interval of 60 seconds . This configuration can be inadequate for reasons such as connectivity issues or latency, causing instability. 1.20.2. Resolving the problem: Cluster status changing from offline to available The default of five validation attempts cannot be changed, but you can change the leaseDurationSeconds interval. Determine the amount of time, in minutes, that you want the cluster to be marked as offline , then multiply that value by 60 to convert to seconds. Then divide by the default number of attempts, which is five. The result is your leaseDurationSeconds value. Edit your ManagedCluster specification on the hub cluster by entering the following command, but replace cluster-name with the name of your managed cluster: Increase the value of leaseDurationSeconds in your ManagedCluster specification, as seen in the following sample YAML: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster-name> spec: hubAcceptsClient: true leaseDurationSeconds: 60 Save and apply the file. 1.21. Troubleshooting cluster in console with pending or failed status If you observe Pending status or Failed status in the console for a cluster you created, follow the procedure to troubleshoot the problem. 1.21.1. Symptom: Cluster in console with pending or failed status After creating a new cluster by using the Red Hat Advanced Cluster Management for Kubernetes console, the cluster does not progress beyond the status of Pending or displays Failed status. 1.21.2. Identifying the problem: Cluster in console with pending or failed status If the cluster displays Failed status, navigate to the details page for the cluster and follow the link to the logs provided. If no logs are found or the cluster displays Pending status, continue with the following procedure to check for logs: Procedure 1 Run the following command on the hub cluster to view the names of the Kubernetes pods that were created in the namespace for the new cluster: Replace new_cluster_name with the name of the cluster that you created. If no pod that contains the string provision in the name is listed, continue with Procedure 2. If there is a pod with provision in the title, run the following command on the hub cluster to view the logs of that pod: Replace new_cluster_name_provision_pod_name with the name of the cluster that you created, followed by the pod name that contains provision . Search for errors in the logs that might explain the cause of the problem. Procedure 2 If there is not a pod with provision in its name, the problem occurred earlier in the process. Complete the following procedure to view the logs: Run the following command on the hub cluster: Replace new_cluster_name with the name of the cluster that you created. For more information about cluster installation logs, see Gathering installation logs in the Red Hat OpenShift documentation.
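If it helps, you can also inspect the Hive ClusterDeployment resource for the cluster directly; a hedged sketch, with the cluster name as a placeholder:
# The ClusterDeployment conditions often explain why provisioning has not started
oc describe clusterdeployment <new_cluster_name> -n <new_cluster_name>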
See if there is additional information about the problem in the Status.Conditions.Message and Status.Conditions.Reason entries of the resource. 1.21.3. Resolving the problem: Cluster in console with pending or failed status After you identify the errors in the logs, determine how to resolve the errors before you destroy the cluster and create it again. The following example provides a possible log error of selecting an unsupported zone, and the actions that are required to resolve it: When you created your cluster, you selected one or more zones within a region that are not supported. Complete one of the following actions when you recreate your cluster to resolve the issue: Select a different zone within the region. Omit the zone that does not provide the support, if you have other zones listed. Select a different region for your cluster. After determining the issues from the log, destroy the cluster and recreate it. See Cluster creation introduction for more information about creating a cluster. 1.22. Troubleshooting Grafana When you query some time-consuming metrics in the Grafana explorer, you might encounter a Gateway Time-out error. 1.22.1. Symptom: Grafana explorer gateway timeout If you hit the Gateway Time-out error when you query some time-consuming metrics in the Grafana explorer, it is possible that the timeout is caused by the Grafana in the open-cluster-management-observability namespace. 1.22.2. Resolving the problem: Configure the Grafana If you have this problem, complete the following steps: Verify that the default configuration of Grafana has expected timeout settings: To verify that the default timeout setting of Grafana, run the following command: The following timeout settings should be displayed: To verify the default data source query timeout for Grafana, run the following command: The following timeout settings should be displayed: If you verified the default configuration of Grafana has expected timeout settings, then you can configure the Grafana in the open-cluster-management-observability namespace by running the following command: Refresh the Grafana page and try to query the metrics again. The Gateway Time-out error is no longer displayed. 1.23. Troubleshooting local cluster not selected with placement rule The managed clusters are selected with a placement rule, but the local-cluster , which is a hub cluster that is also managed, is not selected. The placement rule user is not granted permission to get the managedcluster resources in the local-cluster namespace. 1.23.1. Symptom: Troubleshooting local cluster not selected as a managed cluster All managed clusters are selected with a placement rule, but the local-cluster is not. The placement rule user is not granted permission to get the managedcluster resources in the local-cluster namespace. 1.23.2. Resolving the problem: Troubleshooting local cluster not selected as a managed cluster Deprecated: PlacementRule To resolve this issue, you need to grant the managedcluster administrative permission in the local-cluster namespace. Complete the following steps: Confirm that the list of managed clusters does include local-cluster , and that the placement rule decisions list does not display the local-cluster . 
Run the following command and view the results: See in the sample output that local-cluster is joined, but it is not in the YAML for PlacementRule : apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-ready-clusters namespace: default spec: clusterSelector: {} status: decisions: - clusterName: cluster1 clusterNamespace: cluster1 Create a Role in your YAML file to grant the managedcluster administrative permission in the local-cluster namespace. See the following example: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: managedcluster-admin-user-zisis namespace: local-cluster rules: - apiGroups: - cluster.open-cluster-management.io resources: - managedclusters verbs: - get Create a RoleBinding resource to grant the placement rule user access to the local-cluster namespace. See the following example: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: managedcluster-admin-user-zisis namespace: local-cluster roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: managedcluster-admin-user-zisis namespace: local-cluster subjects: - kind: User name: zisis apiGroup: rbac.authorization.k8s.io 1.24. Troubleshooting application Kubernetes deployment version A managed cluster with a deprecated Kubernetes apiVersion might not be supported. See the Kubernetes issue for more details about the deprecated API version. 1.24.1. Symptom: Application deployment version If one or more of your application resources in the Subscription YAML file uses the deprecated API, you might receive an error similar to the following: Or, with the new Kubernetes API version in your YAML file named old.yaml , for instance, you might receive the following error: 1.24.2. Resolving the problem: Application deployment version Update the apiVersion in the resource. For example, if the error displays for Deployment kind in the subscription YAML file, you need to update the apiVersion from extensions/v1beta1 to apps/v1 . See the following example: apiVersion: apps/v1 kind: Deployment Verify the available versions by running the following command on the managed cluster: Check for VERSION . 1.25. Troubleshooting Klusterlet with degraded conditions The Klusterlet degraded conditions can help to diagnose the status of Klusterlet agents on the managed cluster. If a Klusterlet is in the degraded condition, the Klusterlet agents on the managed cluster might have errors that you need to troubleshoot. See the following information for Klusterlet degraded conditions that are set to True . 1.25.1. Symptom: Klusterlet is in the degraded condition After deploying a Klusterlet on the managed cluster, the KlusterletRegistrationDegraded or KlusterletWorkDegraded condition displays a status of True . 1.25.2. Identifying the problem: Klusterlet is in the degraded condition Run the following command on the managed cluster to view the Klusterlet status: Check KlusterletRegistrationDegraded or KlusterletWorkDegraded to see if the condition is set to True . Proceed to Resolving the problem for any degraded conditions that are listed. 1.25.3. Resolving the problem: Klusterlet is in the degraded condition See the following list of degraded statuses and how you can attempt to resolve those issues: If the KlusterletRegistrationDegraded condition displays a status of True and the condition reason is: BootStrapSecretMissing , you need to create a bootstrap secret in the open-cluster-management-agent namespace.
If the KlusterletRegistrationDegraded condition displays True and the condition reason is a BootstrapSecretError , or BootstrapSecretUnauthorized , then the current bootstrap secret is invalid. Delete the current bootstrap secret and recreate a valid bootstrap secret on open-cluster-management-agent namespace. If the KlusterletRegistrationDegraded and KlusterletWorkDegraded displays True and the condition reason is HubKubeConfigSecretMissing , delete the Klusterlet and recreate it. If the KlusterletRegistrationDegraded and KlusterletWorkDegraded displays True and the condition reason is: ClusterNameMissing , KubeConfigMissing , HubConfigSecretError , or HubConfigSecretUnauthorized , delete the hub cluster kubeconfig secret from open-cluster-management-agent namespace. The registration agent will bootstrap again to get a new hub cluster kubeconfig secret. If the KlusterletRegistrationDegraded displays True and the condition reason is GetRegistrationDeploymentFailed , or UnavailableRegistrationPod , you can check the condition message to get the problem details and attempt to resolve. If the KlusterletWorkDegraded displays True and the condition reason is GetWorkDeploymentFailed ,or UnavailableWorkPod , you can check the condition message to get the problem details and attempt to resolve. 1.26. Troubleshooting Object storage channel secret If you change the SecretAccessKey , the subscription of an Object storage channel cannot pick up the updated secret automatically and you receive an error. 1.26.1. Symptom: Object storage channel secret The subscription of an Object storage channel cannot pick up the updated secret automatically. This prevents the subscription operator from reconciliation and deploys resources from Object storage to the managed cluster. 1.26.2. Resolving the problem: Object storage channel secret You need to manually input the credentials to create a secret, then refer to the secret within a channel. Annotate the subscription CR in order to generate a reconcile single to subscription operator. See the following data specification: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: deva namespace: ch-obj labels: name: obj-sub spec: type: ObjectBucket pathname: http://ec2-100-26-232-156.compute-1.amazonaws.com:9000/deva sourceNamespaces: - default secretRef: name: dev --- apiVersion: v1 kind: Secret metadata: name: dev namespace: ch-obj labels: name: obj-sub data: AccessKeyID: YWRtaW4= SecretAccessKey: cGFzc3dvcmRhZG1pbg== Run oc annotate to test: After you run the command, you can go to the Application console to verify that the resource is deployed to the managed cluster. Or you can log in to the managed cluster to see if the application resource is created at the given namespace. 1.27. Troubleshooting observability After you install the observability component, the component might be stuck and an Installing status is displayed. 1.27.1. Symptom: MultiClusterObservability resource status stuck If the observability status is stuck in an Installing status after you install and create the Observability custom resource definition (CRD), it is possible that there is no value defined for the spec:storageConfig:storageClass parameter. Alternatively, the observability component automatically finds the default storageClass , but if there is no value for the storage, the component remains stuck with the Installing status. 1.27.2. 
Resolving the problem: MultiClusterObservability resource status stuck If you have this problem, complete the following steps: Verify that the observability components are installed: To verify that the multicluster-observability-operator , run the following command: To verify that the appropriate CRDs are present, run the following command: The following CRDs must be displayed before you enable the component: If you create your own storageClass for a Bare Metal cluster, see Persistent storage using NFS . To ensure that the observability component can find the default storageClass, update the storageClass parameter in the multicluster-observability-operator custom resource definition. Your parameter might resemble the following value: The observability component status is updated to a Ready status when the installation is complete. If the installation fails to complete, the Fail status is displayed. 1.28. Troubleshooting OpenShift monitoring service Observability service in a managed cluster needs to scrape metrics from the OpenShift Container Platform monitoring stack. The metrics-collector is not installed if the OpenShift Container Platform monitoring stack is not ready. 1.28.1. Symptom: OpenShift monitoring service is not ready The endpoint-observability-operator-x pod checks if the prometheus-k8s service is available in the openshift-monitoring namespace. If the service is not present in the openshift-monitoring namespace, then the metrics-collector is not deployed. You might receive the following error message: Failed to get prometheus resource . 1.28.2. Resolving the problem: OpenShift monitoring service is not ready If you have this problem, complete the following steps: Log in to your OpenShift Container Platform cluster. Access the openshift-monitoring namespace to verify that the prometheus-k8s service is available. Restart endpoint-observability-operator-x pod in the open-cluster-management-addon-observability namespace of the managed cluster. 1.29. Troubleshooting metrics-collector When the observability-client-ca-certificate secret is not refreshed in the managed cluster, you might receive an internal server error. 1.29.1. Symptom: metrics-collector cannot verify observability-client-ca-certificate There might be a managed cluster, where the metrics are unavailable. If this is the case, you might receive the following error from the metrics-collector deployment: 1.29.2. Resolving the problem: metrics-collector cannot verify observability-client-ca-certificate If you have this problem, complete the following steps: Log in to your managed cluster. Delete the secret named, observability-controller-open-cluster-management.io-observability-signer-client-cert that is in the open-cluster-management-addon-observability namespace. Run the following command: Note: The observability-controller-open-cluster-management.io-observability-signer-client-cert is automatically recreated with new certificates. The metrics-collector deployment is recreated and the observability-controller-open-cluster-management.io-observability-signer-client-cert secret is updated. 1.30. Troubleshooting PostgreSQL shared memory error If you have a large environment, you might encounter a PostgreSQL shared memory error that impacts search results and the topology view for applications. 1.30.1. 
Symptom: PostgreSQL shared memory error An error message resembling the following appears in the search-api logs: ERROR: could not resize shared memory segment "/PostgreSQL.1083654800" to 25031264 bytes: No space left on device (SQLSTATE 53100) 1.30.2. Resolving the problem: PostgreSQL shared memory error To resolve the issue, update the PostgreSQL resources found in the search-postgres ConfigMap. Complete the following steps to update the resources: Run the following command to switch to the open-cluster-management project: Increase the search-postgres pod memory. The following command increases the memory to 16Gi : Run the following command to prevent the search operator from overwriting your changes: Run the following command to update the resources in the search-postgres YAML file: See the following example for increasing resources: postgresql.conf: |- work_mem = '128MB' # Higher values allocate more memory max_parallel_workers_per_gather = '0' # Disables parallel queries shared_buffers = '1GB' # Higher values allocate more memory Make sure to save your changes before exiting. Run the following command to restart the postgres and api pod. To verify your changes, open the search-postgres YAML file and confirm that the changes you made to postgresql.conf: are present by running the following command: See Search customization and configurations for more information on adding environment variables. 1.31. Troubleshooting Thanos compactor halts You might receive an error message that the compactor is halted. This can occur when there are corrupted blocks or when there is insufficient space on the Thanos compactor persistent volume claim (PVC). 1.31.1. Symptom: Thanos compactor halts The Thanos compactor halts because there is no space left on your persistent volume claim (PVC). You receive the following message: ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg="critical error detected; halting" err="compaction: group 0@5827190780573537664: compact blocks [ /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE]: 2 errors: populate block: add series: write series data: write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device; write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device" 1.31.2. Resolving the problem: Thanos compactor halts To resolve the problem, increase the storage space of the Thanos compactor PVC. Complete the following steps: Increase the storage space for the data-observability-thanos-compact-0 PVC. See Increasing and decreasing persistent volumes and persistent volume claims for more information. Restart the observability-thanos-compact pod by deleting the pod. The new pod is automatically created and started. oc delete pod observability-thanos-compact-0 -n open-cluster-management-observability After you restart the observability-thanos-compact pod, check the acm_thanos_compact_todo_compactions metric. As the Thanos compactor works through the backlog, the metric value decreases. Confirm that the metric changes in a consistent cycle and check the disk usage. Then you can reattempt to decrease the PVC again. Note: This might take several weeks. 1.31.3. Symptom: Thanos compactor halts The Thanos compactor halts because you have corrupted blocks. 
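One hedged way to confirm that the compactor halted for this reason is to search the compactor pod logs for the halting message; the pod and namespace names below are the ones used elsewhere in this section:

oc logs observability-thanos-compact-0 -n open-cluster-management-observability | grep -i halting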
You might receive the following output where the 01HKZYEZ2DVDQXF1STVEXAMPLE block is corrupted: ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg="critical error detected; halting" err="compaction: group 0@15699422364132557315: compact blocks [/var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZQK7TD06J2XWGR5EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZYEZ2DVDQXF1STVEXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HM05APAHXBQSNC0N5EXAMPLE]: populate block: chunk iter: cannot populate chunk 8 from block 01HKZYEZ2DVDQXF1STVEXAMPLE: segment index 0 out of range" 1.31.4. Resolving the problem: Thanos compactor halts Add the thanos bucket verify command to the object storage configuration. Complete the following steps: Resolve the block error by adding the thanos bucket verify command to the object storage configuration. Set the configuration in the observability-thanos-compact pod by using the following commands: oc rsh observability-thanos-compact-0 [..] thanos tools bucket verify -r --objstore.config="USDOBJSTORE_CONFIG" --objstore-backup.config="USDOBJSTORE_CONFIG" --id=01HKZYEZ2DVDQXF1STVEXAMPLE If the command does not work, you must mark the block for deletion because it might be corrupted. Run the following commands: thanos tools bucket mark --id "01HKZYEZ2DVDQXF1STVEXAMPLE" --objstore.config="USDOBJSTORE_CONFIG" --marker=deletion-mark.json --details=DELETE If you are blocked for deletion, clean up the marked blocks by running the following command: thanos tools bucket cleanup --objstore.config="USDOBJSTORE_CONFIG" 1.32. Troubleshooting Submariner not connecting after installation If Submariner does not run correctly after you configure it, complete the following steps to diagnose the issue. 1.32.1. Symptom: Submariner not connecting after installation Your Submariner network is not communicating after installation. 1.32.2. Identifying the problem: Submariner not connecting after installation If the network connectivity is not established after deploying Submariner, begin the troubleshooting steps. Note that it might take several minutes for the processes to complete when you deploy Submariner. 1.32.3. Resolving the problem: Submariner not connecting after installation When Submariner does not run correctly after deployment, complete the following steps: Check for the following requirements to determine whether the components of Submariner deployed correctly: The submariner-addon pod is running in the open-cluster-management namespace of your hub cluster. The following pods are running in the submariner-operator namespace of each managed cluster: submariner-addon submariner-gateway submariner-routeagent submariner-operator submariner-globalnet (only if Globalnet is enabled in the ClusterSet) submariner-lighthouse-agent submariner-lighthouse-coredns submariner-networkplugin-syncer (only if the specified CNI value is OVNKubernetes ) submariner-metrics-proxy Run the subctl diagnose all command to check the status of the required pods, with the exception of the submariner-addon pods. Make sure to run the must-gather command to collect logs that can help with debugging issues. 1.33. Troubleshooting Submariner add-on status is degraded After adding the Submariner add-on to the clusters in your cluster set, the status in the Connection status , Agent status , and Gateway nodes show unexpected status for the clusters. 1.33.1. 
Symptom: Submariner add-on status is degraded After you add the Submariner add-on to the clusters in your cluster set, the following status is shown in the Gateway nodes , Agent status , and Connection status for the clusters: Gateway nodes labeled Progressing : The process to label the gateway nodes started. Nodes not labeled : The gateway nodes are not labeled, possibly because the process to label them has not completed. Nodes not labeled : The gateway nodes are not yet labeled, possibly because the process is waiting for another process to finish. Nodes labeled: The gateway nodes have been labeled. Agent status Progressing: The installation of the Submariner agent started. Degraded: The Submariner agent is not running correctly, possibly because it is still in progress. Connection status Progressing: The process to establish a connection with the Submariner add-on started. Degraded: The connection is not ready. If you just installed the add-on, the process might still be in progress. If this status appears after the connection was already established and running, then two clusters have lost the connection to each other. When there are multiple clusters, all clusters display a Degraded status if any of the clusters is in a disconnected state. The status also shows which clusters are connected and which are disconnected. 1.33.2. Resolving the problem: Submariner add-on status is degraded The degraded status often resolves itself as the processes complete. You can see the current step of the process by clicking the status in the table. You can use that information to determine whether the process is finished and whether you need to take other troubleshooting steps. For an issue that does not resolve itself, complete the following steps to troubleshoot the problem: You can use the diagnose command with the subctl utility to run some tests on the Submariner connections when the following conditions exist: The Agent status or Connection status is in a Degraded state. The diagnose command provides detailed analysis about the issue. Everything is green in the console, but the networking connections are not working correctly. The diagnose command helps to confirm that there are no other connectivity or deployment issues outside of the console. It is considered best practice to run the diagnostics command after any deployment to identify issues. See diagnose in the Submariner documentation for more information about how to run the command. If a problem continues with the Connection status , you can start by running the diagnose command of the subctl utility tool to get a more detailed status for the connection between two Submariner clusters. The format for the command is: Replace path-to-kubeconfig-file with the path to the kubeconfig file. See diagnose in the Submariner documentation for more information about the command. Check the firewall settings. Sometimes a problem with the connection is caused by firewall permissions issues that prevent the clusters from communicating. This can cause the Connection status to show as degraded. Run the following command to check the firewall issues: Replace path-to-local-kubeconfig with the path to the kubeconfig file of one of the clusters. Replace path-to-remote-kubeconfig with the path to the kubeconfig file of the other cluster. If a problem continues with the Connection status , you can run the verify command with your subctl utility tool to test the connection between two Submariner clusters. The basic format for the command is: Replace cluster1 and cluster2 with the names of the clusters that you are testing. See verify in the Submariner documentation for more information about the command. After the troubleshooting steps resolve the issue, use the benchmark command with the subctl tool to establish a baseline to compare against when you run additional diagnostics. See benchmark in the Submariner documentation for additional information about the options for the command. 1.34. Troubleshooting restore status finishes with errors After you restore a backup, resources are restored correctly but the Red Hat Advanced Cluster Management restore resource shows a FinishedWithErrors status. 1.34.1. Symptom: Troubleshooting restore status finishes with errors Red Hat Advanced Cluster Management shows a FinishedWithErrors status and one or more of the Velero restore resources created by the Red Hat Advanced Cluster Management restore show a PartiallyFailed status. 1.34.2. Resolving the problem: Troubleshooting restore status finishes with errors If you restore from a backup that is empty, you can safely ignore the FinishedWithErrors status. Red Hat Advanced Cluster Management for Kubernetes restore shows a cumulative status for all Velero restore resources. If one status is PartiallyFailed and the others are Completed , the cumulative status you see is PartiallyFailed to notify you that there is at least one issue. To resolve the issue, check the status for all individual Velero restore resources with a PartiallyFailed status and view the logs for more details. You can get the log from the object storage directly, or download it from the OADP Operator by using the DownloadRequest custom resource. To create a DownloadRequest from the console, complete the following steps: Navigate to Operators > Installed Operators > Create DownloadRequest . Select BackupLog as your Kind and follow the console instructions to complete the DownloadRequest creation. 1.35. Troubleshooting multiline YAML parsing When you want to use the fromSecret function to add contents of a Secret resource into a Route resource, the contents are displayed incorrectly. 1.35.1. Symptom: Troubleshooting multiline YAML parsing When the managed cluster and hub cluster are the same cluster, the certificate data is redacted, so the contents are not parsed as a template JSON string. You might receive the following error messages: message: >- [spec.tls.caCertificate: Invalid value: "redacted ca certificate data": failed to parse CA certificate: data does not contain any valid RSA or ECDSA certificates, spec.tls.certificate: Invalid value: "redacted certificate data": data does not contain any valid RSA or ECDSA certificates, spec.tls.key: Invalid value: "": no key specified] 1.35.2. Resolving the problem: Troubleshooting multiline YAML parsing Configure your certificate policy to retrieve the hub cluster and managed cluster fromSecret values. Use the autoindent function to update your certificate policy with the following content: tls: certificate: | {{ print "{{hub fromSecret "open-cluster-management" "minio-cert" "tls.crt" hub}}" | base64dec | autoindent }} 1.36.
Troubleshooting ClusterCurator automatic template failure to deploy If you are using the ClusterCurator automatic template and it fails to deploy, follow the procedure to troubleshoot the problem. 1.36.1. Symptom: ClusterCurator automatic template failure to deploy You are unable to deploy managed clusters by using the ClusterCurator automatic template. The process might become stuck on the posthooks and might not create any logs. 1.36.2. Resolving the problem: ClusterCurator automatic template failure to deploy Complete the following steps to identify and resolve the problem: Check the ClusterCurator resource status in the cluster namespace for any messages or errors. In the Job resource named curator-job-* , which is in the same cluster namespace as in the previous step, check the pod log for any errors, as in the example commands that follow. Note: The job is removed after one hour due to its time to live (TTL) setting.
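A minimal sketch of these checks is shown below. The cluster name my-cluster and the generated job suffix are placeholders for illustration; substitute the actual cluster namespace and the job name that oc get jobs returns:

oc get clustercurator my-cluster -n my-cluster -o yaml
oc get jobs -n my-cluster | grep curator-job
oc logs job/curator-job-abcde -n my-cluster

Run these checks within an hour of the failure, because the curator job and its logs are removed when the TTL expires.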
"adm must-gather --image=registry.redhat.io/rhacm2/acm-must-gather-rhel9:v2.11 --dest-dir=<directory>",
"<your-directory>/cluster-scoped-resources/gather-managed.log>",
"REGISTRY=<internal.repo.address:port> IMAGE1=USDREGISTRY/rhacm2/acm-must-gather-rhel9:v<2.x> adm must-gather --image=USDIMAGE1 --dest-dir=<directory>",
"adm must-gather --image=quay.io/stolostron/backplane-must-gather:SNAPSHOTNAME /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME",
"adm must-gather --image=quay.io/stolostron/backplane-must-gather:SNAPSHOTNAME /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=NAME ; tar -cvzf NAME.tgz NAME",
"REGISTRY=registry.example.com:5000 IMAGE=USDREGISTRY/multicluster-engine/must-gather-rhel8@sha256:ff9f37eb400dc1f7d07a9b6f2da9064992934b69847d17f59e385783c071b9d8 adm must-gather --image=USDIMAGE /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=./data",
"reason: Unschedulable message: '0/6 nodes are available: 3 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.'",
"#!/bin/bash ACM_NAMESPACE=<namespace> delete mch --all -n USDACM_NAMESPACE delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io delete clusterimageset --all delete clusterrole multiclusterengines.multicluster.openshift.io-v1-admin multiclusterengines.multicluster.openshift.io-v1-crdview multiclusterengines.multicluster.openshift.io-v1-edit multiclusterengines.multicluster.openshift.io-v1-view open-cluster-management:addons:application-manager open-cluster-management:admin-aggregate open-cluster-management:cert-policy-controller-hub open-cluster-management:cluster-manager-admin-aggregate open-cluster-management:config-policy-controller-hub open-cluster-management:edit-aggregate open-cluster-management:iam-policy-controller-hub open-cluster-management:policy-framework-hub open-cluster-management:view-aggregate delete crd klusterletaddonconfigs.agent.open-cluster-management.io placementbindings.policy.open-cluster-management.io policies.policy.open-cluster-management.io userpreferences.console.open-cluster-management.io discoveredclusters.discovery.open-cluster-management.io discoveryconfigs.discovery.open-cluster-management.io delete mutatingwebhookconfiguration ocm-mutating-webhook managedclustermutators.admission.cluster.open-cluster-management.io multicluster-observability-operator delete validatingwebhookconfiguration channels.apps.open.cluster.management.webhook.validator application-webhook-validator multiclusterhub-operator-validating-webhook ocm-validating-webhook multicluster-observability-operator multiclusterengines.multicluster.openshift.io",
"Error from server: request to convert CR from an invalid group/version: cluster.open-cluster-management.io/v1beta1",
"annotate mce multiclusterengine pause=true",
"patch deployment cluster-manager -n multicluster-engine -p \\ '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"registration-operator\",\"image\":\"registry.redhat.io/multicluster-engine/registration-operator-rhel8@sha256:35999c3a1022d908b6fe30aa9b85878e666392dbbd685e9f3edcb83e3336d19f\"}]}}}}' export ORIGIN_REGISTRATION_IMAGE=USD(oc get clustermanager cluster-manager -o jsonpath='{.spec.registrationImagePullSpec}')",
"patch clustermanager cluster-manager --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/registrationImagePullSpec\", \"value\": \"registry.redhat.io/multicluster-engine/registration-rhel8@sha256:a3c22aa4326859d75986bf24322068f0aff2103cccc06e1001faaf79b9390515\"}]'",
"annotate crds managedclustersets.cluster.open-cluster-management.io operator.open-cluster-management.io/version- annotate crds managedclustersetbindings.cluster.open-cluster-management.io operator.open-cluster-management.io/version-",
"-n multicluster-engine delete pods -l app=cluster-manager wait crds managedclustersets.cluster.open-cluster-management.io --for=jsonpath=\"{.metadata.annotations['operator\\.open-cluster-management\\.io/version']}\"=\"2.3.3\" --timeout=120s wait crds managedclustersetbindings.cluster.open-cluster-management.io --for=jsonpath=\"{.metadata.annotations['operator\\.open-cluster-management\\.io/version']}\"=\"2.3.3\" --timeout=120s",
"patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' -p='[{\"op\":\"replace\", \"path\":\"/spec/resource/version\", \"value\":\"v1beta1\"}]' patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' --subresource status -p='[{\"op\":\"remove\", \"path\":\"/status/conditions\"}]' patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' -p='[{\"op\":\"replace\", \"path\":\"/spec/resource/version\", \"value\":\"v1beta1\"}]' patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' --subresource status -p='[{\"op\":\"remove\", \"path\":\"/status/conditions\"}]'",
"wait storageversionmigration managedclustersets.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s wait storageversionmigration managedclustersetbindings.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s",
"annotate mce multiclusterengine pause- patch clustermanager cluster-manager --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/registrationImagePullSpec\", \"value\": \"'USDORIGIN_REGISTRATION_IMAGE'\"}]'",
"get managedclusterset get managedclustersetbinding -A",
"-n multicluster-engine get pods -l app=managedcluster-import-controller-v2",
"-n multicluster-engine logs -l app=managedcluster-import-controller-v2 --tail=-1",
"-n <managed_cluster_name> get secrets <managed_cluster_name>-import",
"-n multicluster-engine logs -l app=managedcluster-import-controller-v2 --tail=-1 | grep importconfig-controller",
"get managedcluster <managed_cluster_name> -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}' | grep ManagedClusterImportSucceeded",
"get pod -n open-cluster-management-agent | grep klusterlet-registration-agent",
"logs <registration_agent_pod> -n open-cluster-management-agent",
"get infrastructure cluster -o yaml | grep apiServerURL",
"error log: Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition Error from server (AlreadyExists): error when creating \"STDIN\": customresourcedefinitions.apiextensions.k8s.io \"klusterlets.operator.open-cluster-management.io\" already exists The cluster cannot be imported because its Klusterlet CRD already exists. Either the cluster was already imported, or it was not detached completely during a previous detach process. Detach the existing cluster before trying the import again.\"",
"get all -n open-cluster-management-agent get all -n open-cluster-management-agent-addon",
"get klusterlet | grep klusterlet | awk '{print USD1}' | xargs oc patch klusterlet --type=merge -p '{\"metadata\":{\"finalizers\": []}}'",
"delete namespaces open-cluster-management-agent open-cluster-management-agent-addon --wait=false get crds | grep open-cluster-management.io | awk '{print USD1}' | xargs oc delete crds --wait=false get crds | grep open-cluster-management.io | awk '{print USD1}' | xargs oc patch crds --type=merge -p '{\"metadata\":{\"finalizers\": []}}'",
"time=\"2020-08-07T15:27:55Z\" level=error msg=\"Error: error setting up new vSphere SOAP client: Post https://147.1.1.1/sdk: x509: cannot validate certificate for xx.xx.xx.xx because it doesn't contain any IP SANs\" time=\"2020-08-07T15:27:55Z\" level=error",
"Error: error setting up new vSphere SOAP client: Post https://vspherehost.com/sdk: x509: certificate signed by unknown authority\"",
"x509: certificate has expired or is not yet valid",
"time=\"2020-08-07T19:41:58Z\" level=debug msg=\"vsphere_tag_category.category: Creating...\" time=\"2020-08-07T19:41:58Z\" level=error time=\"2020-08-07T19:41:58Z\" level=error msg=\"Error: could not create category: POST https://vspherehost.com/rest/com/vmware/cis/tagging/category: 403 Forbidden\" time=\"2020-08-07T19:41:58Z\" level=error time=\"2020-08-07T19:41:58Z\" level=error msg=\" on ../tmp/openshift-install-436877649/main.tf line 54, in resource \\\"vsphere_tag_category\\\" \\\"category\\\":\" time=\"2020-08-07T19:41:58Z\" level=error msg=\" 54: resource \\\"vsphere_tag_category\\\" \\\"category\\\" {\"",
"failed to fetch Master Machines: failed to load asset \\\\\\\"Install Config\\\\\\\": invalid \\\\\\\"install-config.yaml\\\\\\\" file: platform.vsphere.dnsVIP: Invalid value: \\\\\\\"\\\\\\\": \\\\\\\"\\\\\\\" is not a valid IP",
"time=\"2020-08-11T14:31:38-04:00\" level=debug msg=\"vsphereprivate_import_ova.import: Creating...\" time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=error msg=\"Error: rpc error: code = Unavailable desc = transport is closing\" time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=fatal msg=\"failed to fetch Cluster: failed to generate asset \\\"Cluster\\\": failed to create cluster: failed to apply Terraform: failed to complete the change\"",
"ERROR ERROR Error: error reconfiguring virtual machine: error processing disk changes post-clone: disk.0: ServerFaultCode: NoPermission: RESOURCE (vm-71:2000), ACTION (queryAssociatedProfile): RESOURCE (vm-71), ACTION (PolicyIDByVirtualDisk)",
"clouds: openstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt\"",
"spec: baseDomain: dev09.red-chesterfield.com clusterName: txue-osspoke platform: openstack: cloud: openstack credentialsSecretRef: name: txue-osspoke-openstack-creds certificatesSecretRef: name: txue-osspoke-openstack-certificatebundle",
"create secret generic txue-osspoke-openstack-certificatebundle --from-file=ca.crt=ca.crt.pem -n USDCLUSTERNAME",
"customresourcedefinition.apiextensions.k8s.io/klusterlets.operator.open-cluster-management.io configured clusterrole.rbac.authorization.k8s.io/klusterlet configured clusterrole.rbac.authorization.k8s.io/open-cluster-management:klusterlet-admin-aggregate-clusterrole configured clusterrolebinding.rbac.authorization.k8s.io/klusterlet configured namespace/open-cluster-management-agent configured secret/open-cluster-management-image-pull-credentials unchanged serviceaccount/klusterlet configured deployment.apps/klusterlet unchanged klusterlet.operator.open-cluster-management.io/klusterlet configured Error from server (BadRequest): error when creating \"STDIN\": Secret in version \"v1\" cannot be handled as a Secret: v1.Secret.ObjectMeta: v1.ObjectMeta.TypeMeta: Kind: Data: decode base64: illegal base64 data at input byte 1313, error found in #10 byte of ...|dhruy45=\"},\"kind\":\"|..., bigger context ...|tye56u56u568yuo7i67i67i67o556574i\"},\"kind\":\"Secret\",\"metadata\":{\"annotations\":{\"kube|",
"version",
"apply -f import.yaml",
"E0917 03:04:05.874759 1 manifestwork_controller.go:179] Reconcile work test-1-klusterlet-addon-workmgr fails with err: Failed to update work status with err Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks/test-1-klusterlet-addon-workmgr\": x509: certificate signed by unknown authority E0917 03:04:05.874887 1 base_controller.go:231] \"ManifestWorkAgent\" controller failed to sync \"test-1-klusterlet-addon-workmgr\", err: Failed to update work status with err Get \"api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks/test-1-klusterlet-addon-workmgr\": x509: certificate signed by unknown authority E0917 03:04:37.245859 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManifestWork: failed to list *v1.ManifestWork: Get \"api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks?resourceVersion=607424\": x509: certificate signed by unknown authority",
"I0917 02:27:41.525026 1 event.go:282] Event(v1.ObjectReference{Kind:\"Namespace\", Namespace:\"open-cluster-management-agent\", Name:\"open-cluster-management-agent\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'ManagedClusterAvailableConditionUpdated' update managed cluster \"test-1\" available condition to \"True\", due to \"Managed cluster is available\" E0917 02:58:26.315984 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1beta1.CertificateSigningRequest: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\"\": x509: certificate signed by unknown authority E0917 02:58:26.598343 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManagedCluster: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\": x509: certificate signed by unknown authority E0917 02:58:27.613963 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManagedCluster: failed to list *v1.ManagedCluster: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\"\": x509: certificate signed by unknown authority",
"delete secret -n <cluster_name> <cluster_name>-import",
"delete secret -n <cluster_name> <cluster_name>-import",
"get secret -n <cluster_name> <cluster_name>-import -ojsonpath='{.data.import\\.yaml}' | base64 --decode > import.yaml",
"apply -f import.yaml",
"api-resources --verbs=list --namespaced -o name | grep -E '^secrets|^serviceaccounts|^managedclusteraddons|^roles|^rolebindings|^manifestworks|^leases|^managedclusterinfo|^appliedmanifestworks'|^clusteroauths' | xargs -n 1 oc get --show-kind --ignore-not-found -n <cluster_name>",
"edit <resource_kind> <resource_name> -n <namespace>",
"delete ns <cluster-name>",
"delete secret auto-import-secret -n <cluster-namespace>",
"apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Delete driver: cinder.csi.openstack.org kind: VolumeSnapshotClass metadata: annotations: snapshot.storage.kubernetes.io/is-default-class: 'true' name: standard-csi parameters: force-create: 'true'",
"adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME",
"adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME --dest-dir=<SOMENAME> ; tar -cvzf <SOMENAME>.tgz <SOMENAME>",
"There are two ways to access the provisioned PostgreSQL database.",
"exec -it multicluster-global-hub-postgres-0 -c multicluster-global-hub-postgres -n multicluster-global-hub -- psql -U postgres -d hoh Or access the database installed by crunchy operator exec -it USD(kubectl get pods -n multicluster-global-hub -l postgres-operator.crunchydata.com/role=master -o jsonpath='{.items..metadata.name}') -c database -n multicluster-global-hub -- psql -U postgres -d hoh -c \"SELECT 1\"",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: multicluster-global-hub-postgres-lb namespace: multicluster-global-hub spec: ports: - name: postgres port: 5432 protocol: TCP targetPort: 5432 selector: name: multicluster-global-hub-postgres type: LoadBalancer EOF",
"Host get svc postgres-ha -ojsonpath='{.status.loadBalancer.ingress[0].hostname}' Password get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"password\" | base64decode}}'",
"patch postgrescluster postgres -n multicluster-global-hub -p '{\"spec\":{\"service\":{\"type\":\"LoadBalancer\"}}}' --type merge",
"Host get svc -n multicluster-global-hub postgres-ha -ojsonpath='{.status.loadBalancer.ingress[0].hostname}' Username get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"user\" | base64decode}}' Password get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"password\" | base64decode}}' Database get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"dbname\" | base64decode}}'",
"pg_dump hoh > hoh.sql",
"pg_dump -h my.host.com -p 5432 -U postgres -F t hoh -f hoh-USD(date +%d-%m-%y_%H-%M).tar",
"psql -h another.host.com -p 5432 -U postgres -d hoh < hoh.sql",
"pg_restore -h another.host.com -p 5432 -U postgres -d hoh hoh-USD(date +%d-%m-%y_%H-%M).tar",
"edit managedcluster <cluster-name>",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster-name> spec: hubAcceptsClient: true leaseDurationSeconds: 60",
"get pod -n <new_cluster_name>",
"logs <new_cluster_name_provision_pod_name> -n <new_cluster_name> -c hive",
"describe clusterdeployments -n <new_cluster_name>",
"No subnets provided for zones",
"get secret grafana-config -n open-cluster-management-observability -o jsonpath=\"{.data.grafana\\.ini}\" | base64 -d | grep dataproxy -A 4",
"[dataproxy] timeout = 300 dial_timeout = 30 keep_alive_seconds = 300",
"get secret/grafana-datasources -n open-cluster-management-observability -o jsonpath=\"{.data.datasources\\.yaml}\" | base64 -d | grep queryTimeout",
"queryTimeout: 300s",
"annotate route grafana -n open-cluster-management-observability --overwrite haproxy.router.openshift.io/timeout=300s",
"% oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true True True 56d cluster1 true True True 16h",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-ready-clusters namespace: default spec: clusterSelector: {} status: decisions: - clusterName: cluster1 clusterNamespace: cluster1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: managedcluster-admin-user-zisis namespace: local-cluster rules: - apiGroups: - cluster.open-cluster-management.io resources: - managedclusters verbs: - get",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: managedcluster-admin-user-zisis namespace: local-cluster roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: managedcluster-admin-user-zisis namespace: local-cluster subjects: - kind: User name: zisis apiGroup: rbac.authorization.k8s.io",
"failed to install release: unable to build kubernetes objects from release manifest: unable to recognize \"\": no matches for kind \"Deployment\" in version \"extensions/v1beta1\"",
"error: unable to recognize \"old.yaml\": no matches for kind \"Deployment\" in version \"deployment/v1beta1\"",
"apiVersion: apps/v1 kind: Deployment",
"explain <resource>",
"get klusterlets klusterlet -oyaml",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: deva namespace: ch-obj labels: name: obj-sub spec: type: ObjectBucket pathname: http://ec2-100-26-232-156.compute-1.amazonaws.com:9000/deva sourceNamespaces: - default secretRef: name: dev --- apiVersion: v1 kind: Secret metadata: name: dev namespace: ch-obj labels: name: obj-sub data: AccessKeyID: YWRtaW4= SecretAccessKey: cGFzc3dvcmRhZG1pbg==",
"annotate appsub -n <subscription-namespace> <subscription-name> test=true",
"get pods -n open-cluster-management|grep observability",
"get crd|grep observ",
"multiclusterobservabilities.observability.open-cluster-management.io observabilityaddons.observability.open-cluster-management.io observatoria.core.observatorium.io",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"error: response status code is 500 Internal Server Error, response body is x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"observability-client-ca-certificate\")",
"delete secret observability-controller-open-cluster-management.io-observability-signer-client-cert -n open-cluster-management-addon-observability",
"project open-cluster-management",
"patch search -n open-cluster-management search-v2-operator --type json -p '[{\"op\": \"add\", \"path\": \"/spec/deployments/database/resources\", \"value\": {\"limits\": {\"memory\": \"16Gi\"}, \"requests\": {\"memory\": \"32Mi\", \"cpu\": \"25m\"}}}]'",
"annotate search search-v2-operator search-pause=true",
"edit cm search-postgres -n open-cluster-management",
"postgresql.conf: |- work_mem = '128MB' # Higher values allocate more memory max_parallel_workers_per_gather = '0' # Disables parallel queries shared_buffers = '1GB' # Higher values allocate more memory",
"delete pod search-postgres-xyz search-api-xzy",
"get cm search-postgres -n open-cluster-management -o yaml",
"ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg=\"critical error detected; halting\" err=\"compaction: group 0@5827190780573537664: compact blocks [ /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE]: 2 errors: populate block: add series: write series data: write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device; write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device\"",
"delete pod observability-thanos-compact-0 -n open-cluster-management-observability",
"ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg=\"critical error detected; halting\" err=\"compaction: group 0@15699422364132557315: compact blocks [/var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZQK7TD06J2XWGR5EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZYEZ2DVDQXF1STVEXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HM05APAHXBQSNC0N5EXAMPLE]: populate block: chunk iter: cannot populate chunk 8 from block 01HKZYEZ2DVDQXF1STVEXAMPLE: segment index 0 out of range\"",
"rsh observability-thanos-compact-0 [..] thanos tools bucket verify -r --objstore.config=\"USDOBJSTORE_CONFIG\" --objstore-backup.config=\"USDOBJSTORE_CONFIG\" --id=01HKZYEZ2DVDQXF1STVEXAMPLE",
"thanos tools bucket mark --id \"01HKZYEZ2DVDQXF1STVEXAMPLE\" --objstore.config=\"USDOBJSTORE_CONFIG\" --marker=deletion-mark.json --details=DELETE",
"thanos tools bucket cleanup --objstore.config=\"USDOBJSTORE_CONFIG\"",
"subctl diagnose all --kubeconfig <path-to-kubeconfig-file>",
"subctl diagnose firewall inter-cluster <path-to-local-kubeconfig> <path-to-remote-cluster-kubeconfig>",
"subctl verify --kubecontexts <cluster1>,<cluster2> [flags]",
"message: >- [spec.tls.caCertificate: Invalid value: \"redacted ca certificate data\": failed to parse CA certificate: data does not contain any valid RSA or ECDSA certificates, spec.tls.certificate: Invalid value: \"redacted certificate data\": data does not contain any valid RSA or ECDSA certificates, spec.tls.key: Invalid value: \"\": no key specified]",
"tls: certificate: | {{ print \"{{hub fromSecret \"open-cluster-management\" \"minio-cert\" \"tls.crt\" hub}}\" | base64dec | autoindent }}"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/troubleshooting/troubleshooting |
Chapter 3. Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator | Chapter 3. Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator 3.1. Prerequisites Before you install the Operator and use it to create a broker deployment, you should consult the Operator deployment notes in Section 2.5, "Operator deployment notes" . 3.2. Installing the Operator using the CLI Note Each Operator release requires that you download the latest AMQ Broker 7.9.4 Operator Installation and Example Files as described below. The procedures in this section show how to use the OpenShift command-line interface (CLI) to install and deploy the latest version of the Operator for AMQ Broker 7.9 in a given OpenShift project. In subsequent procedures, you use this Operator to deploy some broker instances. For an alternative method of installing the AMQ Broker Operator that uses the OperatorHub graphical interface, see Section 3.3, "Installing the Operator using OperatorHub" . To learn about upgrading existing Operator-based broker deployments, see Chapter 6, Upgrading an Operator-based broker deployment . 3.2.1. Getting the Operator code This procedure shows how to access and prepare the code you need to install the latest version of the Operator for AMQ Broker 7.9. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.9.4 releases . Ensure that the value of the Version drop-down list is set to 7.9.4 and the Releases tab is selected. to AMQ Broker 7.9.4 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.9.4-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator . USD mkdir ~/broker/operator USD mv amq-broker-operator-7.9.4-ocp-install-examples.zip ~/broker/operator In your chosen installation directory, extract the contents of the archive. For example: USD cd ~/broker/operator USD unzip amq-broker-operator-7.9.4-ocp-install-examples.zip Switch to the directory that was created when you extracted the archive. For example: USD cd amq-broker-operator-7.9.4-ocp-install-examples Log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Specify the project in which you want to install the Operator. You can create a new project or switch to an existing one. Create a new project: USD oc new-project <project_name> Or, switch to an existing project: USD oc project <project_name> Specify a service account to use with the Operator. In the deploy directory of the Operator archive that you extracted, open the service_account.yaml file. Ensure that the kind element is set to ServiceAccount . In the metadata section, assign a custom name to the service account, or use the default name. The default name is amq-broker-operator . Create the service account in your project. USD oc create -f deploy/service_account.yaml Specify a role name for the Operator. Open the role.yaml file. This file specifies the resources that the Operator can use and modify. Ensure that the kind element is set to Role . In the metadata section, assign a custom name to the role, or use the default name. The default name is amq-broker-operator . Create the role in your project. USD oc create -f deploy/role.yaml Specify a role binding for the Operator. 
The role binding binds the previously-created service account to the Operator role, based on the names you specified. Open the role_binding.yaml file. Ensure that the name values for ServiceAccount and Role match those specified in the service_account.yaml and role.yaml files. For example: metadata: name: amq-broker-operator subjects: kind: ServiceAccount name: amq-broker-operator roleRef: kind: Role name: amq-broker-operator Create the role binding in your project. USD oc create -f deploy/role_binding.yaml In the procedure that follows, you deploy the Operator in your project. 3.2.2. Deploying the Operator using the CLI The procedure in this section shows how to use the OpenShift command-line interface (CLI) to deploy the latest version of the Operator for AMQ Broker 7.9 in your OpenShift project. Prerequisites You must have already prepared your OpenShift project for the Operator deployment. See Section 3.2.1, "Getting the Operator code" . Starting in AMQ Broker 7.3, you use a new version of the Red Hat Ecosystem Catalog to access container images. This new version of the registry requires you to become an authenticated user before you can access images. Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication . If you intend to deploy brokers with persistent storage and do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that they are available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage (that is, by setting persistenceEnabled=true in your Custom Resource), you need to have two PVs available. By default, each broker instance requires storage of 2 GiB. If you specify persistenceEnabled=false in your Custom Resource, the deployed brokers use ephemeral storage. Ephemeral storage means that every time you restart the broker Pods, any existing data is lost. For more information about provisioning persistent storage, see: Understanding persistent storage (OpenShift Container Platform 4.5) Procedure In the OpenShift command-line interface (CLI), log in to OpenShift as a cluster administrator. For example: USD oc login -u system:admin Switch to the project that you previously prepared for the Operator deployment. For example: USD oc project <project_name> Switch to the directory that was created when you previously extracted the Operator installation archive. For example: USD cd ~/broker/operator/amq-broker-operator-7.9.4-ocp-install-examples Deploy the CRDs that are included with the Operator. You must install the CRDs in your OpenShift cluster before deploying and starting the Operator. Deploy the main broker CRD. USD oc create -f deploy/crds/broker_activemqartemis_crd.yaml Deploy the address CRD. USD oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml Deploy the scaledown controller CRD. USD oc create -f deploy/crds/broker_activemqartemisscaledown_crd.yaml Link the pull secret associated with the account used for authentication in the Red Hat Ecosystem Catalog with the default , deployer , and builder service accounts for your OpenShift project. USD oc secrets link --for=pull default <secret_name> USD oc secrets link --for=pull deployer <secret_name> USD oc secrets link --for=pull builder <secret_name> In the deploy directory of the Operator archive that you downloaded and extracted, open the operator.yaml file.
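Before you edit the file, you can optionally confirm from the command line that the three CRDs registered and locate the image setting that the next step refers to. These commands are a hedged convenience rather than part of the documented procedure:

USD oc get crd | grep activemqartemis
USD grep -n "image:" deploy/operator.yaml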
Ensure that the value of the spec.containers.image property corresponds to version 7.9.4-opr-3 of the Operator, as shown below. spec: template: spec: containers: #image: registry.redhat.io/amq7/amq-broker-rhel8-operator:7.9 image: registry.redhat.io/amq7/amq-broker-rhel8-operator@sha256:4045170b583f76cdfbe123fd794ed4d175de0c2a76bdb7bf8762b3e35f0eb5b8 Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Determine which namespaces are watched by the Operator by optionally editing the WATCH_NAMESPACE section of the operator.yaml file. To deploy the Operator to watch the active namespace, do not edit the section: - name: WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace To deploy the Operator to watch all namespaces: - name: WATCH_NAMESPACE value: '*' To deploy the Operator to watch multiple namespaces, for example namespace1 and namespace2 : - name: WATCH_NAMESPACE value: 'namespace1,namespace2' Note If you previously deployed brokers using an earlier version of the Operator, and you want deploy the Operator to watch many namespaces, see Before you upgrade . Deploy the Operator. USD oc create -f deploy/operator.yaml In your OpenShift project, the Operator starts in a new Pod. In the OpenShift Container Platform web console, the information on the Events tab of the Operator Pod confirms that OpenShift has deployed the Operator image that you specified, has assigned a new container to a node in your OpenShift cluster, and has started the new container. In addition, if you click the Logs tab within the Pod, the output should include lines resembling the following: The preceding output confirms that the newly-deployed Operator is communicating with Kubernetes, that the controllers for the broker and addressing are running, and that these controllers have started some workers. Note It is recommended that you deploy only a single instance of the AMQ Broker Operator in a given OpenShift project. Setting the spec.replicas property of your Operator deployment to a value greater than 1 , or deploying the Operator more than once in the same project is not recommended. Additional resources For an alternative method of installing the AMQ Broker Operator that uses the OperatorHub graphical interface, see Section 3.3, "Installing the Operator using OperatorHub" . 3.3. Installing the Operator using OperatorHub 3.3.1. Overview of the Operator Lifecycle Manager In OpenShift Container Platform 4.5 and later, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the lifecycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes-native applications (Operators) in an effective, automated, and scalable way. The OLM runs by default in OpenShift Container Platform 4.5 and later, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. OperatorHub is the graphical interface that OpenShift cluster administrators use to discover, install, and upgrade Operators using the OLM. 
With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments. When you have deployed the Operator, you can use Custom Resource (CR) instances to create broker deployments such as standalone and clustered brokers. 3.3.2. Deploying the Operator from OperatorHub This procedure shows how to use OperatorHub to deploy the latest version of the Operator for AMQ Broker to a specified OpenShift project. Important Deploying the Operator using OperatorHub requires cluster administrator privileges. Prerequisites The Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator must be available in OperatorHub. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. In left navigation menu, click Operators OperatorHub . On the Project drop-down menu at the top of the OperatorHub page, select the project in which you want to deploy the Operator. On the OperatorHub page, use the Filter by keyword... box to find the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. Note In OperatorHub, you might find more than one Operator than includes AMQ Broker in its name. Ensure that you click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. When you click this Operator, review the information pane that opens. For AMQ Broker 7.9, the latest minor version tag of this Operator is 7.9.4-opr-3 . Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. On the dialog box that appears, click Install . On the Install Operator page: Under Update Channel , specify the channel used to track and receive updates for the Operator by selecting 7.x from the following radio buttons: 7.x - This channel will update to 7.10 when available. 7.8.x - This is the Long Term Support (LTS) channel. Under Installation Mode , choose which namespaces the Operator watches: A specific namespace on the cluster - The Operator is installed in that namespace and only monitors that namespace for CR changes. All namespaces - The Operator monitors all namespaces for CR changes. Note If you previously deployed brokers using an earlier version of the Operator, and you want deploy the Operator to watch many namespaces, see Before you upgrade . From the Installed Namespace drop-down menu, select the project in which you want to install the Operator. Under Approval Strategy , ensure that the radio button entitled Automatic is selected. This option specifies that updates to the Operator do not require manual approval for installation to take place. Click Install . When the Operator installation is complete, the Installed Operators page opens. You should see that the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator is installed in the project namespace that you specified. Additional resources To learn how to create a broker deployment in a project that has the Operator for AMQ Broker installed, see Section 3.4.1, "Deploying a basic broker instance" . 3.4. Creating Operator-based broker deployments 3.4.1. Deploying a basic broker instance The following procedure shows how to use a Custom Resource (CR) instance to create a basic broker deployment. 
Note While you can create more than one broker deployment in a given OpenShift project by deploying multiple Custom Resource (CR) instances, typically, you create a single broker deployment in a project, and then deploy multiple CR instances for addresses. Red Hat recommends you create broker deployments in separate projects. In AMQ Broker 7.9, if you want to configure the following items, you must add the appropriate configuration to the main broker CR instance before deploying the CR for the first time. The size of the Persistent Volume Claim (PVC) required by each broker in a deployment for persistent storage Limits and requests for memory and CPU for each broker in a deployment Prerequisites You must have already installed the AMQ Broker Operator. To use the OpenShift command-line interface (CLI) to install the AMQ Broker Operator, see Section 3.2, "Installing the Operator using the CLI" . To use the OperatorHub graphical interface to install the AMQ Broker Operator, see Section 3.3, "Installing the Operator using OperatorHub" . You should understand how the Operator chooses a broker container image to use for your broker deployment. For more information, see Section 2.4, "How the Operator chooses container images" . Starting in AMQ Broker 7.3, you use a new version of the Red Hat Ecosystem Catalog to access container images. This new version of the registry requires you to become an authenticated user before you can access images. Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication . Procedure When you have successfully installed the Operator, the Operator is running and listening for changes related to your CRs. This example procedure shows how to use a CR instance to deploy a basic broker in your project. Start configuring a Custom Resource (CR) instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file. apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . 
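As the note at the start of this section explains, the PVC size and the CPU and memory limits and requests for each broker must be in the CR before you deploy it for the first time. The following sketch shows where such settings might be added to the sample CR; the storage and resources property names are taken from the ActiveMQArtemis CRD and should be verified against the Custom Resource configuration reference in Section 8.1 for your Operator version, and the sizes shown are placeholder values only.

```yaml
apiVersion: broker.amq.io/v2alpha4
kind: ActiveMQArtemis
metadata:
  name: ex-aao
  application: ex-aao-app
spec:
  deploymentPlan:
    size: 1
    image: placeholder
    persistenceEnabled: true
    # Placeholder PVC size for each broker; set this before the first deployment.
    storage:
      size: 4Gi
    # Placeholder CPU and memory requests and limits for each broker Pod.
    resources:
      requests:
        cpu: "500m"
        memory: "1024Mi"
      limits:
        cpu: "1000m"
        memory: "2048Mi"
```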
Note The broker_activemqartemis_cr.yaml sample CR uses a naming convention of ex-aao . This naming convention denotes that the CR is an example resource for the AMQ Broker Operator . AMQ Broker is based on the ActiveMQ Artemis project. When you deploy this sample CR, the resulting StatefulSet uses the name ex-aao-ss . Furthermore, broker Pods in the deployment are directly based on the StatefulSet name, for example, ex-aao-ss-0 , ex-aao-ss-1 , and so on. The application name in the CR appears in the deployment as a label on the StatefulSet. You might use this label in a Pod selector, for example. The size property specifies the number of brokers to deploy. A value of 2 or greater specifies a clustered broker deployment. However, to deploy a single broker instance, ensure that the value is set to 1 . Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . In the OpenShift Container Platform web console, click Workloads StatefulSets . You see a new StatefulSet called ex-aao-ss . Click the ex-aao-ss StatefulSet. You see that there is one Pod, corresponding to the single broker that you defined in the CR. Within the StatefulSet, click the Pods tab. Click the ex-aao-ss Pod. On the Events tab of the running Pod, you see that the broker container has started. The Logs tab shows that the broker itself is running. To test that the broker is running normally, access a shell on the broker Pod to send some test messages. Using the OpenShift Container Platform web console: Click Workloads Pods . Click the ex-aao-ss Pod. Click the Terminal tab. Using the OpenShift command-line interface: Get the Pod names and internal IP addresses for your project. Access the shell for the broker Pod. From the shell, use the artemis command to send some test messages. Specify the internal IP address of the broker Pod in the URL. For example: The preceding command automatically creates a queue called demoQueue on the broker and sends a default quantity of 1000 messages to the queue. You should see output that resembles the following: Additional resources For a complete configuration reference for the main broker Custom Resource (CR), see Section 8.1, "Custom Resource configuration reference" . To learn how to connect a running broker to AMQ Management Console, see Chapter 5, Connecting to AMQ Management Console for an Operator-based broker deployment . 3.4.2. Deploying clustered brokers If there are two or more broker Pods running in your project, the Pods automatically form a broker cluster. A clustered configuration enables brokers to connect to each other and redistribute messages as needed, for load balancing. The following procedure shows you how to deploy clustered brokers. By default, the brokers in this deployment use on demand load balancing, meaning that brokers will forward messages only to other brokers that have matching consumers. Prerequisites A basic broker instance is already deployed. See Section 3.4.1, "Deploying a basic broker instance" . Procedure Open the CR file that you used for your basic broker deployment. For a clustered deployment, ensure that the value of deploymentPlan.size is 2 or greater. For example: apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 4 image: placeholder ... 
Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Save the modified CR file. Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you previously created your basic broker deployment. Switch to the project in which you previously created your basic broker deployment. At the command line, apply the change: USD oc apply -f <path/to/custom_resource_instance> .yaml In the OpenShift Container Platform web console, additional broker Pods start in your project, according to the number specified in your CR. By default, the brokers running in the project are clustered. Open the Logs tab of each Pod. The logs show that OpenShift has established a cluster connection bridge on each broker. Specifically, the log output includes a line like the following: 3.4.3. Applying Custom Resource changes to running broker deployments The following are some important things to note about applying Custom Resource (CR) changes to running broker deployments: You cannot dynamically update the persistenceEnabled attribute in your CR. To change this attribute, scale your cluster down to zero brokers. Delete the existing CR. Then, recreate and redeploy the CR with your changes, also specifying a deployment size. The value of the deploymentPlan.size attribute in your CR overrides any change you make to the size of your broker deployment via the oc scale command. For example, suppose you use oc scale to change the size of a deployment from three brokers to two, but the value of deploymentPlan.size in your CR is still 3 . In this case, OpenShift initially scales the deployment down to two brokers. However, when the scaledown operation is complete, the Operator restores the deployment to three brokers, as specified in the CR. As described in Section 3.2.2, "Deploying the Operator using the CLI" , if you create a broker deployment with persistent storage (that is, by setting persistenceEnabled=true in your CR), you might need to provision Persistent Volumes (PVs) for the AMQ Broker Operator to claim for your broker Pods. If you scale down the size of your broker deployment, the Operator releases any PVs that it previously claimed for the broker Pods that are now shut down. However, if you remove your broker deployment by deleting your CR, AMQ Broker Operator does not release Persistent Volume Claims (PVCs) for any broker Pods that are still in the deployment when you remove it. In addition, these unreleased PVs are unavailable to any new deployment. In this case, you need to manually release the volumes. For more information, see Release a persistent volume in the OpenShift documentation. In AMQ Broker 7.9, if you want to configure the following items, you must add the appropriate configuration to the main CR instance before deploying the CR for the first time. The size of the Persistent Volume Claim (PVC) required by each broker in a deployment for persistent storage Limits and requests for memory and CPU for each broker in a deployment During an active scaling event, any further changes that you apply are queued by the Operator and executed only when scaling is complete. For example, suppose that you scale the size of your deployment down from four brokers to one.
Then, while scaledown is taking place, you also change the values of the broker administrator user name and password. In this case, the Operator queues the user name and password changes until the deployment is running with one active broker. All CR changes - apart from changing the size of your deployment, or changing the value of the expose attribute for acceptors, connectors, or the console - cause existing brokers to be restarted. If you have multiple brokers in your deployment, only one broker restarts at a time. | [
"mkdir ~/broker/operator mv amq-broker-operator-7.9.4-ocp-install-examples.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.9.4-ocp-install-examples.zip",
"cd amq-broker-operator-7.9.4-ocp-install-examples",
"oc login -u system:admin",
"oc new-project <project_name>",
"oc project <project_name>",
"oc create -f deploy/service_account.yaml",
"oc create -f deploy/role.yaml",
"metadata: name: amq-broker-operator subjects: kind: ServiceAccount name: amq-broker-operator roleRef: kind: Role name: amq-broker-operator",
"oc create -f deploy/role_binding.yaml",
"oc login -u system:admin",
"oc project <project_name>",
"cd ~/broker/operator/amq-broker-operator-7.9.4-ocp-install-examples",
"oc create -f deploy/crds/broker_activemqartemis_crd.yaml",
"oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml",
"oc create -f deploy/crds/broker_activemqartemisscaledown_crd.yaml",
"oc secrets link --for=pull default <secret_name> oc secrets link --for=pull deployer <secret_name> oc secrets link --for=pull builder <secret_name>",
"spec: template: spec: containers: #image: registry.redhat.io/amq7/amq-broker-rhel8-operator:7.9 image: registry.redhat.io/amq7/amq-broker-rhel8-operator@sha256:4045170b583f76cdfbe123fd794ed4d175de0c2a76bdb7bf8762b3e35f0eb5b8",
"- name: WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"- name: WATCH_NAMESPACE value: '*'",
"- name: WATCH_NAMESPACE value: 'namespace1,namespace2'",
"oc create -f deploy/operator.yaml",
"{\"level\":\"info\",\"ts\":1553619035.8302743,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting Controller\",\"controller\":\"activemqartemisaddress-controller\"} {\"level\":\"info\",\"ts\":1553619035.830541,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting Controller\",\"controller\":\"activemqartemis-controller\"} {\"level\":\"info\",\"ts\":1553619035.9306898,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting workers\",\"controller\":\"activemqartemisaddress-controller\",\"worker count\":1} {\"level\":\"info\",\"ts\":1553619035.9311671,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting workers\",\"controller\":\"activemqartemis-controller\",\"worker count\":1}",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"oc project <project_name>",
"oc create -f <path/to/custom_resource_instance> .yaml",
"oc get pods -o wide NAME STATUS IP amq-broker-operator-54d996c Running 10.129.2.14 ex-aao-ss-0 Running 10.129.2.15",
"oc rsh ex-aao-ss-0",
"sh-4.2USD ./amq-broker/bin/artemis producer --url tcp://10.129.2.15:61616 --destination queue://demoQueue",
"Connection brokerURL = tcp://10.129.2.15:61616 Producer ActiveMQQueue[demoQueue], thread=0 Started to calculate elapsed time Producer ActiveMQQueue[demoQueue], thread=0 Produced: 1000 messages Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in second : 3 s Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in milli second : 3492 milli seconds",
"apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 4 image: placeholder",
"oc login -u <user> -p <password> --server= <host:port>",
"oc project <project_name>",
"oc apply -f <path/to/custom_resource_instance> .yaml",
"targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6f13fb88"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_amq_broker_on_openshift/deploying-broker-on-ocp-using-operator_broker-ocp |
Chapter 10. Message Transformation | Chapter 10. Message Transformation Abstract The message transformation patterns describe how to modify the contents of messages for various purposes. 10.1. Content Enricher Overview The content enricher pattern describes a scenario where the message destination requires more data than is present in the original message. In this case, you would use a message translator, an arbitrary processor in the routing logic, or a content enricher method to pull in the extra data from an external resource. Figure 10.1. Content Enricher Pattern Alternatives for enriching content Apache Camel supports several ways to enrich content: Message translator with arbitrary processor in the routing logic The enrich() method obtains additional data from the resource by sending a copy of the current exchange to a producer endpoint and then using the data in the resulting reply. The exchange created by the enricher is always an InOut exchange. The pollEnrich() method obtains additional data by polling a consumer endpoint for data. Effectively, the consumer endpoint from the main route and the consumer endpoint in the pollEnrich() operation are coupled. That is, an incoming message on the initial consumer in the route triggers the pollEnrich() method on the consumer to be polled. Note The enrich() and pollEnrich() methods support dynamic endpoint URIs. You can compute URIs by specifying an expression that enables you to obtain values from the current exchange. For example, you can poll a file with a name that is computed from the data exchange. This behavior was introduced in Camel 2.16. This change breaks the XML DSL and enables you to migrate easily. The Java DSL stays backwards compatible. Using message translators and processors to enrich content Camel provides fluent builders for creating routing and mediation rules in a type-safe, IDE-friendly way that provides smart completion and is refactoring safe. When you are testing distributed systems it is a very common requirement to have to stub out certain external systems so that you can test other parts of the system until a specific system is available or written. One way to do this is to use some kind of template system to generate responses to requests by generating a dynamic message that has a mostly-static body. Another way to use templates is to consume a message from one destination, transform it with something like Velocity or XQuery , and then send it to another destination. The following example shows this for an InOnly (one way) message: Suppose you want to use InOut (request-reply) messaging to process requests on the My.Queue queue on ActiveMQ. You want a template-generated response that goes to a JMSReplyTo destination. The following example shows how to do this: The following simple example shows how to use the DSL to transform the message body: The following example uses explicit Java code to add a processor: The following example uses bean integration to enable the use of any bean to act as the transformer: The following example shows a Spring XML implementation: Using the enrich() method to enrich content The content enricher ( enrich ) retrieves additional data from a resource endpoint in order to enrich an incoming message (contained in the original exchange ). An aggregation strategy combines the original exchange and the resource exchange.
The first parameter of the AggregationStrategy.aggregate(Exchange, Exchange) method corresponds to the original exchange, and the second parameter corresponds to the resource exchange. The results from the resource endpoint are stored in the resource exchange's Out message. Here is a sample template for implementing your own aggregation strategy class: Using this template, the original exchange can have any exchange pattern. The resource exchange created by the enricher is always an InOut exchange. Spring XML enrich example The preceding example can also be implemented in Spring XML: Default aggregation strategy when enriching content The aggregation strategy is optional. If you do not provide it, Apache Camel will use the body obtained from the resource by default. For example: In the preceding route, the message sent to the direct:result endpoint contains the output from the direct:resource endpoint, because this example does not use any custom aggregation. In XML DSL, just omit the strategyRef attribute, as follows: Options supported by the enrich() method The enrich DSL command supports the following options: Name Default Value Description expression None Starting with Camel 2.16, this option is required. Specify an expression for configuring the URI of the external service to enrich from. You can use the Simple expression language, the Constant expression language, or any other language that can dynamically compute the URI from values in the current exchange. uri These options have been removed. Specify the expression option instead. In Camel 2.15 and earlier, specification of the uri option or the ref option was required. Each option specified the endpoint URI for the external service to enrich from. ref Refers to the endpoint for the external service to enrich from. You must use either uri or ref . strategyRef Refers to an AggregationStrategy to be used to merge the reply from the external service into a single outgoing message. By default, Camel uses the reply from the external service as the outgoing message. You can use a POJO as the AggregationStrategy . For additional information, see the documentation for the Aggregate pattern. strategyMethodName When using POJOs as the AggregationStrategy , specify this option to explicitly declare the name of the aggregation method. For details, see the Aggregate pattern. strategyMethodAllowNull false The default behavior is that the aggregate method is not used if there is no data to enrich. If this option is true then null values are used as the oldExchange when there is no data to enrich and you are using POJOs as the AggregationStrategy . For more information, see the Aggregate pattern. aggregateOnException false The default behavior is that the aggregate method is not used if there was an exception thrown while trying to retrieve the data to enrich from the resource. Setting this option to true allows end users to control what to do if there was an exception in the aggregate method. For example, it is possible to suppress the exception or set a custom message body. shareUnitOfWork false Starting with Camel 2.16, the default behavior is that the enrich operation does not share the unit of work between the parent exchange and the resource exchange. This means that the resource exchange has its own individual unit of work. For more information, see the documentation for the Splitter pattern.
cacheSize 1000 Starting with Camel 2.16, specify this option to configure the cache size for the ProducerCache , which caches producers for reuse in the enrich operation. To turn off this cache, set the cacheSize option to -1 . ignoreInvalidEndpoint false Starting with Camel 2.16, this option indicates whether or not to ignore an endpoint URI that cannot be resolved. The default behavior is that Camel throws an exception that identifies the invalid endpoint URI. Specifying an aggregation strategy when using the enrich() method The enrich() method retrieves additional data from a resource endpoint to enrich an incoming message, which is contained in the original exchange. You can use an aggregation strategy to combine the original exchange and the resource exchange. The first parameter of the AggregationStrategy.aggregate(Exchange, Exchange) method corresponds to the original exchange. The second parameter corresponds to the resource exchange. The results from the resource endpoint are stored in the resource exchange's Out message. For example: The following code is a template for implementing an aggregation strategy. In an implementation that uses this template, the original exchange can be any message exchange pattern. The resource exchange created by the enricher is always an InOut message exchange pattern. The following example shows the use of the Spring XML DSL to implement an aggregation strategy: Using dynamic URIs with enrich() Starting with Camel 2.16, the enrich() and pollEnrich() methods support the use of dynamic URIs that are computed based on information from the current exchange. For example, to enrich from an HTTP endpoint where the header with the orderId key is used as part of the content path of the HTTP URL, you can do something like this: Following is the same example in XML DSL: Using the pollEnrich() method to enrich content The pollEnrich command treats the resource endpoint as a consumer . Instead of sending an exchange to the resource endpoint, it polls the endpoint. By default, the poll returns immediately, if there is no exchange available from the resource endpoint. For example, the following route reads a file whose name is extracted from the header of an incoming JMS message: You can limit the time to wait for the file to be ready. The following example shows a maximum wait of 20 seconds: You can also specify an aggregation strategy for pollEnrich() , for example: The pollEnrich() method supports consumers that are configured with consumer.bridgeErrorHandler=true . This lets any exceptions from the poll propagate to the route error handler, which could, for example, retry the poll. Note Support for consumer.bridgeErrorHandler=true is new in Camel 2.18. This behavior is not supported in Camel 2.17. The resource exchange passed to the aggregation strategy's aggregate() method might be null if the poll times out before an exchange is received. Polling methods used by pollEnrich() The pollEnrich() method polls the consumer endpoint by calling one of the following polling methods: receiveNoWait() (This is the default.) receive() receive(long timeout) The pollEnrich() command's timeout argument (specified in milliseconds) determines which method to call, as follows: When the timeout is 0 or not specified, pollEnrich() calls receiveNoWait . When the timeout is negative, pollEnrich() calls receive . Otherwise, pollEnrich() calls receive(timeout) . If there is no data then the newExchange in the aggregation strategy is null. 
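Because the newExchange passed to the aggregation strategy is null when a pollEnrich() poll times out, a strategy used with pollEnrich() should guard against that case. The following Java sketch shows one way to do this; the class name and the fallback behavior (returning the original exchange unchanged) are illustrative choices, and the import path shown is the Camel 2.x location of AggregationStrategy (in Camel 3 it moved to org.apache.camel.AggregationStrategy).

```java
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class NullSafeEnrichStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange original, Exchange resource) {
        // pollEnrich() passes null here when the poll times out before any
        // exchange is received, so keep the original message unchanged.
        if (resource == null) {
            return original;
        }
        String originalBody = original.getIn().getBody(String.class);
        String resourceBody = resource.getIn().getBody(String.class);
        // Illustrative merge: append the polled content to the original body.
        original.getIn().setBody(originalBody + resourceBody);
        return original;
    }
}
```

An instance of this class can then be passed as the aggregation strategy argument in the pollEnrich() calls shown in the examples that follow.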
Examples of using the pollEnrich() method The following example shows enrichment of the message by loading the content from the inbox/data.txt file: Following is the same example in XML DSL: If the specified file does not exist then the message is empty. You can specify a timeout to wait (potentially forever) until a file exists or to wait up to a particular length of time. In the following example, the command waits no more than 5 seconds: Using dynamic URIs with pollEnrich() Starting with Camel 2.16, the enrich() and pollEnrich() methods support the use of dynamic URIs that are computed based on information from the current exchange. For example, to poll enrich from an endpoint that uses a header to indicate a SEDA queue name, you can do something like this: Following is the same example in XML DSL: Options supported by the pollEnrich() method The pollEnrich DSL command supports the following options: Name Default Value Description expression None Starting with Camel 2.16, this option is required. Specify an expression for configuring the URI of the external service to enrich from. You can use the Simple expression language, the Constant expression language, or any other language that can dynamically compute the URI from values in the current exchange. uri These options have been removed. Specify the expression option instead. In Camel 2.15 and earlier, specification of the uri option or the ref option was required. Each option specified the endpoint URI for the external service to enrich from. ref Refers to the endpoint for the external service to enrich from. You must use either uri or ref . strategyRef Refers to an AggregationStrategy to be used to merge the reply from the external service into a single outgoing message. By default, Camel uses the reply from the external service as the outgoing message. You can use a POJO as the AggregationStrategy . For additional information, see the documentation for the Aggregate pattern. strategyMethodName When using POJOs as the AggregationStrategy , specify this option to explicitly declare the name of the aggregation method. For details, see the Aggregate pattern. strategyMethodAllowNull false The default behavior is that the aggregate method is not used if there is no data to enrich. If this option is true then null values are used as the oldExchange when there is no data to enrich and you are using POJOs as the AggregationStrategy . For more information, see the Aggregate pattern. timeout -1 The maximum length of time, in milliseconds, to wait for a response when polling from the external service. The default behavior is that the pollEnrich() method calls the receive() method. Because receive() can block until there is a message available, the recommendation is to always specify a timeout. aggregateOnException false The default behavior is that the aggregate method is not used if there was an exception thrown while trying to retrieve the data to enrich from the resource. Setting this option to true allows end users to control what to do if there was an exception in the aggregate method. For example, it is possible to suppress the exception or set a custom message body cacheSize 1000 Specify this option to configure the cache size for the ConsumerCache , which caches consumers for reuse in the pollEnrich() operation. To turn off this cache, set the cacheSize option to -1 . ignoreInvalidEndpoint false Indicates whether or not to ignore an endpoint URI that cannot be resolved. 
The default behavior is that Camel throws an exception that identifies the invalid endpoint URI. 10.2. Content Filter Overview The content filter pattern describes a scenario where you need to filter out extraneous content from a message before delivering it to its intended recipient. For example, you might employ a content filter to strip out confidential information from a message. Figure 10.2. Content Filter Pattern A common way to filter messages is to use an expression in the DSL, written in one of the supported scripting languages (for example, XSLT, XQuery or JoSQL). Implementing a content filter A content filter is essentially an application of a message processing technique for a particular purpose. To implement a content filter, you can employ any of the following message processing techniques: Message translator - see Section 5.6, "Message Translator" . Processors - see Chapter 35, Implementing a Processor . Bean integration . XML configuration example The following example shows how to configure the same route in XML: Using an XPath filter You can also use XPath to filter out part of the message you are interested in: 10.3. Normalizer Overview The normalizer pattern is used to process messages that are semantically equivalent, but arrive in different formats. The normalizer transforms the incoming messages into a common format. In Apache Camel, you can implement the normalizer pattern by combining a Section 8.1, "Content-Based Router" , which detects the incoming message's format, with a collection of different Section 5.6, "Message Translator" , which transform the different incoming formats into a common format. Figure 10.3. Normalizer Pattern Java DSL example This example shows a Message Normalizer that converts two types of XML messages into a common format. Messages in this common format are then filtered. Using the Fluent Builders In this case we're using a Java bean as the normalizer. The class looks like this XML configuration example The same example in the XML DSL 10.4. Claim Check EIP Claim Check EIP The claim check EIP pattern, shown in Figure 10.4, "Claim Check Pattern" , allows you to replace the message content with a claim check (a unique key). Use the claim check EIP pattern to retrieve the message content at a later time. You can store the message content temporarily in a persistent store like a database or file system. This pattern is useful when the message content is very large (and, expensive to send around) and not all components require all the information. It can also be useful when you cannot trust the information with an outside party. In this case, use the Claim Check to hide the sensitive portions of data. The Camel implementation of the EIP pattern stores the message content temporarily in an internal memory store. Figure 10.4. Claim Check Pattern 10.4.1. Claim Check EIP Options The Claim Check EIP supports the options listed in the following table: Name Description Default Type operation Need to use the claim check operation. It supports the following operations: * Get - Gets (does not remove) the claim check by the given key. * GetAndRemove - Gets and removes the claim check by the given key. * Set - Sets a new claim check with the given key. It will be overridden if a key already exists. * Push - Sets a new claim check on the stack (does not use the key). * Pop - Gets the latest claim check from the stack (does not use the key). When using the Get , GetAndRemove , or Set operation you must specify a key. 
These operations will then store and retrieve the data using the key. Use these operations to store multiple pieces of data under different keys. However, the push and pop operations do not use a key but store the data in a stack structure. ClaimCheckOperation key To use a specific key for claim check-id. String filter Specify a filter to control the data that you want to merge back from the claim check repository. String strategyRef To use a custom AggregationStrategy instead of the default implementation. You cannot use both a custom aggregation strategy and configure data at the same time. String Filter Option Use the Filter option to define the data to merge back when using the Get or the Pop operations. Merge the data back by using an AggregationStrategy . The default strategy uses the filter option to easily specify the data to be merged back. The filter option takes a String value with the following syntax: body : To aggregate the message body attachments : To aggregate all the message attachments headers : To aggregate all the message headers header:pattern : To aggregate all the message headers that match the pattern The pattern rule supports wildcard and regular expression. Wildcard match (pattern ends with a * and the name starts with the pattern) Regular expression match To specify multiple rules, separate them by commas ( , ). Following are the basic filter examples to include the message body and all headers starting with foo : To merge the message body only: body To merge the message attachments only: attachments To merge headers only: headers To merge a header name foo only: header:foo If you specify the filter rule as empty or as wildcard, you can merge everything. For more information, see Filter what data to merge back . Note When you merge the data back, the system overwrites any existing data. Also, it stores the existing data. 10.4.2. Filter Option with Include and Exclude Pattern Following is the syntax that supports the prefixes that you can use to specify include, exclude, or remove options. + : to include (which is the default mode) - : to exclude (exclude takes precedence over include) -- : to remove (remove takes precedence) For example: To skip the message body and merge everything else, use -body To skip the message header foo and merge everything else, use -header:foo You can also instruct the system to remove headers when merging the data. For example, to remove all headers starting with bar, use --headers:bar* . Note Do not use both the include (+) and exclude (-) header:pattern at the same time. 10.4.3. Java Examples The following example shows the Push and Pop operations in action: Following is an example of using the Get and Set operations. The example uses the foo key. Note You can get the same data twice using the Get operation because it does not remove the data. However, if you want to get the data only once, use the GetAndRemove operation. The following example shows how to use the filter option where you only want to get back the foo or bar headers. 10.4.4. XML Examples The following example shows the Push and Pop operations in action. Following is an example of using the Get and Set operations. The example uses the foo key. Note You can get the same data twice by using the Get operation because it does not remove the data. However, if you want to get the data once, you can use the GetAndRemove operation. The following example shows how to use the filter option to get back the foo or bar headers.
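As a further illustration of the exclude prefix described in Section 10.4.2, the following Java sketch keeps the current message body on merge by excluding body from the filter. The route and endpoint names mirror the earlier examples and are illustrative only.

```java
from("direct:start")
    .to("mock:a")
    .claimCheck(ClaimCheckOperation.Push)
    .transform().constant("Bye World")
    .to("mock:b")
    // "-body" excludes the stored message body from the merge, so the current
    // "Bye World" body is kept while the stored headers are merged back.
    .claimCheck(ClaimCheckOperation.Pop, null, "-body")
    .to("mock:c");
```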
10.5. Sort Sort The sort pattern is used to sort the contents of a message body, assuming that the message body contains a list of items that can be sorted. By default, the contents of the message are sorted using a default comparator that handles numeric values or strings. You can provide your own comparator and you can specify an expression that returns the list to be sorted (the expression must be convertible to java.util.List ). Java DSL example The following example generates the list of items to sort by tokenizing on the line break character: You can pass in your own comparator as the second argument to sort() : XML configuration example You can configure the same routes in Spring XML. The following example generates the list of items to sort by tokenizing on the line break character: And to use a custom comparator, you can reference it as a Spring bean: Besides <simple> , you can supply an expression using any language you like, so long as it returns a list. Options The sort DSL command supports the following options: Name Default Value Description comparatorRef Refers to a custom java.util.Comparator to use for sorting the message body. Camel will by default use a comparator which does an A..Z sorting. 10.6. Transformer Transformer performs declarative transformation of the message according to the declared Input Type and/or Output Type on a route definition. The default Camel message implements DataTypeAware , which holds the message type represented by DataType . 10.6.1. How the Transformer works The route definition declares the Input Type and/or Output Type . If the Input Type and/or Output Type are different from the message type at runtime, the Camel internal processor looks for a Transformer. The Transformer transforms the current message type to the expected message type. Once the message is transformed successfully, or if the message is already in the expected type, the message data type is updated. 10.6.1.1. Data type format The format for the data type is scheme:name , where scheme is the type of data model such as java , xml or json and name is the data type name. Note If you only specify scheme then it matches all the data types with that scheme. 10.6.1.2. Supported Transformers Transformer Description Data Format Transformer Transforms by using Data Format Endpoint Transformer Transforms by using Endpoint Custom Transformer Transforms by using a custom transformer class. 10.6.1.3. Common Options All transformers have the following common options to specify the supported data type by the transformer. Important Either scheme or both fromType and toType must be specified. Name Description scheme Type of data model such as xml or json . For example, if xml is specified, the transformer is applied for all java -> xml and xml -> java transformation. fromType Data type to transform from. toType Data type to transform to. 10.6.1.4. DataFormat Transformer Options Name Description type Data Format type ref Reference to the Data Format ID An example to specify the bindy DataFormat type: Java DSL: BindyDataFormat bindy = new BindyDataFormat(); bindy.setType(BindyType.Csv); bindy.setClassType(com.example.Order.class); transformer() .fromType(com.example.Order.class) .toType("csv:CSVOrder") .withDataFormat(bindy); XML DSL: <dataFormatTransformer fromType="java:com.example.Order" toType="csv:CSVOrder"> <bindy id="csvdf" type="Csv" classType="com.example.Order"/> </dataFormatTransformer> 10.6.2.
Endpoint Transformer Options Name Description ref Reference to the Endpoint ID uri Endpoint URI An example to specify endpoint URI in Java DSL: transformer() .fromType("xml") .toType("json") .withUri("dozer:myDozer?mappingFile=myMapping.xml..."); An example to specify endpoint ref in XML DSL: <transformers> <endpointTransformer ref="myDozerEndpoint" fromType="xml" toType="json"/> </transformers> 10.6.3. Custom Transformer Options Note Transformer must be a subclass of org.apache.camel.spi.Transformer Name Description ref Reference to the custom Transformer bean ID className Fully qualified class name of the custom Transformer class An example to specify custom Transformer class: Java DSL: transformer() .fromType("xml") .toType("json") .withJava(com.example.MyCustomTransformer.class); XML DSL: <transformers> <customTransformer className="com.example.MyCustomTransformer" fromType="xml" toType="json"/> </transformers> 10.6.4. Transformer Example This example is in two parts, the first part declares the Endpoint Transformer which transforms the message. The second part shows how the transformer is applied to a route. 10.6.4.1. Part I Declares the Endpoint Transformer which uses xslt component to transform from xml:ABCOrder to xml:XYZOrder . Java DSL: transformer() .fromType("xml:ABCOrder") .toType("xml:XYZOrder") .withUri("xslt:transform.xsl"); XML DSL: <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <transformers> <endpointTransformer uri="xslt:transform.xsl" fromType="xml:ABCOrder" toType="xml:XYZOrder"/> </transformers> .... </camelContext> 10.6.4.2. Part II The above transformer is applied to the following route definition when direct:abc endpoint sends the message to direct:xyz : Java DSL: from("direct:abc") .inputType("xml:ABCOrder") .to("direct:xyz"); from("direct:xyz") .inputType("xml:XYZOrder") .to("somewhere:else"); XML DSL: <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:abc"/> <inputType urn="xml:ABCOrder"/> <to uri="direct:xyz"/> </route> <route> <from uri="direct:xyz"/> <inputType urn="xml:XYZOrder"/> <to uri="somewhere:else"/> </route> </camelContext> 10.7. Validator Validator performs declarative validation of the message according to the declared Input Type and/or Output Type on a route definition which declares the expected message type. Note The validation is performed only if the validate attribute on the type declaration is true. If the validate attribute is true on an Input Type and/or Output Type declaration, camel internal processor looks for a corresponding Validator from the registry. 10.7.1. Data type format The format for the data type is scheme:name , where scheme is the type of data model such as java , xml , or json and name is the data type name. 10.7.2. Supported Validators Validator Description Predicate Validator Validate by using Expression or Predicate Endpoint Validator Validate by forwarding to the Endpoint to be used with the validation component such as Validation Component or Bean Validation Component. Custom Validator Validate using custom validator class. Validator must be a subclass of org.apache.camel.spi.Validator 10.7.3. Common Option All validators must include the type option that specifies the Data type to validate. 10.7.4. Predicate Validator Option Name Description expression Expression or Predicate to use for validation. 
An example that specifies a validation predicate: Java DSL: validator() .type("csv:CSVOrder") .withExpression(bodyAs(String.class).contains("{name:XOrder}")); XML DSL: <predicateValidator Type="csv:CSVOrder"> <simple>USD{body} contains 'name:XOrder'</simple> </predicateValidator> 10.7.5. Endpoint Validator Options Name Description ref Reference to the Endpoint ID. uri Endpoint URI. An example that specifies endpoint URI in Java DSL: validator() .type("xml") .withUri("validator:xsd/schema.xsd"); An example that specifies endpoint ref in XML DSL: <validators> <endpointValidator uri="validator:xsd/schema.xsd" type="xml"/> </validators> Note The Endpoint Validator forwards the message to the specified endpoint. In above example, camel forwards the message to the validator: endpoint, which is a Validation Component . You can also use a different validation component, such as Bean Validation Component. 10.7.6. Custom Validator Options Note The Validator must be a subclass of org.apache.camel.spi.Validator Name Description ref Reference to the custom Validator bean ID. className Fully qualified class name of the custom Validator class. An example that specifies custom Validator class: Java DSL: validator() .type("json") .withJava(com.example.MyCustomValidator.class); XML DSL: <validators> <customValidator className="com.example.MyCustomValidator" type="json"/> </validators> 10.7.7. Validator Examples This example is in two parts, the first part declares the Endpoint Validator which validates the message. The second part shows how the validator is applied to a route. 10.7.7.1. Part I Declares the Endpoint Validator which uses validator component to validate from xml:ABCOrder . Java DSL: validator() .type("xml:ABCOrder") .withUri("validator:xsd/schema.xsd"); XML DSL: <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <validators> <endpointValidator uri="validator:xsd/schema.xsd" type="xml:ABCOrder"/> </validators> </camelContext> 10.7.7.2. Part II The above validator is applied to the following route definition when direct:abc endpoint receives the message. Note The inputTypeWithValidate is used instead of inputType in Java DSL, and the validate attribute on the inputType declaration is set to true in XML DSL: Java DSL: from("direct:abc") .inputTypeWithValidate("xml:ABCOrder") .log("USD{body}"); XML DSL: <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:abc"/> <inputType urn="xml:ABCOrder" validate="true"/> <log message="USD{body}"/> </route> </camelContext> 10.8. Validate Overview The validate pattern provides a convenient syntax to check whether the content of a message is valid. The validate DSL command takes a predicate expression as its sole argument: if the predicate evaluates to true , the route continues processing normally; if the predicate evaluates to false , a PredicateValidationException is thrown. Java DSL example The following route validates the body of the current message using a regular expression: You can also validate a message header - for example: And you can use validate with the simple expression language: XML DSL example To use validate in the XML DSL, the recommended approach is to use the simple expression language: You can also validate a message header - for example: | [
"from(\"activemq:My.Queue\"). to(\"velocity:com/acme/MyResponse.vm\"). to(\"activemq:Another.Queue\");",
"from(\"activemq:My.Queue\"). to(\"velocity:com/acme/MyResponse.vm\");",
"from(\"direct:start\").setBody(body().append(\" World!\")).to(\"mock:result\");",
"from(\"direct:start\").process(new Processor() { public void process(Exchange exchange) { Message in = exchange.getIn(); in.setBody(in.getBody(String.class) + \" World!\"); } }).to(\"mock:result\");",
"from(\"activemq:My.Queue\"). beanRef(\"myBeanName\", \"myMethodName\"). to(\"activemq:Another.Queue\");",
"<route> <from uri=\"activemq:Input\"/> <bean ref=\"myBeanName\" method=\"doTransform\"/> <to uri=\"activemq:Output\"/> </route>/>",
"AggregationStrategy aggregationStrategy = from(\"direct:start\") .enrich(\"direct:resource\", aggregationStrategy) .to(\"direct:result\"); from(\"direct:resource\")",
"public class ExampleAggregationStrategy implements AggregationStrategy { public Exchange aggregate(Exchange original, Exchange resource) { Object originalBody = original.getIn().getBody(); Object resourceResponse = resource.getOut().getBody(); Object mergeResult = ... // combine original body and resource response if (original.getPattern().isOutCapable()) { original.getOut().setBody(mergeResult); } else { original.getIn().setBody(mergeResult); } return original; } }",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <enrich strategyRef=\"aggregationStrategy\"> <constant>direct:resource</constant> <to uri=\"direct:result\"/> </route> <route> <from uri=\"direct:resource\"/> </route> </camelContext> <bean id=\"aggregationStrategy\" class=\"...\" />",
"from(\"direct:start\") .enrich(\"direct:resource\") .to(\"direct:result\");",
"<route> <from uri=\"direct:start\"/> <enrich uri=\"direct:resource\"/> <to uri=\"direct:result\"/> </route>",
"AggregationStrategy aggregationStrategy = from(\"direct:start\") .enrich(\"direct:resource\", aggregationStrategy) .to(\"direct:result\"); from(\"direct:resource\")",
"public class ExampleAggregationStrategy implements AggregationStrategy { public Exchange aggregate(Exchange original, Exchange resource) { Object originalBody = original.getIn().getBody(); Object resourceResponse = resource.getIn().getBody(); Object mergeResult = ... // combine original body and resource response if (original.getPattern().isOutCapable()) { original.getOut().setBody(mergeResult); } else { original.getIn().setBody(mergeResult); } return original; } }",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <enrich strategyRef=\"aggregationStrategy\"> <constant>direct:resource</constant> </enrich> <to uri=\"direct:result\"/> </route> <route> <from uri=\"direct:resource\"/> </route> </camelContext> <bean id=\"aggregationStrategy\" class=\"...\" />",
"from(\"direct:start\") .enrich().simple(\"http:myserver/USD{header.orderId}/order\") .to(\"direct:result\");",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <enrich> <simple>http:myserver/USD{header.orderId}/order</simple> </enrich> <to uri=\"direct:result\"/> </route>",
"from(\"activemq:queue:order\") .pollEnrich(\"file://order/data/additional?fileName=orderId\") .to(\"bean:processOrder\");",
"from(\"activemq:queue:order\") .pollEnrich(\"file://order/data/additional?fileName=orderId\", 20000) // timeout is in milliseconds .to(\"bean:processOrder\");",
".pollEnrich(\"file://order/data/additional?fileName=orderId\", 20000, aggregationStrategy)",
"from(\"direct:start\") .pollEnrich(\"file:inbox?fileName=data.txt\") .to(\"direct:result\");",
"<route> <from uri=\"direct:start\"/> <pollEnrich> <constant>file:inbox?fileName=data.txt\"</constant> </pollEnrich> <to uri=\"direct:result\"/> </route>",
"<route> <from uri=\"direct:start\"/> <pollEnrich timeout=\"5000\"> <constant>file:inbox?fileName=data.txt\"</constant> </pollEnrich> <to uri=\"direct:result\"/> </route>",
"from(\"direct:start\") .pollEnrich().simple(\"seda:USD{header.name}\") .to(\"direct:result\");",
"<route> <from uri=\"direct:start\"/> <pollEnrich> <simple>sedaUSD{header.name}</simple> </pollEnrich> <to uri=\"direct:result\"/> </route>",
"<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"activemq:My.Queue\"/> <to uri=\"xslt:classpath:com/acme/content_filter.xsl\"/> <to uri=\"activemq:Another.Queue\"/> </route> </camelContext>",
"<route> <from uri=\"activemq:Input\"/> <setBody><xpath resultType=\"org.w3c.dom.Document\">//foo:bar</xpath></setBody> <to uri=\"activemq:Output\"/> </route>",
"// we need to normalize two types of incoming messages from(\"direct:start\") .choice() .when().xpath(\"/employee\").to(\"bean:normalizer?method=employeeToPerson\") .when().xpath(\"/customer\").to(\"bean:normalizer?method=customerToPerson\") .end() .to(\"mock:result\");",
"// Java public class MyNormalizer { public void employeeToPerson(Exchange exchange, @XPath(\"/employee/name/text()\") String name) { exchange.getOut().setBody(createPerson(name)); } public void customerToPerson(Exchange exchange, @XPath(\"/customer/@name\") String name) { exchange.getOut().setBody(createPerson(name)); } private String createPerson(String name) { return \"<person name=\\\"\" + name + \"\\\"/>\"; } }",
"<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <choice> <when> <xpath>/employee</xpath> <to uri=\"bean:normalizer?method=employeeToPerson\"/> </when> <when> <xpath>/customer</xpath> <to uri=\"bean:normalizer?method=customerToPerson\"/> </when> </choice> <to uri=\"mock:result\"/> </route> </camelContext> <bean id=\"normalizer\" class=\"org.apache.camel.processor.MyNormalizer\"/>",
"body, header:foo*",
"from(\"direct:start\") .to(\"mock:a\") .claimCheck(ClaimCheckOperation.Push) .transform().constant(\"Bye World\") .to(\"mock:b\") .claimCheck(ClaimCheckOperation.Pop) .to(\"mock:c\");",
"from(\"direct:start\") .to(\"mock:a\") .claimCheck(ClaimCheckOperation.Set, \"foo\") .transform().constant(\"Bye World\") .to(\"mock:b\") .claimCheck(ClaimCheckOperation.Get, \"foo\") .to(\"mock:c\") .transform().constant(\"Hi World\") .to(\"mock:d\") .claimCheck(ClaimCheckOperation.Get, \"foo\") .to(\"mock:e\");",
"from(\"direct:start\") .to(\"mock:a\") .claimCheck(ClaimCheckOperation.Push) .transform().constant(\"Bye World\") .setHeader(\"foo\", constant(456)) .removeHeader(\"bar\") .to(\"mock:b\") // only merge in the message headers foo or bar .claimCheck(ClaimCheckOperation.Pop, null, \"header:(foo|bar)\") .to(\"mock:c\");",
"<route> <from uri=\"direct:start\"/> <to uri=\"mock:a\"/> <claimCheck operation=\"Push\"/> <transform> <constant>Bye World</constant> </transform> <to uri=\"mock:b\"/> <claimCheck operation=\"Pop\"/> <to uri=\"mock:c\"/> </route>",
"<route> <from uri=\"direct:start\"/> <to uri=\"mock:a\"/> <claimCheck operation=\"Set\" key=\"foo\"/> <transform> <constant>Bye World</constant> </transform> <to uri=\"mock:b\"/> <claimCheck operation=\"Get\" key=\"foo\"/> <to uri=\"mock:c\"/> <transform> <constant>Hi World</constant> </transform> <to uri=\"mock:d\"/> <claimCheck operation=\"Get\" key=\"foo\"/> <to uri=\"mock:e\"/> </route>",
"<route> <from uri=\"direct:start\"/> <to uri=\"mock:a\"/> <claimCheck operation=\"Push\"/> <transform> <constant>Bye World</constant> </transform> <setHeader headerName=\"foo\"> <constant>456</constant> </setHeader> <removeHeader headerName=\"bar\"/> <to uri=\"mock:b\"/> <!-- only merge in the message headers foo or bar --> <claimCheck operation=\"Pop\" filter=\"header:(foo|bar)\"/> <to uri=\"mock:c\"/> </route>",
"from(\"file://inbox\").sort(body().tokenize(\"\\n\")).to(\"bean:MyServiceBean.processLine\");",
"from(\"file://inbox\").sort(body().tokenize(\"\\n\"), new MyReverseComparator()).to(\"bean:MyServiceBean.processLine\");",
"<route> <from uri=\"file://inbox\"/> <sort> <simple>body</simple> </sort> <beanRef ref=\"myServiceBean\" method=\"processLine\"/> </route>",
"<route> <from uri=\"file://inbox\"/> <sort comparatorRef=\"myReverseComparator\"> <simple>body</simple> </sort> <beanRef ref=\"MyServiceBean\" method=\"processLine\"/> </route> <bean id=\"myReverseComparator\" class=\"com.mycompany.MyReverseComparator\"/>",
"BindyDataFormat bindy = new BindyDataFormat(); bindy.setType(BindyType.Csv); bindy.setClassType(com.example.Order.class); transformer() .fromType(com.example.Order.class) .toType(\"csv:CSVOrder\") .withDataFormat(bindy);",
"<dataFormatTransformer fromType=\"java:com.example.Order\" toType=\"csv:CSVOrder\"> <bindy id=\"csvdf\" type=\"Csv\" classType=\"com.example.Order\"/> </dataFormatTransformer>",
"transformer() .fromType(\"xml\") .toType(\"json\") .withUri(\"dozer:myDozer?mappingFile=myMapping.xml...\");",
"<transformers> <endpointTransformer ref=\"myDozerEndpoint\" fromType=\"xml\" toType=\"json\"/> </transformers>",
"transformer() .fromType(\"xml\") .toType(\"json\") .withJava(com.example.MyCustomTransformer.class);",
"<transformers> <customTransformer className=\"com.example.MyCustomTransformer\" fromType=\"xml\" toType=\"json\"/> </transformers>",
"transformer() .fromType(\"xml:ABCOrder\") .toType(\"xml:XYZOrder\") .withUri(\"xslt:transform.xsl\");",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <transformers> <endpointTransformer uri=\"xslt:transform.xsl\" fromType=\"xml:ABCOrder\" toType=\"xml:XYZOrder\"/> </transformers> . </camelContext>",
"from(\"direct:abc\") .inputType(\"xml:ABCOrder\") .to(\"direct:xyz\"); from(\"direct:xyz\") .inputType(\"xml:XYZOrder\") .to(\"somewhere:else\");",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:abc\"/> <inputType urn=\"xml:ABCOrder\"/> <to uri=\"direct:xyz\"/> </route> <route> <from uri=\"direct:xyz\"/> <inputType urn=\"xml:XYZOrder\"/> <to uri=\"somewhere:else\"/> </route> </camelContext>",
"validator() .type(\"csv:CSVOrder\") .withExpression(bodyAs(String.class).contains(\"{name:XOrder}\"));",
"<predicateValidator Type=\"csv:CSVOrder\"> <simple>USD{body} contains 'name:XOrder'</simple> </predicateValidator>",
"validator() .type(\"xml\") .withUri(\"validator:xsd/schema.xsd\");",
"<validators> <endpointValidator uri=\"validator:xsd/schema.xsd\" type=\"xml\"/> </validators>",
"validator() .type(\"json\") .withJava(com.example.MyCustomValidator.class);",
"<validators> <customValidator className=\"com.example.MyCustomValidator\" type=\"json\"/> </validators>",
"validator() .type(\"xml:ABCOrder\") .withUri(\"validator:xsd/schema.xsd\");",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <validators> <endpointValidator uri=\"validator:xsd/schema.xsd\" type=\"xml:ABCOrder\"/> </validators> </camelContext>",
"from(\"direct:abc\") .inputTypeWithValidate(\"xml:ABCOrder\") .log(\"USD{body}\");",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:abc\"/> <inputType urn=\"xml:ABCOrder\" validate=\"true\"/> <log message=\"USD{body}\"/> </route> </camelContext>",
"from(\"jms:queue:incoming\") .validate(body(String.class).regex(\"^\\\\w{10}\\\\,\\\\d{2}\\\\,\\\\w{24}USD\")) .to(\"bean:MyServiceBean.processLine\");",
"from(\"jms:queue:incoming\") .validate(header(\"bar\").isGreaterThan(100)) .to(\"bean:MyServiceBean.processLine\");",
"from(\"jms:queue:incoming\") .validate(simple(\"USD{in.header.bar} == 100\")) .to(\"bean:MyServiceBean.processLine\");",
"<route> <from uri=\"jms:queue:incoming\"/> <validate> <simple>USD{body} regex ^\\\\w{10}\\\\,\\\\d{2}\\\\,\\\\w{24}USD</simple> </validate> <beanRef ref=\"myServiceBean\" method=\"processLine\"/> </route> <bean id=\"myServiceBean\" class=\"com.mycompany.MyServiceBean\"/>",
"<route> <from uri=\"jms:queue:incoming\"/> <validate> <simple>USD{in.header.bar} == 100</simple> </validate> <beanRef ref=\"myServiceBean\" method=\"processLine\"/> </route> <bean id=\"myServiceBean\" class=\"com.mycompany.MyServiceBean\"/>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/msgtran |
Chapter 1. Introduction to the Red Hat Quay Operator | Chapter 1. Introduction to the Red Hat Quay Operator Use the content in this chapter to execute the following: Install Red Hat Quay on OpenShift Container Platform using the Red Hat Quay Operator Configure managed, or unmanaged, object storage Configure unmanaged components, such as the database, Redis, routes, TLS, and so on Deploy the Red Hat Quay registry on OpenShift Container Platform using the Red Hat Quay Operator Use advanced features supported by Red Hat Quay Upgrade the Red Hat Quay registry by using the Red Hat Quay Operator 1.1. Red Hat Quay Operator components Red Hat Quay has many dependencies. These dependencies include a database, object storage, Redis, and others. The Red Hat Quay Operator manages an opinionated deployment of Red Hat Quay and its dependencies on Kubernetes. These dependencies are treated as components and are configured through the QuayRegistry API. In the QuayRegistry custom resource, the spec.components field configures components. Each component contains two fields: kind (the name of the component), and managed (a boolean that addresses whether the component lifecycle is handled by the Red Hat Quay Operator). By default, all components are managed and auto-filled upon reconciliation for visibility: Example QuayRegistry resource apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: quay managed: true - kind: postgres managed: true - kind: clair managed: true - kind: redis managed: true - kind: horizontalpodautoscaler managed: true - kind: objectstorage managed: true - kind: route managed: true - kind: mirror managed: true - kind: monitoring managed: true - kind: tls managed: true - kind: clairpostgres managed: true 1.2. Using managed components Unless your QuayRegistry custom resource specifies otherwise, the Red Hat Quay Operator uses defaults for the following managed components: quay: Holds overrides for deployment of Red Hat Quay on OpenShift Container Platform, for example, environment variables and number of replicas. This component is new as of Red Hat Quay 3.7 and cannot be set to unmanaged. postgres: For storing the registry metadata. As of Red Hat Quay 3.9, it uses a version of PostgreSQL 13 from Software Collections . Note When upgrading from Red Hat Quay 3.8 to 3.9, the Operator automatically handles upgrading PostgreSQL 10 to PostgreSQL 13. This upgrade is required. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. clair: Provides image vulnerability scanning. redis: Stores live builder logs and the Red Hat Quay tutorial. Also includes the locking mechanism that is required for garbage collection. horizontalpodautoscaler: Adjusts the number of Quay pods depending on memory/CPU consumption. objectstorage: For storing image layer blobs, utilizes the ObjectBucketClaim Kubernetes API which is provided by NooBaa or Red Hat OpenShift Data Foundation. route: Provides an external entrypoint to the Red Hat Quay registry from outside of OpenShift Container Platform. mirror: Configures repository mirror workers to support optional repository mirroring. monitoring: Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting Quay pods. tls: Configures whether Red Hat Quay or OpenShift Container Platform handles SSL/TLS. clairpostgres: Configures a managed Clair database.
This is a separate database from the PostgreSQL database used to deploy Red Hat Quay. The Red Hat Quay Operator handles any required configuration and installation work needed for Red Hat Quay to use the managed components. If the opinionated deployment performed by the Red Hat Quay Operator is unsuitable for your environment, you can provide the Red Hat Quay Operator with unmanaged resources, or overrides, as described in the following sections. 1.3. Using unmanaged components for dependencies If you have existing components such as PostgreSQL, Redis, or object storage that you want to use with Red Hat Quay, you first configure them within the Red Hat Quay configuration bundle, or the config.yaml file. Then, they must be referenced in your QuayRegistry bundle as a Kubernetes Secret while indicating which components are unmanaged. Note If you are using an unmanaged PostgreSQL database, and the version is PostgreSQL 10, it is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. For more information, see the PostgreSQL Versioning Policy . See the following sections for configuring unmanaged components: Using an existing PostgreSQL database Using unmanaged Horizontal Pod Autoscalers Using unmanaged storage Using an unmanaged NooBaa instance Using an unmanaged Redis database Disabling the route component Disabling the monitoring component Disabling the mirroring component 1.4. Config bundle secret The spec.configBundleSecret field is a reference to the metadata.name of a Secret in the same namespace as the QuayRegistry resource. This Secret must contain a config.yaml key/value pair. The config.yaml file is a Red Hat Quay config.yaml file. This field is optional, and is auto-filled by the Red Hat Quay Operator if not provided. If provided, it serves as the base set of config fields which are later merged with other fields from any managed components to form a final output Secret , which is then mounted into the Red Hat Quay application pods. A hedged command-line sketch of creating such a secret is provided below. 1.5. Prerequisites for Red Hat Quay on OpenShift Container Platform Consider the following prerequisites prior to deploying Red Hat Quay on OpenShift Container Platform using the Red Hat Quay Operator. 1.5.1. OpenShift Container Platform cluster To deploy the Red Hat Quay Operator, you must have an OpenShift Container Platform 4.5 or later cluster and access to an administrative account. The administrative account must have the ability to create namespaces at the cluster scope. 1.5.2. Resource Requirements Each Red Hat Quay application pod has the following resource requirements: 8 Gi of memory 2000 millicores of CPU The Red Hat Quay Operator creates at least one application pod per Red Hat Quay deployment it manages. Ensure your OpenShift Container Platform cluster has sufficient compute resources for these requirements. 1.5.3. Object Storage By default, the Red Hat Quay Operator uses the ObjectBucketClaim Kubernetes API to provision object storage. Consuming this API decouples the Red Hat Quay Operator from any vendor-specific implementation. Red Hat OpenShift Data Foundation provides this API through its NooBaa component, which is used as an example throughout this documentation.
Red Hat Quay can be manually configured to use multiple storage cloud providers, including the following: Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Red Hat Quay) Microsoft Azure Blob Storage Google Cloud Storage Ceph Object Gateway (RADOS) OpenStack Swift CloudFront + S3 For a complete list of object storage providers, see the Quay Enterprise 3.x support matrix . 1.5.4. StorageClass When deploying Quay and Clair PostgreSQL databases using the Red Hat Quay Operator, a default StorageClass is configured in your cluster. The default StorageClass used by the Red Hat Quay Operator provisions the Persistent Volume Claims (PVCs) required by the Quay and Clair databases. These PVCs are used to store data persistently, ensuring that your Red Hat Quay registry and Clair vulnerability scanner remain available and maintain their state across restarts or failures. Before proceeding with the installation, verify that a default StorageClass is configured in your cluster to ensure seamless provisioning of storage for Quay and Clair components. | [
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: quay managed: true - kind: postgres managed: true - kind: clair managed: true - kind: redis managed: true - kind: horizontalpodautoscaler managed: true - kind: objectstorage managed: true - kind: route managed: true - kind: mirror managed: true - kind: monitoring managed: true - kind: tls managed: true - kind: clairpostgres managed: true"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-concepts |
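The spec.configBundleSecret workflow described in section 1.4 above starts from an existing Red Hat Quay config.yaml file. The following command-line sketch illustrates one way to create that secret and to confirm that a default StorageClass exists before deployment; the secret name, namespace, and file path are illustrative assumptions rather than values mandated by the Red Hat Quay Operator, and the QuayRegistry example shown earlier already references a secret named config-bundle-secret.

# Create the config bundle secret from a local config.yaml (names and paths are example values)
oc create secret generic config-bundle-secret \
    --from-file=config.yaml=./config.yaml \
    -n quay-enterprise

# Reference the secret from the QuayRegistry resource shown in section 1.1:
#   spec:
#     configBundleSecret: config-bundle-secret

# Verify that a default StorageClass is configured, as recommended in section 1.5.4
oc get storageclass

After the secret exists, applying the QuayRegistry manifest causes the Red Hat Quay Operator to merge these base fields with the fields generated for any managed components into the final output Secret that is mounted into the application pods.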
3.2. Procedure Result Caching | 3.2. Procedure Result Caching Cached virtual procedure results are used automatically when a matching set of parameter values is detected for the same procedure execution. Usage of the cached results may be bypassed when used with the OPTION NOCACHE clause. To indicate that a virtual procedure is to be cached, its definition must include a Cache Hint. Results will be cached with the default ttl. The pref_mem and ttl options of the cache hint may also be used for procedure caching. Procedure results cache keys include the input parameter values. To prevent one procedure from filling the cache, at most 256 cache keys may be created per procedure per VDB. A cached procedure will always produce all of its results prior to allowing those results to be consumed and placed in the cache. This differs from normal procedure execution which in some situations allows the returned results to be consumed in a streaming manner. | [
"/*+ cache */ BEGIN END"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/procedure_result_caching |
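The cache hint shown in the command listing above ( /*+ cache */ BEGIN END ) can carry the pref_mem and ttl options mentioned in section 3.2. The following sketch is illustrative only: the procedure body, the ttl value, and the assumption that ttl is expressed in milliseconds should be checked against the caching guide for your release before use.

/*+ cache(pref_mem ttl:300000) */
BEGIN
    SELECT OrderID, OrderStatus FROM Orders WHERE CustomerID = custId;
END

A caller that needs fresh results for a single execution can bypass the cached entry with the OPTION NOCACHE clause, as described above; cached entries are otherwise keyed by the input parameter values, subject to the 256-key-per-procedure limit.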
Chapter 8. Linux Capabilities and Seccomp | Chapter 8. Linux Capabilities and Seccomp Namespaces are one of the building blocks of isolation used by docker-formatted containers. They provide an environment for a process that prevents it from seeing or interacting with other processes. For example, a process inside a container can have PID 1, and the same process can have a normal PID outside of a container. The process ID (PID) namespace is the mechanism which remaps PIDs inside a container. Detailed information about namespaces can be found in the Overview of Containers in Red Hat Systems guide. However, containers can still access some resources from the host such as the kernel and kernel modules, the /proc file system and the system time. The Linux Capabilities and seccomp features can limit the access of containerized processes to these system features. 8.1. Linux Capabilities The Linux capabilities feature breaks up the privileges available to processes run as the root user into smaller groups of privileges. This way a process running with root privilege can be limited to get only the minimal permissions it needs to perform its operation. Docker supports Linux capabilities as part of the docker run command, with --cap-add and --cap-drop . By default, a container is started with several allowed capabilities, which can be dropped. Other permissions can be added manually. Both --cap-add and --cap-drop support the ALL value, to allow or drop all capabilities. The following list contains all capabilities that are enabled by default when you run a docker container, with their descriptions from the capabilities(7) man page: CHOWN - Make arbitrary changes to file UIDs and GIDs DAC_OVERRIDE - Discretionary access control (DAC) - Bypass file read, write, and execute permission checks. FSETID - Don't clear set-user-ID and set-group-ID mode bits when a file is modified; set the set-group-ID bit for a file whose GID does not match the file system or any of the supplementary GIDs of the calling process. FOWNER - Bypass permission checks on operations that normally require the file system UID of the process to match the UID of the file, excluding those operations covered by CAP_DAC_OVERRIDE and CAP_DAC_READ_SEARCH . MKNOD - Create special files using mknod(2) . NET_RAW - Use RAW and PACKET sockets; bind to any address for transparent proxying. SETGID - Make arbitrary manipulations of process GIDs and supplementary GID list; forge GID when passing socket credentials via UNIX domain sockets; write a group ID mapping in a user namespace. SETUID - Make arbitrary manipulations of process UIDs; forge UID when passing socket credentials via UNIX domain sockets; write a user ID mapping in a user namespace. SETFCAP - Set file capabilities. SETPCAP - If file capabilities are not supported: grant or remove any capability in the caller's permitted capability set to or from any other process. NET_BIND_SERVICE - Bind a socket to Internet domain privileged ports (port numbers less than 1024). SYS_CHROOT - Use chroot(2) to change to a different root directory. KILL - Bypass permission checks for sending signals. This includes use of the ioctl(2) KDSIGACCEPT operation. AUDIT_WRITE - Write records to kernel auditing log. For most applications in containers, from this default list, you can drop the following: AUDIT_WRITE , MKNOD , SETFCAP , SETPCAP .
The command will be similar to the following: The rest of the capabilities are not enabled by default and can be added according to your application's needs. You can see the full list in the capabilities(7) man page. A good strategy is to drop all capabilities and add the needed ones back: Important The minimum capabilities required depend on the application, and figuring those out can take some time and testing. Do not use the SYS_ADMIN capability unless specifically required by the application. Although capabilities break down the root powers into smaller chunks, SYS_ADMIN by itself grants quite a big part of the capabilities and it could potentially present more attack surface. EXAMPLE #1 If you are building a container which runs the Network Time Protocol (NTP) daemon, ntpd , you will need to add SYS_TIME so this container can modify the host's system time. Otherwise the container will not run. Use this command: EXAMPLE #2 If you want your container to be able to modify network states, you need to add the NET_ADMIN capability: This command limits the number of waiting new connections. Note You cannot modify the capabilities of an already running container. A short sketch for verifying a container's effective capabilities is provided below. 8.2. Limiting syscalls with seccomp Secure Computing Mode (seccomp) is a kernel feature that allows you to filter system calls to the kernel from a container. The combination of restricted and allowed calls is arranged in profiles, and you can pass different profiles to different containers. Seccomp provides more fine-grained control than capabilities, giving an attacker a limited number of syscalls from the container. The default seccomp profile for docker is a JSON file and can be viewed here: https://github.com/docker/docker/blob/master/profiles/seccomp/default.json . It blocks 44 system calls out of more than 300 available. Making the list stricter would be a trade-off with application compatibility. A table with a significant part of the blocked calls and the reasoning for blocking can be found here: https://docs.docker.com/engine/security/seccomp/ . Seccomp uses the Berkeley Packet Filter (BPF) system, which is programmable on the fly so you can make a custom filter. You can also limit a certain syscall by customizing the conditions on how or when it should be limited. A seccomp filter replaces the syscall with a pointer to a BPF program, which will execute that program instead of the syscall. All children of a process with this filter will inherit the filter as well. The docker option which is used to operate with seccomp is --security-opt . To explicitly use the default policy for a container, the command will be: If you want to specify your own policy, point the option to your custom file:
"docker run --cap-drop AUDIT_WRITE --cap-drop MKNOD --cap-drop SETFCAP --cap-drop SETPCAP <container> <command>",
"docker run --cap-drop ALL --cap-add SYS_TIME ntpd /bin/sh",
"docker run -d --cap-add SYS_TIME ntpd",
"docker run --cap-add NET_ADMIN <image_name> sysctl net.core.somaxconn = 256",
"docker run --security-opt seccomp=/path/to/default/profile.json <container>",
"docker run --security-opt seccomp=/path/to/custom/profile.json <container>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/container_security_guide/linux_capabilities_and_seccomp |
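The capability and seccomp options from the command listing above can be checked from inside a running container. The following sketch is hedged: the image name is a placeholder, the capsh utility (part of libcap) must be installed where it is run, and the seccomp profile field names can differ between Docker versions, so treat the JSON below as illustrative rather than as the canonical profile format.

# Show the effective capability bitmask of PID 1 inside a restricted container
docker run --rm --cap-drop ALL --cap-add NET_BIND_SERVICE <image_name> grep CapEff /proc/1/status

# Decode a capability bitmask on the host (0x400 corresponds to cap_net_bind_service)
capsh --decode=0000000000000400

# Write a minimal custom seccomp profile that rejects chmod-style syscalls
# (field names are illustrative; older Docker releases use a slightly different schema)
cat > /tmp/no-chmod.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    { "names": ["chmod", "fchmod", "fchmodat"], "action": "SCMP_ACT_ERRNO" }
  ]
}
EOF

# Run with the custom profile; the chmod call should now fail with an error
docker run --rm --security-opt seccomp=/tmp/no-chmod.json <image_name> chmod 600 /etc/hostname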
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/making-open-source-more-inclusive |
Preface | Preface As a developer or system administrator, you can integrate Red Hat Process Automation Manager with other products and components, such as Spring Boot, Red Hat Single Sign-On, and other supported products. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/pr01
5.2. The Virtual Machine Manager Interface | 5.2. The Virtual Machine Manager Interface The following sections provide information about the Virtual Machine Manager user interface. The user interface includes The Virtual Machine Manager main window and The Virtual Machine window . 5.2.1. The Virtual Machine Manager Main Window The following figure shows the Virtual Machine Manager main window interface. Figure 5.2. The Virtual Machine Manager window The Virtual Machine Manager main window title bar displays Virtual Machine Manager . 5.2.1.1. The main window menu bar The following table lists the entries in the Virtual Machine Manager main window menus. Table 5.1. Virtual Machine Manager main window menus Menu name Menu item Description File Add Connection Opens the Add Connection dialog to connect to a local or remote hypervisor. For more information, see Adding a Remote Connection in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. New Virtual Machine Opens the New VM wizard to create a new guest virtual machine. For more information, see Creating Guests with virt-manager in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. Close Closes the Virtual Machine Manager window without closing any Virtual Machine windows. Running virtual machines are not stopped. Exit Closes the Virtual Machine Manager and all Virtual Machine windows. Running virtual machines are not stopped. Edit Connection Details Opens the Connection Details window for the selected connection. Virtual Machine Details Opens the Virtual Machine window for the selected virtual machine. For more information, see The Virtual Machine pane . Delete Deletes the selected connection or virtual machine. Preferences Opens the Preferences dialog box for configuring Virtual Machine Manager options. View Graph Guest CPU Usage Host CPU Usage Memory Usage Disk I/O Network I/O Toggles displays of the selected parameter for the virtual machines in the Virtual Machine Manager main window. Help About Displays the About window with information about the Virtual Machine Manager. 5.2.1.2. The main window toolbar The following table lists the icons in the Virtual Machine Manager main window. Table 5.2. Virtual Machine Manager main window toolbar Icon Description Opens the New VM wizard to create a new guest virtual machine. Opens the Virtual Machine window for the selected virtual machine. Starts the selected virtual machine. Pauses the selected virtual machine. Stops the selected virtual machine. Opens a menu to select one of the following actions to perform on the selected virtual machine: Reboot - Reboots the selected virtual machine. Shut Down - Shuts down the selected virtual machine. Force Reset - Forces the selected virtual machine to shut down and restart. Force Off - Forces the selected virtual machine to shut down. Save - Saves the state of the selected virtual machine to a file. For more information, see Saving a Guest Virtual Machine's Configuration in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. 5.2.1.3. The Virtual Machine list The virtual machine list displays a list of virtual machines to which the Virtual Machine Manager is connected. The virtual machines in the list are grouped by connection. You can sort the list by clicking on the header of a table column. Figure 5.3. The Virtual Machine list The virtual machine list displays graphs with information about the resources being used by each virtual machine.
You make resources available for display from the Polling tab of the Preferences dialog in the Edit menu. The following is a list of the resources that can be displayed in the virtual machine list: CPU usage Host CPU usage Memory usage Disk I/O Network I/O You can select the resources to display using the Graph menu item in the View menu. 5.2.2. The Virtual Machine Window This section provides information about the Virtual Machine window interface. Figure 5.4. The Virtual Machine window The title bar displays the name of the virtual machine and the connection that it uses. 5.2.2.1. The Virtual Machine window menu bar The following table lists the entries in the Virtual Machine window menus. Table 5.3. Virtual Machine window menus Menu name Menu item Description File View Manager Opens the main Virtual Machine Manager window. Close Closes only the Virtual Machine window without stopping the virtual machine. Exit Closes all the Virtual Machine Manager windows. Running virtual machines are not stopped. Virtual Machine Run Runs the virtual machine. This option is only available if the virtual machine is not running. Pause Pauses the virtual machine. This option is only available if the virtual machine is already running. Shut Down Opens a menu to select one of the following actions to perform on the virtual machine: Reboot - Reboots the virtual machine. Shut Down - Shuts down the virtual machine. Force Reset - Forces the virtual machine to shut down and restart. Force Off - Forces the virtual machine to shut down. Save - Saves the state of the virtual machine to a file. Clone Creates a clone of the virtual machine. For more information, see Cloning Guests with virt-manager in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. Migrate Opens the Migrate the virtual machine dialog to migrate the virtual machine to a different host. For more information, see Migrating with virt-manager in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. Delete Deletes the virtual machine. Take Screenshot Takes a screenshot of the virtual machine console. Redirect USB Device Opens the Select USB devices for redirection dialog to select USB devices to redirect. For more information, see USB Redirection in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. View Console Opens the Console display in the Virtual Machine pane. Details Opens the Details display in the Virtual Machine pane. For more information, see The virtual machine details window . Snapshots Opens the Snapshots display in the Virtual Machine pane. For more information, see The snapshots window . Fullscreen Displays the virtual machine console in full screen mode. Resize to VM Resizes the display on the full screen to the size and resolution configured for the virtual machine. Scale Display Scales the display of the virtual machine based on the selection of the following sub-menu items: Always - The display of the virtual machine is always scaled to the Virtual Machine window. Only when Fullscreen - The display of the virtual machine is only scaled to the Virtual Machine window when the Virtual Machine window is in Full screen mode. Never - The display of the virtual machine is never scaled to the Virtual Machine window. Auto resize VM with window - The display of the virtual machine resizes automatically when the Virtual Machine window is resized. Text Consoles Displays the virtual machine display selected in the list.
Examples of virtual machine displays include Serial 1 and Graphical Console Spice . Toolbar Toggles the display of the Virtual Machine window toolbar. Send Key Ctrl+Alt+Backspace Ctrl+Alt+Delete Ctrl+Alt+F1 Ctrl+Alt+F2 Ctrl+Alt+F3 Ctrl+Alt+F4 Ctrl+Alt+F5 Ctrl+Alt+F6 Ctrl+Alt+F7 Ctrl+Alt+F8 Ctrl+Alt+F9 Ctrl+Alt+F10 Ctrl+Alt+F11 Ctrl+Alt+F12 Ctrl+Alt+Printscreen Sends the selected key to the virtual machine. 5.2.2.2. The Virtual Machine window toolbar The following table lists the icons in the Virtual Machine window. Table 5.4. Virtual Machine window toolbar Icon Description Displays the graphical console for the virtual machine. Displays the details pane for the virtual machine. Starts the selected virtual machine. Pauses the selected virtual machine. Stops the selected virtual machine. Opens a menu to select one of the following actions to perform on the selected virtual machine: Reboot - Reboots the selected virtual machine. Shut Down - Shuts down the selected virtual machine. Force Reset - Forces the selected virtual machine to shut down and restart. Force Off - Forces the selected virtual machine to shut down. Save - Saves the state of the selected virtual machine to a file. Opens the Snapshots display in the Virtual Machine pane. Displays the virtual machine console in full screen mode. 5.2.2.3. The Virtual Machine pane The Virtual Machine pane displays one of the following: The virtual machine console The virtual machine details window The snapshots window The virtual machine console The virtual machine console shows the graphical output of the virtual machine. Figure 5.5. The Virtual Machine console You can interact with the virtual machine console using the mouse and keyboard in the same manner you interact with a real machine. The display in the virtual machine console reflects the activities being performed on the virtual machine. The virtual machine details window The virtual machine details window provides detailed information about the virtual machine, its hardware and configuration. Figure 5.6. The Virtual Machine details window The virtual machine details window includes a list of virtual machine parameters. When a parameter in the list is selected, information about the selected parameter appears on the right side of the virtual machine details window. You can also add and configure hardware using the virtual machine details window. For more information on the virtual machine details window, see The Virtual Hardware Details Window in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. The snapshots window The virtual machine snapshots window provides a list of snapshots created for the virtual machine. Figure 5.7. The Virtual Machine snapshots window The virtual machine snapshots window includes a list of snapshots saved for the virtual machine. When a snapshot in the list is selected, details about the selected snapshot, including its state, description, and a screenshot, appear on the right side of the virtual machine snapshots window. You can add, delete, and run snapshots using the virtual machine snapshots window. For more information about managing snapshots, see Managing Snapshots in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/virt-manager-user-interface-description
Chapter 7. Common Features | Chapter 7. Common Features 7.1. Element Property Icons Note Throughout this guide, the elements of each resource are detailed in tables. These tables include a properties column, displaying icons depicting element properties. The meaning of these icons is shown in Table 7.1, "Element property icons" . Table 7.1. Element property icons Property Description Icon Required for creation These elements must be included in the client-provided representation of a resource on creation, but are not mandatory for an update of a resource. Non-updatable These elements cannot have their value changed when updating a resource. Include these elements in a client-provided representation on update only if their values are not altered by the API user. If altered, the API reports an error. Read-only These elements are read-only. Values for read-only elements are not created or modified. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/chap-Common_Features |
17.4. RAID Support in the Installer | 17.4. RAID Support in the Installer The Anaconda installer will automatically detect any hardware and firmware RAID sets on a system, making them available for installation. Anaconda also supports software RAID using mdraid , and can recognize existing mdraid sets. Anaconda provides utilities for creating RAID sets during installation; however, these utilities only allow partitions (as opposed to entire disks) to be members of new sets. To use an entire disk for a set, simply create a partition on it spanning the entire disk, and use that partition as the RAID set member. When the root file system uses a RAID set, Anaconda will add special kernel command-line options to the bootloader configuration telling the initrd which RAID set(s) to activate before searching for the root file system. For instructions on configuring RAID during installation, refer to the Red Hat Enterprise Linux 6 Installation Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/raidinstall |
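After an installation that places the root file system on an mdraid set, the assembled sets and the kernel command-line options mentioned above can be inspected from the running system. This is a hedged sketch: the device name /dev/md0 is an example, and the exact dracut option name added by Anaconda (for example rd_MD_UUID on Red Hat Enterprise Linux 6, rd.md.uuid on later releases) should be confirmed on your own system.

# List the software RAID sets that the kernel has assembled
cat /proc/mdstat

# Show the detailed state of one set (example device name)
mdadm --detail /dev/md0

# Look for the installer-added options that tell the initrd which sets to activate
tr ' ' '\n' < /proc/cmdline | grep -i 'rd[._]md'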
Operations Guide | Operations Guide Red Hat Ceph Storage 8 Operational tasks for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | [
"ceph orch apply mon --placement=\"3 host01 host02 host03\"",
"service_type: node-exporter placement: host_pattern: '*' extra_entrypoint_args: - \"--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2\"",
"cephadm shell",
"ceph orch apply SERVICE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mon --placement=\"3 host01 host02 host03\"",
"ceph orch host label add HOSTNAME_1 LABEL",
"ceph orch host label add host01 mon",
"ceph orch apply DAEMON_NAME label: LABEL",
"ceph orch apply mon label:mon",
"ceph orch host label add HOSTNAME_1 LABEL",
"ceph orch host label add host01 mon",
"ceph orch apply DAEMON_NAME --placement=\"label: LABEL \"",
"ceph orch apply mon --placement=\"label:mon\"",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --daemon_type=mon ceph orch ps --service_name=mon",
"cephadm shell",
"ceph orch host ls",
"ceph orch apply SERVICE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 _HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mgr --placement=\"2 host01 host02 host03\"",
"ceph orch host ls",
"service_type: mon placement: host_pattern: \"mon*\" --- service_type: mgr placement: host_pattern: \"mgr*\" --- service_type: osd service_id: default_drive_group placement: host_pattern: \"osd*\" data_devices: all: true",
"ceph orch set-unmanaged SERVICE_NAME",
"ceph orch set-unmanaged grafana",
"ceph orch set-managed SERVICE_NAME",
"ceph orch set-managed mon",
"touch mon.yaml",
"service_type: SERVICE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2",
"service_type: mon placement: hosts: - host01 - host02 - host03",
"service_type: SERVICE_NAME placement: label: \" LABEL_1 \"",
"service_type: mon placement: label: \"mon\"",
"extra_container_args: - \"-v\" - \"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\" - \"--security-opt\" - \"label=disable\" - \"cpus=2\" - \"--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2\"",
"cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml",
"cd /var/lib/ceph/mon/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i mon.yaml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"touch mirror.yaml",
"service_type: cephfs-mirror service_name: SERVICE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3",
"service_type: cephfs-mirror service_name: cephfs-mirror placement: hosts: - host01 - host02 - host03",
"cephadm shell --mount mirror.yaml:/var/lib/ceph/mirror.yaml",
"cd /var/lib/ceph/",
"ceph orch apply -i mirror.yaml",
"ceph orch ls",
"ceph orch ps --daemon_type=cephfs-mirror",
"cephadm shell",
"ceph cephadm get-pub-key > ~/ PATH",
"ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2",
"ssh-copy-id -f -i ~/ceph.pub root@host02",
"host01 host02 host03 [admin] host00",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"cephadm shell",
"ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST [--label= LABEL_NAME_1 , LABEL_NAME_2 ]",
"ceph orch host add host02 10.10.128.70 --labels=mon,mgr",
"ceph orch host ls",
"touch hosts.yaml",
"service_type: host addr: host01 hostname: host01 labels: - mon - osd - mgr --- service_type: host addr: host02 hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: host03 hostname: host03 labels: - mon - osd",
"cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml",
"cd /var/lib/ceph/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i hosts.yaml",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label add HOSTNAME LABEL",
"ceph orch host label add host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label rm HOSTNAME LABEL",
"ceph orch host label rm host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls",
"ceph orch host drain HOSTNAME",
"ceph orch host drain host02",
"ceph orch osd rm status",
"ceph orch ps HOSTNAME",
"ceph orch ps host02",
"ceph orch host rm HOSTNAME",
"ceph orch host rm host02",
"cephadm shell",
"ceph orch host maintenance enter HOST_NAME [--force]",
"ceph orch host maintenance enter host02 --force",
"ceph orch host maintenance exit HOST_NAME",
"ceph orch host maintenance exit host02",
"ceph orch host ls",
"ceph mon set election_strategy {classic|disallow|connectivity}",
"cephadm shell",
"ceph orch apply mon --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mon --placement=\"host01 host02 host03\"",
"ceph orch apply mon host01 ceph orch apply mon host02 ceph orch apply mon host03",
"ceph orch host label add HOSTNAME_1 LABEL",
"ceph orch host label add host01 mon",
"ceph orch apply mon --placement=\" HOST_NAME_1 :mon HOST_NAME_2 :mon HOST_NAME_3 :mon\"",
"ceph orch apply mon --placement=\"host01:mon host02:mon host03:mon\"",
"ceph orch apply mon --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mon --placement=\"3 host01 host02 host03\"",
"ceph orch apply mon NUMBER_OF_DAEMONS",
"ceph orch apply mon 3",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"touch mon.yaml",
"service_type: mon placement: hosts: - HOST_NAME_1 - HOST_NAME_2",
"service_type: mon placement: hosts: - host01 - host02",
"cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml",
"cd /var/lib/ceph/mon/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i mon.yaml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"cephadm shell",
"ceph orch apply mon --unmanaged",
"ceph orch daemon add mon HOST_NAME_1 : IP_OR_NETWORK",
"ceph orch daemon add mon host03:10.1.2.123",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"cephadm shell",
"ceph orch apply mon \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"",
"ceph orch apply mon \"2 host01 host03\"",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"ssh root@ MONITOR_ID",
"ssh root@host00",
"cephadm unit --name DAEMON_NAME . HOSTNAME stop",
"cephadm unit --name mon.host00 stop",
"cephadm shell --name DAEMON_NAME . HOSTNAME",
"cephadm shell --name mon.host00",
"ceph-mon -i HOSTNAME --extract-monmap TEMP_PATH",
"ceph-mon -i host01 --extract-monmap /tmp/monmap 2022-01-05T11:13:24.440+0000 7f7603bd1700 -1 wrote monmap to /tmp/monmap",
"monmaptool TEMPORARY_PATH --rm HOSTNAME",
"monmaptool /tmp/monmap --rm host01",
"ceph-mon -i HOSTNAME --inject-monmap TEMP_PATH",
"ceph-mon -i host00 --inject-monmap /tmp/monmap",
"cephadm unit --name DAEMON_NAME . HOSTNAME start",
"cephadm unit --name mon.host00 start",
"ceph -s",
"cephadm shell",
"ceph orch apply mgr --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mgr --placement=\"host01 host02 host03\"",
"ceph orch apply mgr NUMBER_OF_DAEMONS",
"ceph orch apply mgr 3",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mgr",
"cephadm shell",
"ceph orch apply mgr \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"",
"ceph orch apply mgr \"2 host01 host03\"",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mgr",
"ceph mgr module enable dashboard ceph mgr module ls MODULE balancer on (always on) crash on (always on) devicehealth on (always on) orchestrator on (always on) pg_autoscaler on (always on) progress on (always on) rbd_support on (always on) status on (always on) telemetry on (always on) volumes on (always on) cephadm on dashboard on iostat on nfs on prometheus on restful on alerts - diskprediction_local - influx - insights - k8sevents - localpool - mds_autoscaler - mirroring - osd_perf_query - osd_support - rgw - rook - selftest - snap_schedule - stats - telegraf - test_orchestrator - zabbix - ceph mgr services { \"dashboard\": \"http://myserver.com:7789/\", \"restful\": \"https://myserver.com:8789/\" }",
"[mon] mgr initial modules = dashboard balancer",
"ceph <command | help>",
"ceph osd set-require-min-compat-client luminous",
"ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it",
"ceph features",
"ceph osd set-require-min-compat-client reef",
"ceph osd set-require-min-compat-client reef --yes-i-really-mean-it",
"ceph features",
"ceph osd set-require-min-compat-client reef",
"ceph osd set-require-min-compat-client reef --yes-i-really-mean-it",
"ceph features",
"ceph mgr module enable balancer",
"ceph balancer on",
"ceph balancer mode crush-compat",
"ceph balancer mode upmap",
"ceph balancer status",
"ceph balancer on",
"ceph balancer on ceph balancer mode crush-compat ceph balancer status { \"active\": true, \"last_optimize_duration\": \"0:00:00.001174\", \"last_optimize_started\": \"Fri Nov 22 11:09:18 2024\", \"mode\": \"crush-compact\", \"no_optimization_needed\": false, \"optimize_result\": \"Unable to find further optimization, change balancer mode and retry might help\", \"plans\": [] }",
"ceph balancer off ceph balancer status { \"active\": false, \"last_optimize_duration\": \"\", \"last_optimize_started\": \"\", \"mode\": \"crush-compat\", \"no_optimization_needed\": false, \"optimize_result\": \"\", \"plans\": [] }",
"ceph config-key set mgr target_max_misplaced_ratio THRESHOLD_PERCENTAGE",
"ceph config-key set mgr target_max_misplaced_ratio .07",
"ceph config set mgr mgr/balancer/sleep_interval 60",
"ceph config set mgr mgr/balancer/begin_time 0000",
"ceph config set mgr mgr/balancer/end_time 2359",
"ceph config set mgr mgr/balancer/begin_weekday 0",
"ceph config set mgr mgr/balancer/end_weekday 6",
"ceph config set mgr mgr/balancer/pool_ids 1,2,3",
"ceph balancer eval",
"ceph balancer eval POOL_NAME",
"ceph balancer eval rbd",
"ceph balancer eval-verbose",
"ceph balancer optimize PLAN_NAME",
"ceph balancer optimize rbd_123",
"ceph balancer show PLAN_NAME",
"ceph balancer show rbd_123",
"ceph balancer rm PLAN_NAME",
"ceph balancer rm rbd_123",
"ceph balancer status",
"ceph balancer eval PLAN_NAME",
"ceph balancer eval rbd_123",
"ceph balancer execute PLAN_NAME",
"ceph balancer execute rbd_123",
"ceph mgr module enable balancer",
"ceph balancer on",
"ceph osd set-require-min-compat-client reef",
"ceph osd set-require-min-compat-client reef --yes-i-really-mean-it",
"You can check what client versions are in use with: the ceph features command.",
"ceph features",
"ceph balancer mode upmap-read ceph balancer mode read",
"ceph balancer status",
"ceph balancer status { \"active\": true, \"last_optimize_duration\": \"0:00:00.013640\", \"last_optimize_started\": \"Mon Nov 22 14:47:57 2024\", \"mode\": \"upmap-read\", \"no_optimization_needed\": true, \"optimize_result\": \"Unable to find further optimization, or pool(s) pg_num is decreasing, or distribution is already perfect\", \"plans\": [] }",
"ceph osd getmap -o map",
"ospmaptool map -upmap out.txt",
"source out.txt",
"ceph osd pool ls detail",
"ceph osd pool ls detail pool 1 '.mgr' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 17 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 3.00 pool 2 'cephfs.a.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 55 lfor 0/0/25 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs read_balance_score 1.50 pool 3 'cephfs.a.data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 27 lfor 0/0/25 flags hashpspool,bulk stripe_width 0 application cephfs read_balance_score 1.31",
"ceph osd getmap -o om",
"got osdmap epoch 56",
"osdmaptool om --read out.txt --read-pool _POOL_NAME_ [--vstart]",
"osdmaptool om --read out.txt --read-pool cephfs.a.meta ./bin/osdmaptool: osdmap file 'om' writing upmap command output to: out.txt ---------- BEFORE ------------ osd.0 | primary affinity: 1 | number of prims: 4 osd.1 | primary affinity: 1 | number of prims: 8 osd.2 | primary affinity: 1 | number of prims: 4 read_balance_score of 'cephfs.a.meta': 1.5 ---------- AFTER ------------ osd.0 | primary affinity: 1 | number of prims: 5 osd.1 | primary affinity: 1 | number of prims: 6 osd.2 | primary affinity: 1 | number of prims: 5 read_balance_score of 'cephfs.a.meta': 1.13 num changes: 2",
"source out.txt",
"cat out.txt ceph osd pg-upmap-primary 2.3 0 ceph osd pg-upmap-primary 2.4 2 source out.txt change primary for pg 2.3 to osd.0 change primary for pg 2.4 to osd.2",
"Error EPERM: min_compat_client luminous < reef, which is required for pg-upmap-primary. Try 'ceph osd set-require-min-compat-client reef' before using the new interface",
"cephadm shell",
"ceph mgr module enable alerts",
"ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator\", \"pg_autoscaler\", \"progress\", \"rbd_support\", \"status\", \"telemetry\", \"volumes\" ], \"enabled_modules\": [ \"alerts\", \"cephadm\", \"dashboard\", \"iostat\", \"nfs\", \"prometheus\", \"restful\" ]",
"ceph config set mgr mgr/alerts/smtp_host SMTP_SERVER ceph config set mgr mgr/alerts/smtp_destination RECEIVER_EMAIL_ADDRESS ceph config set mgr mgr/alerts/smtp_sender SENDER_EMAIL_ADDRESS",
"ceph config set mgr mgr/alerts/smtp_host smtp.example.com ceph config set mgr mgr/alerts/smtp_destination [email protected] ceph config set mgr mgr/alerts/smtp_sender [email protected]",
"ceph config set mgr mgr/alerts/smtp_port PORT_NUMBER",
"ceph config set mgr mgr/alerts/smtp_port 587",
"ceph config set mgr mgr/alerts/smtp_user USERNAME ceph config set mgr mgr/alerts/smtp_password PASSWORD",
"ceph config set mgr mgr/alerts/smtp_user admin1234 ceph config set mgr mgr/alerts/smtp_password admin1234",
"ceph config set mgr mgr/alerts/smtp_from_name CLUSTER_NAME",
"ceph config set mgr mgr/alerts/smtp_from_name 'Ceph Cluster Test'",
"ceph config set mgr mgr/alerts/interval INTERVAL",
"ceph config set mgr mgr/alerts/interval \"5m\"",
"ceph alerts send",
"ceph config set mgr/crash/warn_recent_interval 0",
"ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator_cli\", \"progress\", \"rbd_support\", \"status\", \"volumes\" ], \"enabled_modules\": [ \"dashboard\", \"pg_autoscaler\", \"prometheus\" ]",
"ceph crash post -i meta",
"ceph crash ls",
"ceph crash ls-new",
"ceph crash ls-new",
"ceph crash stat 8 crashes recorded 8 older than 1 days old: 2022-05-20T08:30:14.533316Z_4ea88673-8db6-4959-a8c6-0eea22d305c2 2022-05-20T08:30:14.590789Z_30a8bb92-2147-4e0f-a58b-a12c2c73d4f5 2022-05-20T08:34:42.278648Z_6a91a778-bce6-4ef3-a3fb-84c4276c8297 2022-05-20T08:34:42.801268Z_e5f25c74-c381-46b1-bee3-63d891f9fc2d 2022-05-20T08:34:42.803141Z_96adfc59-be3a-4a38-9981-e71ad3d55e47 2022-05-20T08:34:42.830416Z_e45ed474-550c-44b3-b9bb-283e3f4cc1fe 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d 2022-05-24T19:58:44.315282Z_1847afbc-f8a9-45da-94e8-5aef0738954e",
"ceph crash info CRASH_ID",
"ceph crash info 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d { \"assert_condition\": \"session_map.sessions.empty()\", \"assert_file\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc\", \"assert_func\": \"virtual Monitor::~Monitor()\", \"assert_line\": 287, \"assert_msg\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: In function 'virtual Monitor::~Monitor()' thread 7f67a1aeb700 time 2022-05-24T19:58:42.545485+0000\\n/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: 287: FAILED ceph_assert(session_map.sessions.empty())\\n\", \"assert_thread_name\": \"ceph-mon\", \"backtrace\": [ \"/lib64/libpthread.so.0(+0x12b30) [0x7f679678bb30]\", \"gsignal()\", \"abort()\", \"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x7f6798c8d37b]\", \"/usr/lib64/ceph/libceph-common.so.2(+0x276544) [0x7f6798c8d544]\", \"(Monitor::~Monitor()+0xe30) [0x561152ed3c80]\", \"(Monitor::~Monitor()+0xd) [0x561152ed3cdd]\", \"main()\", \"__libc_start_main()\", \"_start()\" ], \"ceph_version\": \"16.2.8-65.el8cp\", \"crash_id\": \"2022-07-06T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d\", \"entity_name\": \"mon.ceph-adm4\", \"os_id\": \"rhel\", \"os_name\": \"Red Hat Enterprise Linux\", \"os_version\": \"8.5 (Ootpa)\", \"os_version_id\": \"8.5\", \"process_name\": \"ceph-mon\", \"stack_sig\": \"957c21d558d0cba4cee9e8aaf9227b3b1b09738b8a4d2c9f4dc26d9233b0d511\", \"timestamp\": \"2022-07-06T19:58:42.549073Z\", \"utsname_hostname\": \"host02\", \"utsname_machine\": \"x86_64\", \"utsname_release\": \"4.18.0-240.15.1.el8_3.x86_64\", \"utsname_sysname\": \"Linux\", \"utsname_version\": \"#1 SMP Wed Jul 06 03:12:15 EDT 2022\" }",
"ceph crash prune KEEP",
"ceph crash prune 60",
"ceph crash archive CRASH_ID",
"ceph crash archive 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d",
"ceph crash archive-all",
"ceph crash rm CRASH_ID",
"ceph crash rm 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d",
"ceph telemetry on",
"ceph telemetry enable channel basic ceph telemetry enable channel crash ceph telemetry enable channel device ceph telemetry enable channel ident ceph telemetry enable channel perf ceph telemetry disable channel basic ceph telemetry disable channel crash ceph telemetry disable channel device ceph telemetry disable channel ident ceph telemetry disable channel perf",
"ceph telemetry enable channel basic crash device ident perf ceph telemetry disable channel basic crash device ident perf",
"ceph telemetry enable channel all ceph telemetry disable channel all",
"ceph telemetry show",
"ceph telemetry preview",
"ceph telemetry show-device",
"ceph telemetry preview-device",
"ceph telemetry show-all",
"ceph telemetry preview-all",
"ceph telemetry show CHANNEL_NAME",
"ceph telemetry preview CHANNEL_NAME",
"ceph telemetry collection ls",
"ceph telemetry diff",
"ceph telemetry on ceph telemetry enable channel CHANNEL_NAME",
"ceph config set mgr mgr/telemetry/interval INTERVAL",
"ceph config set mgr mgr/telemetry/interval 72",
"ceph telemetry status",
"ceph telemetry send",
"ceph config set mgr mgr/telemetry/proxy PROXY_URL",
"ceph config set mgr mgr/telemetry/proxy https://10.0.0.1:8080",
"ceph config set mgr mgr/telemetry/proxy https://10.0.0.1:8080",
"ceph config set mgr mgr/telemetry/contact '_CONTACT_NAME_' ceph config set mgr mgr/telemetry/description '_DESCRIPTION_' ceph config set mgr mgr/telemetry/channel_ident true",
"ceph config set mgr mgr/telemetry/contact 'John Doe <[email protected]>' ceph config set mgr mgr/telemetry/description 'My first Ceph cluster' ceph config set mgr mgr/telemetry/channel_ident true",
"ceph config set mgr mgr/telemetry/leaderboard true",
"ceph telemetry off",
"ceph config set osd osd_memory_target_autotune true",
"osd_memory_target = TOTAL_RAM_OF_THE_OSD * (1048576) * (autotune_memory_target_ratio) / NUMBER_OF_OSDS_IN_THE_OSD_NODE - ( SPACE_ALLOCATED_FOR_OTHER_DAEMONS )",
"ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2",
"ceph config set osd.123 osd_memory_target 7860684936",
"ceph config set osd/host: HOSTNAME osd_memory_target TARGET_BYTES",
"ceph config set osd/host:host01 osd_memory_target 1000000000",
"ceph orch host label add HOSTNAME _no_autotune_memory",
"ceph config set osd.123 osd_memory_target_autotune false ceph config set osd.123 osd_memory_target 16G",
"cephadm shell",
"ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"cephadm shell lsmcli ldl",
"cephadm shell ceph config set mgr mgr/cephadm/device_enhanced_scan true",
"ceph orch device ls",
"cephadm shell",
"ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch device zap HOSTNAME FILE_PATH --force",
"ceph orch device zap host02 /dev/sdb --force",
"ceph orch device ls",
"cephadm shell",
"ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch apply osd --all-available-devices",
"ceph orch apply osd --all-available-devices --unmanaged=true",
"ceph orch ls",
"ceph osd tree",
"cephadm shell",
"ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch daemon add osd HOSTNAME : DEVICE_PATH",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph orch daemon add osd --method raw HOSTNAME : DEVICE_PATH",
"ceph orch daemon add osd --method raw host02:/dev/sdb",
"ceph orch ls osd",
"ceph osd tree",
"ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --service_name=osd",
"touch osd_spec.yaml",
"service_type: osd service_id: SERVICE_ID placement: host_pattern: '*' # optional data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH osds_per_device: NUMBER_OF_DEVICES # optional db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH encrypted: true",
"service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: all: true paths: - /dev/sdb encrypted: true",
"service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: size: '80G' db_devices: size: '40G:' paths: - /dev/sdc",
"service_type: osd service_id: all-available-devices encrypted: \"true\" method: raw placement: host_pattern: \"*\" data_devices: all: \"true\"",
"service_type: osd service_id: osd_spec_hdd placement: host_pattern: '*' data_devices: rotational: 0 db_devices: model: Model-name limit: 2 --- service_type: osd service_id: osd_spec_ssd placement: host_pattern: '*' data_devices: model: Model-name db_devices: vendor: Vendor-name",
"service_type: osd service_id: osd_spec_node_one_to_five placement: host_pattern: 'node[1-5]' data_devices: rotational: 1 db_devices: rotational: 0 --- service_type: osd service_id: osd_spec_six_to_ten placement: host_pattern: 'node[6-10]' data_devices: model: Model-name db_devices: model: Model-name",
"service_type: osd service_id: osd_using_paths placement: hosts: - host01 - host02 data_devices: paths: - /dev/sdb db_devices: paths: - /dev/sdc wal_devices: paths: - /dev/sdd",
"service_type: osd service_id: multiple_osds placement: hosts: - host01 - host02 osds_per_device: 4 data_devices: paths: - /dev/sdb",
"service_type: osd service_id: SERVICE_ID placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH",
"service_type: osd service_id: osd_spec placement: hosts: - machine1 data_devices: paths: - /dev/vg_hdd/lv_hdd db_devices: paths: - /dev/vg_nvme/lv_nvme",
"service_type: osd service_id: OSD_BY_ID_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH",
"service_type: osd service_id: osd_by_id_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 db_devices: paths: - /dev/disk/by-id/nvme-nvme.1b36-31323334-51454d55204e564d65204374726c-00000001",
"service_type: osd service_id: OSD_BY_PATH_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH",
"service_type: osd service_id: osd_by_path_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-path/pci-0000:0d:00.0-scsi-0:0:0:4 db_devices: paths: - /dev/disk/by-path/pci-0000:00:02.0-nvme-1",
"cephadm shell --mount osd_spec.yaml:/var/lib/ceph/osd/osd_spec.yaml",
"cd /var/lib/ceph/osd/",
"ceph orch apply -i osd_spec.yaml --dry-run",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i osd_spec.yaml",
"ceph orch ls osd",
"ceph osd tree",
"cephadm shell",
"ceph osd tree",
"ceph orch osd rm OSD_ID [--replace] [--force] --zap",
"ceph orch osd rm 0 --zap",
"ceph orch osd rm OSD_ID OSD_ID --zap",
"ceph orch osd rm 2 5 --zap",
"ceph orch osd rm status OSD HOST STATE PGS REPLACE FORCE ZAP DRAIN STARTED AT 9 host01 done, waiting for purge 0 False False True 2023-06-06 17:50:50.525690 10 host03 done, waiting for purge 0 False False True 2023-06-06 17:49:38.731533 11 host02 done, waiting for purge 0 False False True 2023-06-06 17:48:36.641105",
"ceph osd tree",
"cephadm shell",
"ceph osd metadata -f plain | grep device_paths \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdi=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdf=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdg=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdh=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdk=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdl=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdj=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdm=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", [.. output omitted ..]",
"ceph osd tree",
"ceph orch osd rm OSD_ID --replace [--force]",
"ceph orch osd rm 0 --replace",
"ceph orch osd rm status",
"ceph orch pause ceph orch status Backend: cephadm Available: Yes Paused: Yes",
"ceph orch device zap node.example.com /dev/sdi --force zap successful for /dev/sdi on node.example.com ceph orch device zap node.example.com /dev/sdf --force zap successful for /dev/sdf on node.example.com",
"ceph orch resume",
"ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.77112 root default -3 0.77112 host node 0 hdd 0.09639 osd.0 up 1.00000 1.00000 1 hdd 0.09639 osd.1 up 1.00000 1.00000 2 hdd 0.09639 osd.2 up 1.00000 1.00000 3 hdd 0.09639 osd.3 up 1.00000 1.00000 4 hdd 0.09639 osd.4 up 1.00000 1.00000 5 hdd 0.09639 osd.5 up 1.00000 1.00000 6 hdd 0.09639 osd.6 up 1.00000 1.00000 7 hdd 0.09639 osd.7 up 1.00000 1.00000 [.. output omitted ..]",
"ceph osd tree",
"ceph osd metadata 0 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\", ceph osd metadata 1 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\",",
"cephadm shell",
"ceph orch osd rm OSD_ID [--replace]",
"ceph orch osd rm 8 --replace Scheduled OSD(s) for removal",
"ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.32297 root default -9 0.05177 host host10 3 hdd 0.01520 osd.3 up 1.00000 1.00000 13 hdd 0.02489 osd.13 up 1.00000 1.00000 17 hdd 0.01169 osd.17 up 1.00000 1.00000 -13 0.05177 host host11 2 hdd 0.01520 osd.2 up 1.00000 1.00000 15 hdd 0.02489 osd.15 up 1.00000 1.00000 19 hdd 0.01169 osd.19 up 1.00000 1.00000 -7 0.05835 host host12 20 hdd 0.01459 osd.20 up 1.00000 1.00000 21 hdd 0.01459 osd.21 up 1.00000 1.00000 22 hdd 0.01459 osd.22 up 1.00000 1.00000 23 hdd 0.01459 osd.23 up 1.00000 1.00000 -5 0.03827 host host04 1 hdd 0.01169 osd.1 up 1.00000 1.00000 6 hdd 0.01129 osd.6 up 1.00000 1.00000 7 hdd 0.00749 osd.7 up 1.00000 1.00000 9 hdd 0.00780 osd.9 up 1.00000 1.00000 -3 0.03816 host host05 0 hdd 0.01169 osd.0 up 1.00000 1.00000 8 hdd 0.01129 osd.8 destroyed 0 1.00000 12 hdd 0.00749 osd.12 up 1.00000 1.00000 16 hdd 0.00769 osd.16 up 1.00000 1.00000 -15 0.04237 host host06 5 hdd 0.01239 osd.5 up 1.00000 1.00000 10 hdd 0.01540 osd.10 up 1.00000 1.00000 11 hdd 0.01459 osd.11 up 1.00000 1.00000 -11 0.04227 host host07 4 hdd 0.01239 osd.4 up 1.00000 1.00000 14 hdd 0.01529 osd.14 up 1.00000 1.00000 18 hdd 0.01459 osd.18 up 1.00000 1.00000",
"ceph-volume lvm zap --osd-id OSD_ID",
"ceph-volume lvm zap --osd-id 8 Zapping: /dev/vg1/data-lv2 Closing encrypted path /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/sbin/cryptsetup remove /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/bin/dd if=/dev/zero of=/dev/vg1/data-lv2 bs=1M count=10 conv=fsync stderr: 10+0 records in 10+0 records out stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.034742 s, 302 MB/s Zapping successful for OSD: 8",
"ceph-volume lvm list",
"cat osd.yml service_type: osd service_id: osd_service placement: hosts: - host03 data_devices: paths: - /dev/vg1/data-lv2 db_devices: paths: - /dev/vg1/db-lv1",
"ceph orch apply -i osd.yml Scheduled osd.osd_service update",
"ceph -s ceph osd tree",
"lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 20G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 19G 0 part ├─rhel-root 253:0 0 17G 0 lvm / └─rhel-swap 253:1 0 2G 0 lvm [SWAP] sdb 8:16 0 10G 0 disk └─ceph--5726d3e9--4fdb--4eda--b56a--3e0df88d663f-osd--block--3ceb89ec--87ef--46b4--99c6--2a56bac09ff0 253:2 0 10G 0 lvm sdc 8:32 0 10G 0 disk └─ceph--d7c9ab50--f5c0--4be0--a8fd--e0313115f65c-osd--block--37c370df--1263--487f--a476--08e28bdbcd3c 253:4 0 10G 0 lvm sdd 8:48 0 10G 0 disk ├─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--31b20150--4cbc--4c2c--9c8f--6f624f3bfd89 253:7 0 2.5G 0 lvm └─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--1bee5101--dbab--4155--a02c--e5a747d38a56 253:9 0 2.5G 0 lvm sde 8:64 0 10G 0 disk sdf 8:80 0 10G 0 disk └─ceph--412ee99b--4303--4199--930a--0d976e1599a2-osd--block--3a99af02--7c73--4236--9879--1fad1fe6203d 253:6 0 10G 0 lvm sdg 8:96 0 10G 0 disk └─ceph--316ca066--aeb6--46e1--8c57--f12f279467b4-osd--block--58475365--51e7--42f2--9681--e0c921947ae6 253:8 0 10G 0 lvm sdh 8:112 0 10G 0 disk ├─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--0dfe6eca--ba58--438a--9510--d96e6814d853 253:3 0 5G 0 lvm └─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--26b70c30--8817--45de--8843--4c0932ad2429 253:5 0 5G 0 lvm sr0",
"cephadm shell",
"ceph-volume lvm list /dev/sdh ====== osd.2 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 block device /dev/ceph-5726d3e9-4fdb-4eda-b56a-3e0df88d663f/osd-block-3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 block uuid GkWLoo-f0jd-Apj2-Zmwj-ce0h-OY6J-UuW8aD cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 db uuid 6gSPoc-L39h-afN3-rDl6-kozT-AX9S-XR20xM encrypted 0 osd fsid 3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh ====== osd.5 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 block device /dev/ceph-d7c9ab50-f5c0-4be0-a8fd-e0313115f65c/osd-block-37c370df-1263-487f-a476-08e28bdbcd3c block uuid Eay3I7-fcz5-AWvp-kRcI-mJaH-n03V-Zr0wmJ cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 db uuid mwSohP-u72r-DHcT-BPka-piwA-lSwx-w24N0M encrypted 0 osd fsid 37c370df-1263-487f-a476-08e28bdbcd3c osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh",
"cat osds.yml service_type: osd service_id: non-colocated unmanaged: true placement: host_pattern: 'ceph*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sdh",
"ceph orch apply -i osds.yml Scheduled osd.non-colocated update",
"ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT alertmanager ?:9093,9094 1/1 9m ago 4d count:1 crash 3/4 4d ago 4d * grafana ?:3000 1/1 9m ago 4d count:1 mgr 1/2 4d ago 4d count:2 mon 3/5 4d ago 4d count:5 node-exporter ?:9100 3/4 4d ago 4d * osd.non-colocated 8 4d ago 5s <unmanaged> prometheus ?:9095 1/1 9m ago 4d count:1",
"ceph orch osd rm 2 5 --zap --replace Scheduled OSD(s) for removal",
"ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.1 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 996 KiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.0 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.5",
"cat osds.yml service_type: osd service_id: non-colocated unmanaged: false placement: host_pattern: 'ceph01*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sde",
"ceph orch apply -i osds.yml --dry-run WARNING! Dry-Runs are snapshots of a certain point in time and are bound to the current inventory setup. If any of these conditions change, the preview will be invalid. Please make sure to have a minimal timeframe between planning and applying the specs. #################### SERVICESPEC PREVIEWS #################### +---------+------+--------+-------------+ |SERVICE |NAME |ADD_TO |REMOVE_FROM | +---------+------+--------+-------------+ +---------+------+--------+-------------+ ################ OSDSPEC PREVIEWS ################ +---------+-------+-------+----------+----------+-----+ |SERVICE |NAME |HOST |DATA |DB |WAL | +---------+-------+-------+----------+----------+-----+ |osd |non-colocated |host02 |/dev/sdb |/dev/sde |- | |osd |non-colocated |host02 |/dev/sdc |/dev/sde |- | +---------+-------+-------+----------+----------+-----+",
"ceph orch apply -i osds.yml Scheduled osd.non-colocated update",
"ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.5 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.5",
"ceph-volume lvm list /dev/sde ====== osd.2 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 block device /dev/ceph-a4afcb78-c804-4daf-b78f-3c7ad1ed0379/osd-block-564b3d2f-0f85-4289-899a-9f98a2641979 block uuid ITPVPa-CCQ5-BbFa-FZCn-FeYt-c5N4-ssdU41 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 db uuid HF1bYb-fTK7-0dcB-CHzW-xvNn-dCym-KKdU5e encrypted 0 osd fsid 564b3d2f-0f85-4289-899a-9f98a2641979 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sde ====== osd.5 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd block device /dev/ceph-b37c8310-77f9-4163-964b-f17b4c29c537/osd-block-b42a4f1f-8e19-4416-a874-6ff5d305d97f block uuid 0LuPoz-ao7S-UL2t-BDIs-C9pl-ct8J-xh5ep4 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd db uuid SvmXms-iWkj-MTG7-VnJj-r5Mo-Moiw-MsbqVD encrypted 0 osd fsid b42a4f1f-8e19-4416-a874-6ff5d305d97f osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sde",
"cephadm shell",
"ceph osd tree",
"ceph orch osd rm stop OSD_ID",
"ceph orch osd rm stop 0",
"ceph orch osd rm status",
"ceph osd tree",
"cephadm shell",
"ceph cephadm osd activate HOSTNAME",
"ceph cephadm osd activate host03",
"ceph orch ls",
"ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --service_name=osd",
"ceph -w",
"ceph config-key set mgr/cephadm/ HOSTNAME /grafana_key -i PRESENT_WORKING_DIRECTORY /key.pem ceph config-key set mgr/cephadm/ HOSTNAME /grafana_crt -i PRESENT_WORKING_DIRECTORY /certificate.pem",
"ceph mgr module enable prometheus",
"ceph orch redeploy prometheus",
"cd /var/lib/ceph/ DAEMON_PATH /",
"cd /var/lib/ceph/monitoring/",
"touch monitoring.yml",
"service_type: prometheus service_name: prometheus placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: node-exporter --- service_type: alertmanager service_name: alertmanager placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: grafana service_name: grafana placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: ceph-exporter",
"ceph orch apply -i monitoring.yml",
"ceph orch ls",
"ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --service_name=prometheus",
"cephadm shell",
"ceph orch rm SERVICE_NAME --force",
"ceph orch rm grafana ceph orch rm prometheus ceph orch rm node-exporter ceph orch rm ceph-exporter ceph orch rm alertmanager ceph mgr module disable prometheus",
"ceph orch status",
"ceph orch ls",
"ceph orch ps",
"ceph orch ps",
"mkdir /etc/ceph/",
"cd /etc/ceph/",
"ceph config generate-minimal-conf minimal ceph.conf for 417b1d7a-a0e6-11eb-b940-001a4a000740 [global] fsid = 417b1d7a-a0e6-11eb-b940-001a4a000740 mon_host = [v2:10.74.249.41:3300/0,v1:10.74.249.41:6789/0]",
"mkdir /etc/ceph/",
"cd /etc/ceph/",
"ceph auth get-or-create client. CLIENT_NAME -o /etc/ceph/ NAME_OF_THE_FILE",
"ceph auth get-or-create client.fs -o /etc/ceph/ceph.keyring",
"cat ceph.keyring [client.fs] key = AQAvoH5gkUCsExAATz3xCBLd4n6B6jRv+Z7CVQ==",
"cephadm shell",
"ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph fs volume create test --placement=\"2 host01 host02\"",
"ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]",
"ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64",
"ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL",
"ceph fs new test cephfs_metadata cephfs_data",
"ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mds test --placement=\"2 host01 host02\"",
"ceph orch ls",
"ceph fs ls ceph fs status",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mds",
"touch mds.yaml",
"service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3",
"service_type: mds service_id: fs_name placement: hosts: - host01 - host02",
"cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml",
"cd /var/lib/ceph/mds/",
"cephadm shell",
"cd /var/lib/ceph/mds/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i mds.yaml",
"ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL",
"ceph fs new test metadata_pool data_pool",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mds",
"cephadm shell",
"ceph config set mon mon_allow_pool_delete true",
"ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it",
"ceph fs volume rm cephfs-new --yes-i-really-mean-it",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm mds.test",
"ceph orch ps",
"ceph orch ps",
"cephadm shell",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=default --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default",
"radosgw-admin period update --rgw-realm= REALM_NAME --commit",
"radosgw-admin period update --rgw-realm=test_realm --commit",
"ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"",
"ceph orch apply rgw test --realm=test_realm --zone=test_zone --placement=\"2 host01 host02\"",
"ceph orch apply rgw SERVICE_NAME",
"ceph orch apply rgw foo",
"ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME --placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000",
"ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"2 label:rgw\" --port=8000",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"touch radosgw.yml",
"service_type: rgw service_id: REALM_NAME . ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network",
"service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"radosgw-admin realm create --rgw-realm=test_realm radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone radosgw-admin period update --rgw-realm=test_realm --commit",
"service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i radosgw.yml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default",
"radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system",
"radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system",
"radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ--secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --endpoints=http://rgw.example.com:80",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set rgw rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"",
"radosgw-admin sync status",
"cephadm shell",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm rgw.test_realm.test_zone_bb",
"ceph orch ps",
"ceph orch ps",
"dnf install -y net-snmp-utils net-snmp",
"firewall-cmd --zone=public --add-port=162/udp firewall-cmd --zone=public --add-port=162/udp --permanent",
"curl -o CEPH_MIB.txt -L https://raw.githubusercontent.com/ceph/ceph/master/monitoring/snmp/CEPH-MIB.txt scp CEPH_MIB.txt root@host02:/usr/share/snmp/mibs",
"mkdir /root/snmptrapd/",
"format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x_ENGINE_ID_ SNMPV3_AUTH_USER_NAME AUTH_PROTOCOL SNMP_V3_AUTH_PASSWORD PRIVACY_PROTOCOL PRIVACY_PASSWORD authuser log,execute SNMP_V3_AUTH_USER_NAME authCommunity log,execute,net SNMP_COMMUNITY_FOR_SNMPV2",
"format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n authCommunity log,execute,net public",
"format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser SHA mypassword authuser log,execute myuser",
"snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword",
"format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser SHA mypassword DES mysecret authuser log,execute myuser",
"snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword snmp_v3_priv_password: mysecret",
"/usr/sbin/snmptrapd -M /usr/share/snmp/mibs -m CEPH-MIB.txt -f -C -c /root/snmptrapd/ CONFIGURATION_FILE -Of -Lo :162",
"/usr/sbin/snmptrapd -M /usr/share/snmp/mibs -m CEPH-MIB.txt -f -C -c /root/snmptrapd/snmptrapd_auth.conf -Of -Lo :162",
"NET-SNMP version 5.8 Agent Address: 0.0.0.0 Agent Hostname: <UNKNOWN> Date: 15 - 5 - 12 - 8 - 10 - 4461391 Enterprise OID: . Trap Type: Cold Start Trap Sub-Type: 0 Community/Infosec Context: TRAP2, SNMP v3, user myuser, context Uptime: 0 Description: Cold Start PDU Attribute/Value Pair Array: .iso.org.dod.internet.mgmt.mib-2.1.3.0 = Timeticks: (292276100) 3 days, 19:52:41.00 .iso.org.dod.internet.snmpV2.snmpModules.1.1.4.1.0 = OID: .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.1 = STRING: \"1.3.6.1.4.1.50495.1.2.1.6.2[alertname=CephMgrPrometheusModuleInactive]\" .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.2 = STRING: \"critical\" .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.3 = STRING: \"Status: critical - Alert: CephMgrPrometheusModuleInactive Summary: Ceph's mgr/prometheus module is not available Description: The mgr/prometheus module at 10.70.39.243:9283 is unreachable. This could mean that the module has been disabled or the mgr itself is down. Without the mgr/prometheus module metrics and alerts will no longer function. Open a shell to ceph and use 'ceph -s' to determine whether the mgr is active. If the mgr is not active, restart it, otherwise you can check the mgr/prometheus module is loaded with 'ceph mgr module ls' and if it's not listed as enabled, enable it with 'ceph mgr module enable prometheus'\"",
"cephadm shell",
"ceph orch host label add HOSTNAME snmp-gateway",
"ceph orch host label add host02 snmp-gateway",
"cat snmp_creds.yml snmp_community: public",
"cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_community: public port: 9464 snmp_destination: 192.168.122.73:162 snmp_version: V2c",
"cat snmp_creds.yml snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword",
"cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_v3_auth_password: mypassword snmp_v3_auth_username: myuser engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc port: 9464 snmp_destination: 192.168.122.1:162 snmp_version: V3",
"cat snmp_creds.yml snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword snmp_v3_priv_password: mysecret",
"cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_v3_auth_password: mypassword snmp_v3_auth_username: myuser snmp_v3_priv_password: mysecret engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc port: 9464 snmp_destination: 192.168.122.1:162 snmp_version: V3",
"ceph orch apply snmp-gateway --snmp_version= V2c_OR_V3 --destination= SNMP_DESTINATION [--port= PORT_NUMBER ] [--engine-id=8000C53F_CLUSTER_FSID_WITHOUT_DASHES_] [--auth-protocol= MDS_OR_SHA ] [--privacy_protocol= DES_OR_AES ] -i FILENAME",
"ceph orch apply -i FILENAME .yml",
"ceph orch apply snmp-gateway --snmp-version=V2c --destination=192.168.122.73:162 --port=9464 -i snmp_creds.yml",
"ceph orch apply snmp-gateway --snmp-version=V3 --engine-id=8000C53Ff64f341c655d11eb8778fa163e914bcc--destination=192.168.122.73:162 -i snmp_creds.yml",
"ceph orch apply snmp-gateway --snmp-version=V3 --engine-id=8000C53Ff64f341c655d11eb8778fa163e914bcc--destination=192.168.122.73:162 --privacy-protocol=AES -i snmp_creds.yml",
"ceph orch apply -i snmp-gateway.yml",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"cp /etc/ceph/ceph.conf / PATH_TO_BACKUP_LOCATION /ceph.conf",
"cp /etc/ceph/ceph.conf /some/backup/location/ceph.conf",
"cp / PATH_TO_BACKUP_LOCATION /ceph.conf /etc/ceph/ceph.conf",
"cp /some/backup/location/ceph.conf /etc/ceph/ceph.conf",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"cp /etc/ceph/ceph.conf / PATH_TO_BACKUP_LOCATION /ceph.conf",
"cp /etc/ceph/ceph.conf /some/backup/location/ceph.conf",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd unset noscrub ceph osd unset nodeep-scrub",
"osd_max_backfills = 1 osd_recovery_max_active = 1 osd_recovery_op_priority = 1",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph tell DAEMON_TYPE .* injectargs -- OPTION_NAME VALUE [-- OPTION_NAME VALUE ]",
"ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1",
"ceph cephadm get-pub-key > ~/ PATH",
"ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2",
"ssh-copy-id -f -i ~/ceph.pub root@host02",
"ceph orch host add NODE_NAME IP_ADDRESS",
"ceph orch host add host02 10.10.128.70",
"ceph df rados df ceph osd df",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph tell DAEMON_TYPE .* injectargs -- OPTION_NAME VALUE [-- OPTION_NAME VALUE ]",
"ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1",
"ceph -s ceph df",
"ceph df rados df ceph osd df",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd crush rm host03",
"ceph -s",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"ceph -s",
"ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 0.33554 root default -2 0.04779 host host03 0 0.04779 osd.0 up 1.00000 1.00000 -3 0.04779 host host02 1 0.04779 osd.1 up 1.00000 1.00000 -4 0.04779 host host01 2 0.04779 osd.2 up 1.00000 1.00000 -5 0.04779 host host04 3 0.04779 osd.3 up 1.00000 1.00000 -6 0.07219 host host06 4 0.07219 osd.4 up 0.79999 1.00000 -7 0.07219 host host05 5 0.07219 osd.5 up 0.79999 1.00000",
"ceph osd crush add-bucket allDC root ceph osd crush add-bucket DC1 datacenter ceph osd crush add-bucket DC2 datacenter ceph osd crush add-bucket DC3 datacenter",
"ceph osd crush move DC1 root=allDC ceph osd crush move DC2 root=allDC ceph osd crush move DC3 root=allDC ceph osd crush move host01 datacenter=DC1 ceph osd crush move host02 datacenter=DC1 ceph osd crush move host03 datacenter=DC2 ceph osd crush move host05 datacenter=DC2 ceph osd crush move host04 datacenter=DC3 ceph osd crush move host06 datacenter=DC3",
"ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -8 6.00000 root allDC -9 2.00000 datacenter DC1 -4 1.00000 host host01 2 1.00000 osd.2 up 1.00000 1.00000 -3 1.00000 host host02 1 1.00000 osd.1 up 1.00000 1.00000 -10 2.00000 datacenter DC2 -2 1.00000 host host03 0 1.00000 osd.0 up 1.00000 1.00000 -7 1.00000 host host05 5 1.00000 osd.5 up 0.79999 1.00000 -11 2.00000 datacenter DC3 -6 1.00000 host host06 4 1.00000 osd.4 up 0.79999 1.00000 -5 1.00000 host host04 3 1.00000 osd.3 up 1.00000 1.00000 -1 0 root default"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/operations_guide/%7Boperation-guide%7D |
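The OSD removal and replacement commands collected above can be strung together into a single workflow. The sketch below is a minimal illustration rather than a procedure from the guide: the OSD ID (0), host name (host01), and device path (/dev/sdb) are assumptions that must be replaced with values taken from ceph osd tree and the cluster's own service specification, and the commands are expected to run inside cephadm shell.

#!/usr/bin/env bash
# Minimal sketch: replace a failed OSD while preserving its ID.
# OSD_ID, HOST, and DEVICE are assumptions for illustration only.
set -euo pipefail

OSD_ID=0
HOST=host01
DEVICE=/dev/sdb

# Schedule the OSD for removal, keep its ID for reuse (--replace), and wipe the old LVM metadata (--zap).
ceph orch osd rm "$OSD_ID" --replace --zap

# Poll the removal queue until this OSD no longer appears in it.
while ceph orch osd rm status | grep "^${OSD_ID} " > /dev/null; do
    sleep 30
done

# After the physical disk has been swapped, clear any leftover signatures so cephadm can reuse the slot.
ceph orch device zap "$HOST" "$DEVICE" --force

# Confirm the OSD service picked the device up again and the tree shows the OSD as up.
ceph orch ls osd
ceph osd tree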
Using the AMQ Ruby Client | Using the AMQ Ruby Client Red Hat AMQ 2020.Q4 For Use with AMQ Clients 2.8 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_ruby_client/index |
Chapter 5. Getting Started with Virtual Machine Manager | Chapter 5. Getting Started with Virtual Machine Manager The Virtual Machine Manager, also known as virt-manager , is a graphical tool for creating and managing guest virtual machines. This chapter provides a description of the Virtual Machine Manager and how to run it. Note You can only run the Virtual Machine Manager on a system that has a graphical interface. For more detailed information about using the Virtual Machine Manager, see the other Red Hat Enterprise Linux virtualization guides . 5.1. Running Virtual Machine Manager To run the Virtual Machine Manager, select it in the list of applications or use the following command: The Virtual Machine Manager opens to the main window. Figure 5.1. The Virtual Machine Manager Note If running virt-manager fails, ensure that the virt-manager package is installed. For information on installing the virt-manager package, see Installing the Virtualization Packages in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. | [
"virt-manager"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/chap-Virtualization_Manager-Introduction |
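If launching the Virtual Machine Manager fails as described in the note, the short sketch below checks for the virt-manager package and installs it before starting the GUI; the use of yum assumes a Red Hat Enterprise Linux 7 host with the virtualization repositories already available.

# Minimal sketch: make sure virt-manager is installed, then launch it in the background.
if ! rpm -q virt-manager >/dev/null 2>&1; then
    yum install -y virt-manager
fi
virt-manager &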
Chapter 1. Data Grid Operator | Chapter 1. Data Grid Operator Data Grid Operator provides operational intelligence and reduces management complexity for deploying Data Grid on Kubernetes and Red Hat OpenShift. 1.1. Data Grid Operator deployments When you install Data Grid Operator, it extends the Kubernetes API with Custom Resource Definitions (CRDs) for deploying and managing Data Grid clusters on Red Hat OpenShift. To interact with Data Grid Operator, OpenShift users apply Custom Resources (CRs) through the OpenShift Web Console or oc client. Data Grid Operator listens for Infinispan CRs and automatically provisions native resources, such as StatefulSets and Secrets, that your Data Grid deployment requires. Data Grid Operator also configures Data Grid services according to the specifications in Infinispan CRs, including the number of pods for the cluster and backup locations for cross-site replication. Figure 1.1. Custom resources 1.2. Cluster management A single Data Grid Operator installation can manage multiple clusters with different Data Grid versions in separate namespaces. Each time a user applies CRs to modify the deployment, Data Grid Operator applies the changes globally to all Data Grid clusters. Figure 1.2. Operator-managed clusters 1.3. Resource reconciliation Data Grid Operator reconciles custom resources such as the Cache CR with resources on your Data Grid cluster. Bidirectional reconciliation synchronizes your CRs with changes that you make to Data Grid resources through the Data Grid Console, command line interface (CLI), or other client application and vice versa. For example if you create a cache through the Data Grid Console then Data Grid Operator adds a declarative Kubernetes representation. To perform reconciliation Data Grid Operator creates a listener pod for each Data Grid cluster that detects modifications for Infinispan resources. Notes about reconciliation When you create a cache through the Data Grid Console, CLI, or other client application, Data Grid Operator creates a corresponding Cache CR with a unique name that conforms to the Kubernetes naming policy. Declarative Kubernetes representations of Data Grid resources that Data Grid Operator creates with the listener pod are linked to Infinispan CRs. Deleting Infinispan CRs removes any associated resource declarations. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/operator |
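The chapter describes OpenShift users applying Infinispan CRs with the oc client so that Data Grid Operator can provision the underlying resources. The sketch below shows what such a request might look like; the namespace, cluster name, and replica count are illustrative assumptions, and the spec is intentionally minimal compared with what the installed CRDs allow.

# Minimal sketch: apply a small Infinispan CR and watch Data Grid Operator reconcile it.
# The namespace "datagrid" and the name "example-infinispan" are placeholders.
cat <<'EOF' | oc apply -n datagrid -f -
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
EOF

# The Operator turns the CR into StatefulSets, Secrets, and pods in the same namespace.
oc get infinispan -n datagrid
oc get pods -n datagrid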
Chapter 6. Reference | Chapter 6. Reference The following topic are related to the configuration, creation, and management of Insights remediation playbooks. 6.1. Installing the Insights client on Satellite Server content hosts The Insights client comes preinstalled on most versions of Red Hat Enterprise Linux; however, if you have to install it, use this procedure to install the Insights client on each system. Prerequisites Register your hosts to Satellite If you already have Red Hat Enterprise Linux hosts, you can use the Global Registration Template to register them to Satellite. For more information, see Registering hosts to Satellite . Procedure Install the Insights for Red Hat Enterprise Linux client: # yum install insights-client Register the host to Insights for Red Hat Enterprise Linux: # insights-client --register Repeat these steps on each host. Alternatively, you can use the RedHatInsights.insights-client Ansible role to install the Insights client and register the hosts. For more information, see Using Red Hat Insights with Hosts in Satellite in the Red Hat Satellite Managing Hosts guide. 6.2. Configuring Cloud Connector after upgrading Satellite Server 6.10 to 6.11 Note This only applies to upgrades from Satellite version 6.10 to 6.11. Refer to the Upgrading and Updating Red Hat Satellite guide for more information. To configure Cloud Connector after upgrading the Satellite Server, click Configure Cloud Connector button from Configure > RH Cloud - Inventory Upload to enable it on the new version of Satellite Server. Simultaneously, you are required to remove the source from the cloud manually on the Red Hat Hybrid Cloud Console after upgrading your Satellite Server. Once the Cloud Connector is configured, it will remove the receptor bits and install the RHC bits. At the same time, the Cloud Connector announces all the organizations in the Satellite to the source and is ready to receive the connections. 6.3. Disabling direct remediations on a Satellite Server content host By default the parameter is not set on each host. It is True for the hostgroup to allow the execution of playbooks by default on the Cloud Connector. Note that all the hosts that are present in that particular organization inherit the same parameters. When the Satellite receives the remediation playbook run request from Cloud Connector, that request has a list of hosts where it should execute. Complete the following step to ensure the playbook run does not get invoked from the cloud on a single host. Procedure Go to Hosts menu > All Hosts in the Satellite web UI. Locate the host and click the Edit button > Parameters tab and set the enable_cloud_remediations parameter to False on that host. 6.4. Disabling direct remediation on a Satellite Server content host group By default the parameter is not set in the system . It is True for the host group to allow the execution of playbooks by default with the Cloud Connector. Note All the hosts that are present in that particular organization will inherit the same parameters. Optionally, an Organization Administrator can disable the cloud remediations for the whole organization or host group. To disable remediations, change the Global Parameter in the Red Hat Satellite User Interface. Use the following steps to make this edit. Procedure Navigate to the Satellite Dashboard . Click Configure on the left navigation. Click Global Parameters . Click Create Parameter . In the Name field, enter enable_cloud_remediations. In the Value field, enter false . Click Submit . 
Verification step Find your new parameter listed in the Global Parameters table. | [
"yum install insights-client",
"insights-client --register"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide/remediations-guide-references_red-hat-insights-remediation-guide |
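When several content hosts need the Insights client, the two commands from section 6.1 can be wrapped in a loop instead of being run host by host. This is a minimal sketch, not the Ansible-role approach the guide mentions; the host list and passwordless root SSH access are assumptions.

# Minimal sketch: install and register the Insights client on a list of content hosts over SSH.
HOSTS="host01.example.com host02.example.com"

for host in $HOSTS; do
    ssh "root@${host}" 'yum install -y insights-client && insights-client --register'
done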
Chapter 14. OpenShift by Red Hat | Chapter 14. OpenShift by Red Hat OpenShift by Red Hat is a Platform as a Service (PaaS) that enables developers to build and deploy web applications. OpenShift provides a wide selection of programming languages and frameworks including Java, Ruby, and PHP. It also provides integrated developer tools to support the application life cycle, including Eclipse integration, JBoss Developer Studio, and Jenkins. OpenShift uses an open source ecosystem to provide a platform for mobile applications, database services, and more. [13] 14.1. OpenShift and SELinux SELinux provides better security control over applications that use OpenShift because all processes are labeled according to the SELinux policy. Therefore, SELinux protects OpenShift from possible malicious attacks within different gears running on the same node. See the Dan Walsh's presentation for more information about SELinux and OpenShift. [13] To learn more about OpenShift, see OpenShift Enterprise documentation and OpenShift Online documentation . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/chap-managing_confined_services-openshift |
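The statement that all processes are labeled under the SELinux policy can be verified directly on a node. The commands below are a generic check rather than anything OpenShift-specific; the process name used in the final filter is only an example.

# Minimal sketch: confirm SELinux is enforcing and list the security contexts of running processes.
getenforce            # expected: Enforcing

# The -Z flag prints the SELinux context for each process in the first column.
ps -eZ | head

# Narrow the listing to one service; "httpd" is just an illustrative name.
ps -eZ | grep httpd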
Chapter 6. Updating Satellite Server and Capsule Server | Chapter 6. Updating Satellite Server and Capsule Server Use this chapter to update your existing Satellite Server and Capsule Server to a new patch version, for example, from 6.11.0 to 6.11.1. Updates patch security vulnerabilities and minor issues discovered after code is released, and are often fast and non-disruptive to your operating environment. Before updating, back up your Satellite Server and all Capsule Servers. For more information, see Backing Up Satellite Server and Capsule Server in the Administering Red Hat Satellite guide. 6.1. Updating Satellite Server Prerequisites Ensure that you have synchronized Satellite Server repositories for Satellite, Capsule, and Satellite Client 6. Ensure each external Capsule and Content Host can be updated by promoting the updated repositories to all relevant Content Views. Warning If you customize configuration files, manually or use a tool such as Hiera, these customizations are overwritten when the installation script runs during upgrading or updating. You can use the --noop option with the satellite-installer script to test for changes. For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade . Updating Satellite Server to the Minor Version To Update Satellite Server: Ensure the Satellite Maintenance repository is enabled: For Red Hat Enterprise Linux 8: For Red Hat Enterprise Linux 7: Check the available versions to confirm the minor version is listed: Use the health check option to determine if the system is ready to upgrade. On first use of this command, satellite-maintain prompts you to enter the hammer admin user credentials and saves them in the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy update time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process was completed successfully. Perform the upgrade: Check when the kernel packages were last updated: Optional: If a kernel update occurred since the last reboot, stop Satellite services and reboot the system: 6.2. Updating Disconnected Satellite Server This section describes the steps needed to update in an Air-gapped Disconnected setup where the connected Satellite Server (which synchronizes content from CDN) is air gapped from a disconnected Satellite Server. 6.2.1. Updating Disconnected Satellite Server on Red Hat Enterprise Linux 8 Complete the following steps on the connected Satellite Server for Red Hat Enterprise Linux 8. Ensure that you have synchronized the following repositories in your connected Satellite Server. Download the debug certificate of the organization and store it locally at, for example, /etc/pki/katello/certs/org-debug-cert.pem or a location of your choosing. For more information, see Creating an Organization Debug Certificate in Managing Content . 
Create a Yum configuration file under /etc/yum.repos.d , such as satellite-disconnected .repo , with the following contents: In the configuration file, replace /etc/pki/katello/certs/org-debug-cert.pem in sslclientcert and sslclientkey with the location of the downloaded organization debug certificate. Update satellite.example.com with the correct FQDN for your deployment. Replace My_Organization with the correct organization label in the baseurl . To obtain the organization label, enter the command: Enter the reposync command: On Satellite Server running Red Hat Enterprise Linux 7: On Satellite Server running Red Hat Enterprise Linux 8: This downloads the contents of the repositories from the connected Satellite Server and stores them in the ~/Satellite-repos directory. Verify that the RPMs have been downloaded and the repository data directory is generated in each of the sub-directories of ~/Satellite-repos . Archive the contents of the directory Use the generated Satellite-repos.tgz file to upgrade in the disconnected Satellite Server. Perform the following steps on the disconnected Satellite Server: Copy the generated Satellite-repos.tgz file to your disconnected Satellite Server Extract the archive to anywhere accessible by the root user. In the following example /root is the extraction location. Create a Yum configuration file under /etc/yum.repos.d , such as satellite-disconnected .repo , with the following contents: In the configuration file, replace the /root/Satellite-repos with the extracted location. Check the available versions to confirm the minor version is listed: Use the health check option to determine if the system is ready to upgrade. On first use of this command, satellite-maintain prompts you to enter the hammer admin user credentials and saves them in the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy update time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process was completed successfully. Perform the upgrade: Check when the kernel packages were last updated: Optional: If a kernel update occurred since the last reboot, stop Satellite services and reboot the system: 6.2.2. Updating Disconnected Satellite Server on Red Hat Enterprise Linux 7 Complete the following steps on the connected Satellite Server for Red Hat Enterprise Linux 7. Ensure that you have synchronized the following repositories in your connected Satellite Server. Download the debug certificate of the organization and store it locally at, for example, /etc/pki/katello/certs/org-debug-cert.pem or a location of your choosing. Create a Yum configuration file under /etc/yum.repos.d with the following repository information: In the configuration file, replace /etc/pki/katello/certs/org-debug-cert.pem in sslclientcert and sslclientkey with the location of the downloaded organization debug certificate. Update satellite.example.com with correct FQDN for your deployment. Replace My_Organization with the correct organization label in the baseurl . 
To obtain the organization label, enter the command: Enter the reposync command: This downloads the contents of the repositories from the connected Satellite Server and stores them in the directory ~/Satellite-repos . The reposync command in Red Hat Enterprise Linux 7 downloads the RPMs but not the Yum metadata. Because of this, you must manually run createrepo in each sub-directory of Satellite-repos . Make sure you have the createrepo rpm installed. If not use the following command to install it. Run the following command to create repodata in each sub-directory of ~/Satellite-repos . : Verify that the RPMs have been downloaded and the repository data directory is generated in each of the sub-directories of ~/Satellite-repos . Archive the contents of the directory Use the generated Satellite-repos.tgz file to upgrade in the disconnected Satellite Server. Perform the following steps on the disconnected Satellite Server Copy the generated Satellite-repos.tgz file to your disconnected Satellite Server Extract the archive to anywhere accessible by the root user. In the following example /root is the extraction location. Create a Yum configuration file under /etc/yum.repos.d with the following repository information: In the configuration file, replace the /root/Satellite-repos with the extracted location. Check the available versions to confirm the minor version is listed: Use the health check option to determine if the system is ready for upgrade. On first use of this command, satellite-maintain prompts you to enter the hammer admin user credentials and saves them in the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy update time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process completed successfully. Perform the upgrade: Check when the kernel packages were last updated: Optional: If a kernel update occurred since the last reboot, stop Satellite services and reboot the system: 6.3. Updating Capsule Server Use this procedure to update Capsule Servers to the minor version. Procedure Ensure that the Satellite Maintenance repository is enabled: Check the available versions to confirm the minor version is listed: Use the health check option to determine if the system is ready for upgrade: Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy update time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check if the process completed successfully. Perform the upgrade: Check when the kernel packages were last updated: Optional: If a kernel update occurred since the last reboot, stop Satellite services and reboot the system: | [
"subscription-manager repos --enable satellite-maintenance-6.11-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable rhel-7-server-satellite-maintenance-6.11-rpms",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.11. z",
"satellite-maintain upgrade run --target-version 6.11. z",
"rpm -qa --last | grep kernel",
"satellite-maintain service stop reboot",
"rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms satellite-6.11-for-rhel-8-x86_64-rpms satellite-maintenance-6.11-for-rhel-8-x86_64-rpms",
"[rhel-8-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/rhel8/8/x86_64/baseos/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [rhel-8-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/rhel8/8/x86_64/appstream/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-6.11-for-rhel-8-x86_64-rpms] name=Red Hat Satellite 6.11 for RHEL 8 RPMs x86_64 baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/layered/rhel8/x86_64/satellite/6.11/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt [satellite-maintenance-6.11-for-rhel-8-x86_64-rpms] name=Red Hat Satellite Maintenance 6.11 for RHEL 8 RPMs x86_64 baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/layered/rhel8/x86_64/sat-maintenance/6.11/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1",
"hammer organization list",
"reposync --delete --download-metadata -p ~/Satellite-repos -n -r rhel-8-for-x86_64-baseos-rpms -r rhel-8-for-x86_64-appstream-rpms -r satellite-6.11-for-rhel-8-x86_64-rpms -r satellite-maintenance-6.11-for-rhel-8-x86_64-rpms",
"reposync --delete --download-metadata -p ~/Satellite-repos -n --repoid rhel-8-for-x86_64-baseos-rpms --repoid rhel-8-for-x86_64-appstream-rpms --repoid satellite-6.11-for-rhel-8-x86_64-rpms --repoid {RepoRHEL8ServerSatelliteMaintenanceProductVersion",
"cd ~ tar czf Satellite-repos.tgz Satellite-repos",
"cd /root tar zxf Satellite-repos.tgz",
"[rhel-8-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) baseurl=file:///root/Satellite-repos/rhel-8-for-x86_64-baseos-rpms enabled=1 [rhel-8-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) baseurl=file:///root/Satellite-repos/rhel-8-for-x86_64-appstream-rpms enabled=1 [satellite-6.11-for-rhel-8-x86_64-rpms] name=Red Hat Satellite 6 for RHEL 8 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-6.11-for-rhel-8-x86_64-rpms enabled=1 [satellite-maintenance-6.11-for-rhel-8-x86_64-rpms] name=Red Hat Satellite Maintenance 6 for RHEL 8 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-maintenance-6.11-for-rhel-8-x86_64-rpms enabled=1",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --whitelist=\"check-upstream-repository,repositories-validate\" --target-version 6.11. z",
"satellite-maintain upgrade run --whitelist=\"check-upstream-repository,repositories-validate\" --target-version 6.11. z",
"rpm -qa --last | grep kernel",
"satellite-maintain service stop reboot",
"rhel-7-server-ansible-2.9-rpms rhel-7-server-rpms rhel-7-server-satellite-6.11-rpms rhel-7-server-satellite-maintenance-6.11-rpms rhel-server-rhscl-7-rpms",
"[rhel-7-server-ansible-2.9-rpms] name=Ansible 2.9 RPMs for Red Hat Enterprise Linux 7 Server x86_64 baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/rhel/server/7/USDreleasever/USDbasearch/ansible/2.9/os/ enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [rhel-7-server-rpms] name=Red Hat Enterprise Linux 7 Server RPMs x86_64 baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/rhel/server/7/7Server/x86_64/os/ enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [rhel-7-server-satellite-6.11-rpms] name=Red Hat Satellite 6 for RHEL 7 Server RPMs x86_64 baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/rhel/server/7/7Server/x86_64/satellite/6.11/os/ enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt [rhel-7-server-satellite-maintenance-6.11-rpms] name=Red Hat Satellite Maintenance 6 for RHEL 7 Server RPMs x86_64 baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/rhel/server/7/7Server/x86_64/sat-maintenance/6/os/ enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [rhel-server-rhscl-7-rpms] name=Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server x86_64 baseurl=https://satellite.example.com/pulp/content/My_Organization/Library/content/dist/rhel/server/7/7Server/x86_64/rhscl/1/os/ enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1",
"hammer organization list",
"reposync --delete --download-metadata -p ~/Satellite-repos -n -r rhel-7-server-ansible-2.9-rpms -r rhel-7-server-rpms -r rhel-7-server-satellite-6.11-rpms -r rhel-7-server-satellite-maintenance-6.11-rpms -r rhel-server-rhscl-7-rpms",
"satellite-maintain packages install createrepo",
"cd ~/Satellite-repos for directory in */ do echo \"Processing USDdirectory\" cd USDdirectory createrepo . cd .. done",
"cd ~ tar czf Satellite-repos.tgz Satellite-repos",
"cd /root tar zxf Satellite-repos.tgz",
"[rhel-7-server-ansible-2.9-rpms] name=Ansible 2.9 RPMs for Red Hat Enterprise Linux 7 Server x86_64 baseurl=file:///root/Satellite-repos/rhel-7-server-ansible-2.9-rpms enabled=1 [rhel-7-server-rpms] name=Red Hat Enterprise Linux 7 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/rhel-7-server-rpms enabled=1 [rhel-7-server-satellite-6.11-rpms] name=Red Hat Satellite 6 for RHEL 7 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/rhel-7-server-satellite-6.11-rpms enabled=1 [rhel-7-server-satellite-maintenance-6.11-rpms] name=Red Hat Satellite Maintenance 6 for RHEL 7 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/rhel-7-server-satellite-maintenance-6.11-rpms enabled=1 [rhel-server-rhscl-7-rpms] name=Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server x86_64 baseurl=file:///root/Satellite-repos/rhel-server-rhscl-7-rpms enabled=1",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --whitelist=\"check-upstream-repository,repositories-validate\" --target-version 6.11. z",
"satellite-maintain upgrade run --whitelist=\"check-upstream-repository,repositories-setup,repositories-validate\" --target-version 6.11. z",
"rpm -qa --last | grep kernel",
"satellite-maintain service stop reboot",
"subscription-manager repos --enable rhel-7-server-satellite-maintenance-6.11-rpms",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.11. z",
"satellite-maintain upgrade run --target-version 6.11. z",
"rpm -qa --last | grep kernel",
"satellite-maintain service stop reboot"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/upgrading_and_updating_red_hat_satellite/updating_satellite |
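Before running the upgrade commands above against the locally imported repositories, it can help to confirm that every repository extracted under /root/Satellite-repos actually contains the metadata generated by createrepo and is visible to yum. The following shell sketch is not part of the official procedure; it only assumes the directory layout and repository IDs shown earlier in this procedure.
#!/bin/bash
# Check that each synced repository has the repodata generated by createrepo.
set -e
for repo in rhel-7-server-ansible-2.9-rpms rhel-7-server-rpms rhel-7-server-satellite-6.11-rpms rhel-7-server-satellite-maintenance-6.11-rpms rhel-server-rhscl-7-rpms
do
    if [ ! -f "/root/Satellite-repos/${repo}/repodata/repomd.xml" ]; then
        echo "Missing repository metadata for ${repo}" >&2
        exit 1
    fi
done
# Confirm that yum resolves the file:// baseurls defined in the local .repo file.
yum repolist enabled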
Chapter 5. Load balancing traffic with HAProxy | Chapter 5. Load balancing traffic with HAProxy The HAProxy service provides load balancing of traffic to Controller nodes in the high availability cluster, as well as logging and sample configurations. The haproxy package contains the haproxy daemon, which corresponds to the systemd service of the same name. Pacemaker manages the HAProxy service as a highly available service called haproxy-bundle . 5.1. How HAProxy works Director can configure most Red Hat OpenStack Platform services to use the HAProxy service. Director configures those services in the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file, which instructs HAProxy to run in a dedicated container on each overcloud node. The following table shows the list of services that HAProxy manages: Table 5.1. Services managed by HAProxy aodh cinder glance_api gnocchi haproxy.stats heat_api heat_cfn horizon keystone_admin keystone_public mysql neutron nova_metadata nova_novncproxy nova_osapi nova_placement For each service in the haproxy.cfg file, you can see the following properties: listen : The name of the service that is listening for requests. bind : The IP address and TCP port number on which the service is listening. server : The name of each Controller node server that uses HAProxy, the IP address and listening port, and additional information about the server. The following example shows the OpenStack Block Storage (cinder) service configuration in the haproxy.cfg file: This example output shows the following information about the OpenStack Block Storage (cinder) service: 172.16.0.10:8776 : Virtual IP address and port on the Internal API network (VLAN201) to use within the overcloud. 192.168.1.150:8776 : Virtual IP address and port on the External network (VLAN100) that provides access to the API network from outside the overcloud. 8776 : Port number on which the OpenStack Block Storage (cinder) service is listening. server : Controller node names and IP addresses. HAProxy can direct requests made to those IP addresses to one of the Controller nodes listed in the server output. httpchk : Enables health checks on the Controller node servers. fall 5 : Number of failed health checks to determine that the service is offline. inter 2000 : Interval between two consecutive health checks in milliseconds. rise 2 : Number of successful health checks to determine that the service is running. For more information about settings you can use in the haproxy.cfg file, see the /usr/share/doc/haproxy-[VERSION]/configuration.txt file on any node where the haproxy package is installed. 5.2. Viewing HAProxy Stats By default, the director also enables HAProxy Stats, or statistics, on all HA deployments. With this feature, you can view detailed information about data transfer, connections, and server states on the HAProxy Stats page. The director also sets the IP:Port address that you use to reach the HAProxy Stats page and stores the information in the haproxy.cfg file. Prerequisites High availability is deployed and running. Procedure Open the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file in any Controller node where HAProxy is installed. Locate the listen haproxy.stats section: In a Web browser, navigate to 10.200.0.6:1993 and enter the credentials from the stats auth row to view the HAProxy Stats page. 5.3. Additional resources HAProxy 1.8 documentation How can I verify my haproxy.cfg is correctly configured to load balance openstack services? | [
"listen cinder bind 172.16.0.10:8776 bind 192.168.1.150:8776 mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0 172.16.0.13:8777 check fall 5 inter 2000 rise 2 server overcloud-controller-1 172.16.0.14:8777 check fall 5 inter 2000 rise 2 server overcloud-controller-2 172.16.0.15:8777 check fall 5 inter 2000 rise 2",
"listen haproxy.stats bind 10.200.0.6:1993 mode http stats enable stats uri / stats auth admin:<haproxy-stats-password>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/high_availability_deployment_and_usage/assembly_load-balacing-traffic-with-haproxy_rhosp |
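The status information shown on the HAProxy Stats page can also be retrieved from the command line, which is convenient for scripted health checks. The following sketch is not taken from the product documentation; it assumes the stats address, port, and admin credentials from the haproxy.cfg excerpt above, and the CSV column layout used by HAProxy 1.8, in which the eighteenth field holds the server status.
#!/bin/bash
# Query the HAProxy Stats endpoint in CSV form and print the status of each
# Controller node server behind the cinder listener.
STATS_URL="http://10.200.0.6:1993/;csv"
# Replace <haproxy-stats-password> with the value from the stats auth line.
curl -s -u "admin:<haproxy-stats-password>" "$STATS_URL" \
  | awk -F, '$1 == "cinder" && $2 != "FRONTEND" && $2 != "BACKEND" { print $2, $18 }'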
Chapter 2. Start here with OpenShift Virtualization | Chapter 2. Start here with OpenShift Virtualization Use the following tables to find content to help you learn about and use OpenShift Virtualization. 2.1. Cluster administrator Learn Plan Deploy Additional resources Learn about OpenShift Virtualization Configuring your cluster for OpenShift Virtualization Updating your node network configuration Getting Support Learn more about OpenShift Container Platform Plan storage for virtual machine disks Configuring CSI volumes Learn about virtual machine live migration Installing OpenShift Virtualization using the OpenShift Virtualization console or CLI Learn about node maintenance 2.2. Virtualization administrator Learn Deploy Manage Use Learn about OpenShift Virtualization Connecting virtual machines to the default pod network for virtual machines and external networks Enabling the virtctl client Importing virtual machines with the Migration Toolkit for containers Learn about storage features for virtual machine disks Customizing the storage profile Using the CLI tools Using live migration Creating boot sources and attaching them to templates Viewing logs and events Updating boot source templates Monitoring virtual machine health 2.3. Virtual machine administrator / developer Learn Use Manage Additional resources Learn about OpenShift Virtualization Enabling the virtctl client Viewing logs and events Getting Support Creating virtual machines Monitoring virtual machine health Managing virtual machines instances Creating and managing virtual machine snapshots Controlling virtual machine states Accessing the virtual machine consoles Pass configuration data to virtual machines using secrets, configuration maps, and service accounts | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/virtualization/start-here-with-openshift-virtualization |
Chapter 5. Installing Ansible Automation Platform Operator from the OpenShift Container Platform CLI | Chapter 5. Installing Ansible Automation Platform Operator from the OpenShift Container Platform CLI Use these instructions to install the Ansible Automation Platform Operator on Red Hat OpenShift Container Platform from the OpenShift Container Platform command-line interface (CLI) using the oc command. 5.1. Prerequisites Access to Red Hat OpenShift Container Platform using an account with operator installation permissions. The OpenShift Container Platform CLI oc command is installed on your local system. Refer to Installing the OpenShift CLI in the Red Hat OpenShift Container Platform product documentation for further information. 5.2. Subscribing a namespace to an operator using the OpenShift Container Platform CLI Create a project for the operator oc new-project ansible-automation-platform Create a file called sub.yaml . Add the following YAML code to the sub.yaml file. --- apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: ansible-automation-platform --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: 'stable-2.3' installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace --- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: example namespace: ansible-automation-platform spec: replicas: 1 This file creates a Subscription object called ansible-automation-platform that subscribes the ansible-automation-platform namespace to the ansible-automation-platform-operator operator. It then creates an AutomationController object called example in the ansible-automation-platform namespace. To change the automation controller name from example , edit the name field in the kind: AutomationController section of sub.yaml and replace <automation_controller_name> with the name you want to use: apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: <automation_controller_name> namespace: ansible-automation-platform Run the oc apply command to create the objects specified in the sub.yaml file: oc apply -f sub.yaml To verify that the namespace has been successfully subscribed to the ansible-automation-platform-operator operator, run the oc get subs command: $ oc get subs -n ansible-automation-platform For further information about subscribing namespaces to operators, see Installing from OperatorHub using the CLI in the Red Hat OpenShift Container Platform Operators guide. You can use the OpenShift Container Platform CLI to fetch the web address and the password of the Automation controller that you created. 5.3. Fetching Automation controller login details from the OpenShift Container Platform CLI To log in to the Automation controller, you need the web address and the password. 5.3.1. Fetching the automation controller web address A Red Hat OpenShift Container Platform route exposes a service at a host name, so that external clients can reach it by name. When you created the automation controller instance, a route was created for it.
The route inherits the name that you assigned to the automation controller object in the YAML file. Use the following command to fetch the routes: oc get routes -n <controller_namespace> In the following example, the example automation controller is running in the ansible-automation-platform namespace. $ oc get routes -n ansible-automation-platform NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD example example-ansible-automation-platform.apps-crc.testing example-service http edge/Redirect None The address for the automation controller instance is example-ansible-automation-platform.apps-crc.testing . 5.3.2. Fetching the automation controller password The YAML block for the automation controller instance in sub.yaml assigns values to the name and admin_user keys. Use these values in the following command to fetch the password for the automation controller instance. oc get secret/<controller_name>-<admin_user>-password -o yaml The default value for admin_user is admin . Modify the command if you changed the admin username in sub.yaml . The following example retrieves the password for an automation controller object called example : oc get secret/example-admin-password -o yaml The password for the automation controller instance is listed in the metadata field in the output: $ oc get secret/example-admin-password -o yaml apiVersion: v1 data: password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL kind: Secret metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"Secret","metadata":{"labels":{"app.kubernetes.io/component":"automationcontroller","app.kubernetes.io/managed-by":"automationcontroller-operator","app.kubernetes.io/name":"example","app.kubernetes.io/operator-version":"","app.kubernetes.io/part-of":"example"},"name":"example-admin-password","namespace":"ansible-automation-platform"},"stringData":{"password":"88TG88TG88TG88TG88TG88TG88TG88TG"}}' creationTimestamp: "2021-11-03T00:02:24Z" labels: app.kubernetes.io/component: automationcontroller app.kubernetes.io/managed-by: automationcontroller-operator app.kubernetes.io/name: example app.kubernetes.io/operator-version: "" app.kubernetes.io/part-of: example name: example-admin-password namespace: ansible-automation-platform resourceVersion: "185185" uid: 39393939-5252-4242-b929-665f665f665f For this example, the password is 88TG88TG88TG88TG88TG88TG88TG88TG . 5.4. Additional resources For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide. | [
"new-project ansible-automation-platform",
"--- apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: ansible-automation-platform --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: 'stable-2.3' installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace --- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: example namespace: ansible-automation-platform spec: replicas: 1",
"apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: <automation_controller_name> namespace: ansible-automation-platform",
"apply -f sub.yaml",
"oc get subs -n ansible-automation-platform",
"get routes -n <controller_namespace>",
"oc get routes -n ansible-automation-platform NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD example example-ansible-automation-platform.apps-crc.testing example-service http edge/Redirect None",
"get secret/<controller_name>-<admin_user>-password -o yaml",
"get secret/example-admin-password -o yaml",
"oc get secret/example-admin-password -o yaml apiVersion: v1 data: password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL kind: Secret metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: '{\"apiVersion\":\"v1\",\"kind\":\"Secret\",\"metadata\":{\"labels\":{\"app.kubernetes.io/component\":\"automationcontroller\",\"app.kubernetes.io/managed-by\":\"automationcontroller-operator\",\"app.kubernetes.io/name\":\"example\",\"app.kubernetes.io/operator-version\":\"\",\"app.kubernetes.io/part-of\":\"example\"},\"name\":\"example-admin-password\",\"namespace\":\"ansible-automation-platform\"},\"stringData\":{\"password\":\"88TG88TG88TG88TG88TG88TG88TG88TG\"}}' creationTimestamp: \"2021-11-03T00:02:24Z\" labels: app.kubernetes.io/component: automationcontroller app.kubernetes.io/managed-by: automationcontroller-operator app.kubernetes.io/name: example app.kubernetes.io/operator-version: \"\" app.kubernetes.io/part-of: example name: example-admin-password namespace: ansible-automation-platform resourceVersion: \"185185\" uid: 39393939-5252-4242-b929-665f665f665f"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/installing-aap-operator-cli |
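The two lookups described above, the controller route and the admin password, can be combined into a single shell sketch. This is only an illustration, not part of the product documentation; it assumes the example controller name and the ansible-automation-platform namespace used in sub.yaml, and relies on the fact that the password is stored base64-encoded in the data.password field of the generated secret.
#!/bin/bash
NAMESPACE=ansible-automation-platform
CONTROLLER=example
# The route created for the automation controller carries its external host name.
HOST=$(oc get route "${CONTROLLER}" -n "${NAMESPACE}" -o jsonpath='{.spec.host}')
# Decode the base64-encoded admin password from the generated secret.
PASSWORD=$(oc get secret "${CONTROLLER}-admin-password" -n "${NAMESPACE}" -o jsonpath='{.data.password}' | base64 --decode)
echo "Automation controller: https://${HOST}"
echo "admin password: ${PASSWORD}"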
6.3. Configuration Suggestions | 6.3. Configuration Suggestions Red Hat Enterprise Linux provides a number of tools to assist administrators in configuring the system. This section outlines the available tools and provides examples of how they can be used to solve processor related performance problems in Red Hat Enterprise Linux 7. 6.3.1. Configuring Kernel Tick Time By default, Red Hat Enterprise Linux 7 uses a tickless kernel, which does not interrupt idle CPUs in order to reduce power usage and allow newer processors to take advantage of deep sleep states. Red Hat Enterprise Linux 7 also offers a dynamic tickless option (disabled by default), which is useful for very latency-sensitive workloads, such as high performance computing or realtime computing. To enable dynamic tickless behavior in certain cores, specify those cores on the kernel command line with the nohz_full parameter. On a 16 core system, specifying nohz_full=1-15 enables dynamic tickless behavior on cores 1 through 15, moving all timekeeping to the only unspecified core (core 0). This behavior can be enabled either temporarily at boot time, or persistently via the GRUB_CMDLINE_LINUX option in the /etc/default/grub file. For persistent behavior, run the grub2-mkconfig -o /boot/grub2/grub.cfg command to save your configuration. Enabling dynamic tickless behavior does require some manual administration. When the system boots, you must manually move rcu threads to the non-latency-sensitive core, in this case core 0. Use the isolcpus parameter on the kernel command line to isolate certain cores from user-space tasks. Optionally, set CPU affinity for the kernel's write-back bdi-flush threads to the housekeeping core: Verify that the dynamic tickless configuration is working correctly by executing the following command, where stress is a program that spins on the CPU for 1 second. One possible replacement for stress is a script that runs something like while :; do d=1; done . The default kernel timer configuration shows 1000 ticks on a busy CPU: With the dynamic tickless kernel configured, you should see 1 tick instead: 6.3.2. Setting Hardware Performance Policy (x86_energy_perf_policy) The x86_energy_perf_policy tool allows administrators to define the relative importance of performance and energy efficiency. This information can then be used to influence processors that support this feature when they select options that trade off between performance and energy efficiency. By default, it operates on all processors in performance mode. It requires processor support, which is indicated by the presence of CPUID.06H.ECX.bit3 , and must be run with root privileges. x86_energy_perf_policy is provided by the kernel-tools package. For details of how to use x86_energy_perf_policy , see Section A.9, "x86_energy_perf_policy" or refer to the man page: 6.3.3. Setting Process Affinity with taskset The taskset tool is provided by the util-linux package. Taskset allows administrators to retrieve and set the processor affinity of a running process, or launch a process with a specified processor affinity. Important taskset does not guarantee local memory allocation. If you require the additional performance benefits of local memory allocation, Red Hat recommends using numactl instead of taskset . For more information about taskset , see Section A.15, "taskset" or the man page: 6.3.4. Managing NUMA Affinity with numactl Administrators can use numactl to run a process with a specified scheduling or memory placement policy. 
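As a quick illustration of taskset and numactl as just described, the following minimal sketch shows both tools in use. The myapp binary, the process ID, and the CPU and node numbers are placeholders invented for this example, not values from this guide; adjust them for your own system and topology.
# Run myapp restricted to CPUs 2-5; taskset controls CPU placement only,
# not where memory is allocated.
taskset -c 2-5 ./myapp
# Change the CPU affinity of an already running process (PID 1234).
taskset -pc 2-5 1234
# Run myapp with both its CPUs and its memory allocations bound to NUMA node 0,
# keeping memory accesses local to that node.
numactl --cpunodebind=0 --membind=0 ./myapp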
Numactl can also set a persistent policy for shared memory segments or files, and set the processor affinity and memory affinity of a process. In a system with NUMA topology, a processor's memory access slows as the distance between the processor and the memory bank increases. Therefore, it is important to configure applications that are sensitive to performance so that they allocate memory from the closest possible memory bank. It is best to use memory and CPUs that are in the same NUMA node. Multi-threaded applications that are sensitive to performance may benefit from being configured to execute on a specific NUMA node rather than a specific processor. Whether this is suitable depends on your system and the requirements of your application. If multiple application threads access the same cached data, then configuring those threads to execute on the same processor may be suitable. However, if multiple threads that access and cache different data execute on the same processor, each thread may evict cached data accessed by a thread. This means that each thread 'misses' the cache, and wastes execution time fetching data from memory and replacing it in the cache. You can use the perf tool, as documented in Section A.6, "perf" , to check for an excessive number of cache misses. Numactl provides a number of options to assist you in managing processor and memory affinity. See Section A.11, "numastat" or the man page for details: Note The numactl package includes the libnuma library. This library offers a simple programming interface to the NUMA policy supported by the kernel, and can be used for more fine-grained tuning than the numactl application. For more information, see the man page: 6.3.5. Automatic NUMA Affinity Management with numad numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource usage within a system in order to dynamically improve NUMA resource allocation and management. numad also provides a pre-placement advice service that can be queried by various job management systems to provide assistance with the initial binding of CPU and memory resources for their processes. This pre-placement advice is available regardless of whether numad is running as an executable or a service. For details of how to use numad , see Section A.13, "numad" or refer to the man page: 6.3.6. Tuning Scheduling Policy The Linux scheduler implements a number of scheduling policies, which determine where and for how long a thread runs. There are two major categories of scheduling policies: normal policies and realtime policies. Normal threads are used for tasks of normal priority. Realtime policies are used for time-sensitive tasks that must complete without interruptions. Realtime threads are not subject to time slicing. This means they will run until they block, exit, voluntarily yield, or are pre-empted by a higher priority thread. The lowest priority realtime thread is scheduled before any thread with a normal policy. 6.3.6.1. Scheduling Policies 6.3.6.1.1. Static Priority Scheduling with SCHED_FIFO SCHED_FIFO (also called static priority scheduling) is a realtime policy that defines a fixed priority for each thread. This policy allows administrators to improve event response time and reduce latency, and is recommended for time sensitive tasks that do not run for an extended period of time. When SCHED_FIFO is in use, the scheduler scans the list of all SCHED_FIFO threads in priority order and schedules the highest priority thread that is ready to run. 
The priority level of a SCHED_FIFO thread can be any integer from 1 to 99, with 99 treated as the highest priority. Red Hat recommends starting at a low number and increasing priority only when you identify latency issues. Warning Because realtime threads are not subject to time slicing, Red Hat does not recommend setting a priority of 99. This places your process at the same priority level as migration and watchdog threads; if your thread goes into a computational loop and these threads are blocked, they will not be able to run. Systems with a single processor will eventually hang in this situation. Administrators can limit SCHED_FIFO bandwidth to prevent realtime application programmers from initiating realtime tasks that monopolize the processor. /proc/sys/kernel/sched_rt_period_us This parameter defines the time period in microseconds that is considered to be one hundred percent of processor bandwidth. The default value is 1000000 microseconds, or 1 second. /proc/sys/kernel/sched_rt_runtime_us This parameter defines the time period in microseconds that is devoted to running realtime threads. The default value is 950000 microseconds, or 0.95 seconds. 6.3.6.1.2. Round Robin Priority Scheduling with SCHED_RR SCHED_RR is a round-robin variant of SCHED_FIFO . This policy is useful when multiple threads need to run at the same priority level. Like SCHED_FIFO , SCHED_RR is a realtime policy that defines a fixed priority for each thread. The scheduler scans the list of all SCHED_RR threads in priority order and schedules the highest priority thread that is ready to run. However, unlike SCHED_FIFO , threads that have the same priority are scheduled round-robin style within a certain time slice. You can set the value of this time slice in milliseconds with the sched_rr_timeslice_ms kernel parameter ( /proc/sys/kernel/sched_rr_timeslice_ms ). The lowest value is 1 millisecond. 6.3.6.1.3. Normal Scheduling with SCHED_OTHER SCHED_OTHER is the default scheduling policy in Red Hat Enterprise Linux 7. This policy uses the Completely Fair Scheduler (CFS) to allow fair processor access to all threads scheduled with this policy. This policy is most useful when there are a large number of threads or data throughput is a priority, as it allows more efficient scheduling of threads over time. When this policy is in use, the scheduler creates a dynamic priority list based partly on the niceness value of each process thread. Administrators can change the niceness value of a process, but cannot change the scheduler's dynamic priority list directly. For details about changing process niceness, see the Red Hat Enterprise Linux 7 System Administrator's Guide . 6.3.6.2. Isolating CPUs You can isolate one or more CPUs from the scheduler with the isolcpus boot parameter. This prevents the scheduler from scheduling any user-space threads on this CPU. Once a CPU is isolated, you must manually assign processes to the isolated CPU, either with the CPU affinity system calls or the numactl command. To isolate the third and sixth to eighth CPUs on your system, add the following to the kernel command line: You can also use the Tuna tool to isolate a CPU. Tuna can isolate a CPU at any time, not just at boot time. However, this method of isolation is subtly different from the isolcpus parameter, and does not currently achieve the performance gains associated with isolcpus . See Section 6.3.8, "Configuring CPU, Thread, and Interrupt Affinity with Tuna" for more details about this tool. 6.3.7.
Setting Interrupt Affinity on AMD64 and Intel 64 Interrupt requests have an associated affinity property, smp_affinity , which defines the processors that will handle the interrupt request. To improve application performance, assign interrupt affinity and process affinity to the same processor, or processors on the same core. This allows the specified interrupt and application threads to share cache lines. Important This section covers only the AMD64 and Intel 64 architecture. Interrupt affinity configuration is significantly different on other architectures. Procedure 6.1. Balancing Interrupts Automatically If your BIOS exports its NUMA topology, the irqbalance service can automatically serve interrupt requests on the node that is local to the hardware requesting service. For details on configuring irqbalance , see Section A.1, "irqbalance" . Procedure 6.2. Balancing Interrupts Manually Check which devices correspond to the interrupt requests that you want to configure. Starting with Red Hat Enterprise Linux 7.5, the system configures the optimal interrupt affinity for certain devices and their drivers automatically. You can no longer configure their affinity manually. This applies to the following devices: Devices using the be2iscsi driver NVMe PCI devices Find the hardware specification for your platform. Check if the chipset on your system supports distributing interrupts. If it does, you can configure interrupt delivery as described in the following steps. Additionally, check which algorithm your chipset uses to balance interrupts. Some BIOSes have options to configure interrupt delivery. If it does not, your chipset will always route all interrupts to a single, static CPU. You cannot configure which CPU is used. Check which Advanced Programmable Interrupt Controller (APIC) mode is in use on your system. Only non-physical flat mode ( flat ) supports distributing interrupts to multiple CPUs. This mode is available only for systems that have up to 8 CPUs. In the command output: If your system uses a mode other than flat , you can see a line similar to Setting APIC routing to physical flat . If you can see no such message, your system uses flat mode. If your system uses x2apic mode, you can disable it by adding the nox2apic option to the kernel command line in the bootloader configuration. Calculate the smp_affinity mask. The smp_affinity value is stored as a hexadecimal bit mask representing all processors in the system. Each bit configures a different CPU. The least significant bit is CPU 0. The default value of the mask is f , meaning that an interrupt request can be handled on any processor in the system. Setting this value to 1 means that only processor 0 can handle the interrupt. Procedure 6.3. Calculating the Mask In binary, use the value 1 for CPUs that will handle the interrupts. For example, to handle interrupts by CPU 0 and CPU 7, use 0000000010000001 as the binary code: Table 6.1. Binary Bits for CPUs CPU 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 Binary 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 Convert the binary code to hexadecimal. For example, to convert the binary code using Python: On systems with more than 32 processors, you must delimit smp_affinity values for discrete 32 bit groups. For example, if you want only the first 32 processors of a 64 processor system to service an interrupt request, use 0xffffffff,00000000 . Set the smp_affinity mask. The interrupt affinity value for a particular interrupt request is stored in the associated /proc/irq/ irq_number /smp_affinity file. 
Write the calculated mask to the associated file: Additional Resources On systems that support interrupt steering, modifying the smp_affinity property of an interrupt request sets up the hardware so that the decision to service an interrupt with a particular processor is made at the hardware level with no intervention from the kernel. For more information about interrupt steering, see Chapter 9, Networking . 6.3.8. Configuring CPU, Thread, and Interrupt Affinity with Tuna Tuna is a tool for tuning running processes and can control CPU, thread, and interrupt affinity, and also provides a number of actions for each type of entity it can control. For information about Tuna , see Chapter 4, Tuna . | [
"for i in `pgrep rcu[^c]` ; do taskset -pc 0 USDi ; done",
"echo 1 > /sys/bus/workqueue/devices/writeback/cpumask",
"perf stat -C 1 -e irq_vectors:local_timer_entry taskset -c 1 stress -t 1 -c 1",
"perf stat -C 1 -e irq_vectors:local_timer_entry taskset -c 1 stress -t 1 -c 1 1000 irq_vectors:local_timer_entry",
"perf stat -C 1 -e irq_vectors:local_timer_entry taskset -c 1 stress -t 1 -c 1 1 irq_vectors:local_timer_entry",
"man x86_energy_perf_policy",
"man taskset",
"man numactl",
"man numa",
"man numad",
"isolcpus=2,5-7",
"journalctl --dmesg | grep APIC",
">>> hex(int('0000000010000001', 2)) '0x81'",
"echo mask > /proc/irq/ irq_number /smp_affinity"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-CPU-Configuration_suggestions |
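The mask calculation in Procedure 6.3 can also be done directly in the shell. The sketch below is only an illustration: IRQ 32 is a made-up example, and the CPU list reproduces the CPU 0 and CPU 7 case from Table 6.1 (mask 0x81). Pick the interrupt number for your device from /proc/interrupts and the CPUs you actually want to service it; writing the mask requires root privileges.
#!/bin/bash
# Build the smp_affinity bit mask for CPUs 0 and 7 and apply it to IRQ 32.
IRQ=32
MASK=0
for cpu in 0 7; do
    MASK=$(( MASK | (1 << cpu) ))
done
printf 'IRQ %s affinity mask: %x\n' "$IRQ" "$MASK"
printf '%x' "$MASK" > "/proc/irq/${IRQ}/smp_affinity"
# Read the value back to confirm the kernel accepted it.
cat "/proc/irq/${IRQ}/smp_affinity"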
Chapter 2. Release notes | Chapter 2. Release notes 2.1. Red Hat OpenShift support for Windows Containers release notes The release notes for Red Hat OpenShift for Windows Containers tracks the development of the Windows Machine Config Operator (WMCO), which provides all Windows container workload capabilities in OpenShift Container Platform. 2.1.1. Release notes for Red Hat Windows Machine Config Operator 7.2.2 This release of the WMCO provides a security update and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 7.2.2 were released in RHSA-2024:6734 . 2.2. Release notes for past releases of the Windows Machine Config Operator The following release notes are for versions of the Windows Machine Config Operator (WMCO). For the current version, see Red Hat OpenShift support for Windows Containers release notes . 2.2.1. Release notes for Red Hat Windows Machine Config Operator 7.2.1 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 7.2.1 were released in RHBA-2024:1476 . 2.2.1.1. Bug fixes Previously, the WMCO did not properly wait for Windows virtual machines (VMs) to finish rebooting. This led to occasional timing issues where the WMCO would attempt to interact with a node that was in the middle of a reboot, causing WMCO to log an error and restart node configuration. Now, the WMCO waits for the instance to completely reboot. ( OCPBUGS-23036 ) Previously, the WMCO configuration was missing the DeleteEmptyDirData: true field, which is required for draining nodes that have emptyDir volumes attached. As a consequence, customers that had nodes with emptyDir volumes would see the following error in the logs: cannot delete Pods with local storage . With this fix, the DeleteEmptyDirData: true field was added to the node drain helper struct in the WMCO. As a result, customers are able to drain nodes with emptyDir volumes attached. ( OCPBUGS-23081 ) Previously, because of bad logic in the networking configuration script, the WICD was incorrectly reading carriage returns in the CNI configuration file as changes, and identified the file as modified. This caused the CNI configuration to be unnecessarily reloaded, potentially resulting in container restarts and brief network outages. With this fix, the WICD now reloads the CNI configuration only when the CNI configuration is actually modified. ( OCPBUGS-27771 ) Previously, the WMCO incorrectly approved the node certificate signing requests (CSR) for all nodes trying to join a cluster, not just Windows node CSRs. With this fix, the WMCO approves CSRs for only Windows nodes as expected. ( OCPBUGS-27139 ) Previously, because of routing issues present in Windows Server 2019, under certain conditions and after more than one hour of running time, workloads on Windows Server 2019 could have experienced packet loss when communicating with other containers in the cluster. This fix enables Direct Server Return (DSR) routing within kube-proxy. As a result, DSR now causes request and response traffic to use a different network path, circumventing the bug within Windows Server 2019. ( OCPBUGS-28254 ) Previously, because the upgrade path from WMCO 6.x to WMCO 7.x included previously released versions, the WMCO would fail during the upgrade. With this fix, you can successfully upgrade from WMCO 6.x to WMCO 7.x. 
( OCPBUGS-27775 ) Previously, because of a lack of synchronization between Windows compute machine set nodes and Bring-Your-Own-Host (BYOH) instances, during an update the compute machine set nodes and the BYOH instances could update simultaneously, which could have impacted running workloads. This fix introduces a locking mechanism so that compute machine set nodes and BYOH instances update individually. ( OCPBUGS-23020 ) 2.2.2. Release notes for Red Hat Windows Machine Config Operator 7.1.0 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 7.1.0 were released in RHSA-2023:4025 . Important Due to a known issue, your OpenShift Container Platform cluster must be on version 4.12.3 or greater before updating the WMCO from version 7.0.1 to version 7.1.0. The update fails if the cluster is lower than version 4.12.3. 2.2.2.1. Bug fixes Previously, the containerd container runtime reported an incorrect version on each Windows node because repository tags were not propagated to the build system. This configuration caused containerd to report its go build version as the version of each Windows node. With this update, the correct version is injected into the binary during build time, so that containerd reports the correct version for each Windows node. ( OCPBUGS-7843 ) Previously, the Windows Machine Config Operator (WMCO) could not drain daemon set workloads. This issue caused Windows daemon set pods to block Windows nodes that the WMCO attempted to remove or update. With this update, the WMCO includes additional role-based access control (RBAC) permissions, so that the WMCO can remove daemon set workloads. The WMCO can also delete any processes that were created with the containerd shim, so that daemon set containers do not exist on a Windows instance after a WMCO removes a node from a cluster. ( OCPBUGS-8056 ) Previously, on an Azure Windows Server 2019 platform that does not have Azure container services installed, WMCO would fail to deploy Windows instances and would display the Install-WindowsFeature : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer error message. The failure occurred because the Microsoft Install-WindowsFeature cmdlet displays a progress bar, which cannot be sent over an SSH connection. This fix hides the progress bar. As a result, Windows instances can be deployed as nodes. ( OCPBUGS-14445 ) 2.2.3. Release notes for Red Hat Windows Machine Config Operator 7.0.1 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 7.0.1 were released in RHBA-2023:0748 . 2.2.3.1. Bug fixes Previously, WMCO 7.0.0 did not support running in a namespace other than openshift-windows-machine-operator . With this fix, you can run WMCO in a custom namespace and can upgrade clusters that have WMCO installed in a custom namespace. ( OCPBUGS-5065 ) 2.2.4. Release notes for Red Hat Windows Machine Config Operator 7.0.0 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 7.0.0 were released in RHSA-2022:9096 . 2.2.4.1. New features and improvements 2.2.4.1.1. 
Windows Instance Config Daemon (WICD) The Windows Instance Config Daemon (WICD) is now performing many of the tasks that were previously performed by the Windows Machine Config Bootstrapper (WMCB). The WICD is installed on your Windows nodes. Users do not need to interact with the WICD and should not experience any difference in WMCO operation. 2.2.4.1.2. Support for clusters running on Google Cloud Platform You can now run Windows Server 2022 nodes on a cluster installed on Google Cloud Platform (GCP). You can create a Windows MachineSet object on GCP to host Windows Server 2022 compute nodes. For more information, see Creating a Windows MachineSet object on vSphere . 2.2.4.2. Bug fixes Previously, restarting the WMCO in a cluster with running Windows Nodes caused the windows exporter endpoint to be removed. Because of this, each Windows node could not report any metrics data. After this fix, the endpoint is retained when the WMCO is restarted. As a result, metrics data is reported properly after restarting WMCO. ( BZ#2107261 ) Previously, the test to determine if the Windows Defender antivirus service is running was incorrectly checking for any process whose name started with Windows Defender, regardless of state. This resulted in an error when creating firewall exclusions for containerd on instances without Windows Defender installed. This fix now checks for the presence of the specific running process associated with the Windows Defender antivirus service. As a result, the WMCO can properly configure Windows instances as nodes regardless of whether Windows Defender is installed or not. ( OCPBUGS-3573 ) 2.2.4.3. Known issues The following known limitations have been announced after the WMCO release: OpenShift Serverless, Horizontal Pod Autoscaling, and Vertical Pod Autoscaling are not supported on Windows nodes. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. WMCO 7.0.0 does not support running in a namespace other than openshift-windows-machine-operator . If you are using a custom namespace, it is recommended that you not upgrade to WMCO 7.0.0. Instead, you should upgrade to WMCO 7.0.1 when it is released. If your WMCO is configured with the Automatic update approval strategy, you should change it to Manual for WMCO 7.0.0. See the installation instructions for information on changing the approval strategy. Additional resources See the full list of known limitations 2.3. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. 2.3.1. WMCO 7.2.x supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 7.2.0 and 7.2.1, based on the applicable platform. Unlisted Windows Server versions are not supported and attempting to use them will cause errors. 
Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2022, OS Build 20348.681 or later [1] Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 2.3.2. WMCO 7.0 and 7.1 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 7.0.0, 7.0.1, and 7.1.0, based on the applicable platform. Unlisted Windows Server versions are not supported and attempting to use them will cause errors. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 2.4. Windows Machine Config Operator known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes). The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat Insights cost management Red Hat OpenShift Local Dual NIC is not supported on WMCO-managed Windows instances. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Windows nodes are not supported in clusters that use a cluster-wide proxy. This is because the WMCO is not able to route traffic through the proxy connection for the workloads. Windows nodes are not supported in clusters that are in a disconnected environment. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers supports only in-tree storage drivers for all cloud providers. Red Hat OpenShift support for Windows Containers does not support any Windows operating system language other than English (United States). Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Kubernetes has identified several API compatibility issues . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/windows_container_support_for_openshift/release-notes |
Release notes for Red Hat build of OpenJDK 17.0.10 | Release notes for Red Hat build of OpenJDK 17.0.10 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.10/index |
23.10. Assign Storage Devices | 23.10. Assign Storage Devices If you selected more than one storage device on the storage devices selection screen (refer to Section 23.6, "Storage Devices" ), anaconda asks you to select which of these devices should be available for installation of the operating system, and which should only be attached to the file system for data storage. During installation, the devices that you identify here as being for data storage only are mounted as part of the file system, but are not partitioned or formatted. Figure 23.32. Assign storage devices The screen is split into two panes. The left pane contains a list of devices to be used for data storage only. The right pane contains a list of devices that are to be available for installation of the operating system. Each list contains information about the devices to help you to identify them. A small drop-down menu marked with an icon is located to the right of the column headings. This menu allows you to select the types of data presented on each device. Reducing or expanding the amount of information presented might help you to identify particular devices. Move a device from one list to the other by clicking on the device, then clicking either the button labeled with a left-pointing arrow to move it to the list of data storage devices or the button labeled with a right-pointing arrow to move it to the list of devices available for installation of the operating system. The list of devices available as installation targets also includes a radio button beside each device. On platforms other than System z, this radio button is used to specify the device to which you want to install the boot loader. On System z this choice does not have any effect. The zipl boot loader will be installed on the disk that contains the /boot directory, which is determined later on during partitioning. When you have finished identifying devices to be used for installation, click to continue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/assign_storage_devices-s390 |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_power/providing-feedback-on-red-hat-documentation_ibm-power |
Postinstallation configuration | Postinstallation configuration OpenShift Container Platform 4.17 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/postinstallation_configuration/index |
Chapter 1. Building applications overview | Chapter 1. Building applications overview Using OpenShift Dedicated, you can create, edit, delete, and manage applications using the web console or command line interface (CLI). 1.1. Working on a project Using projects, you can organize and manage applications in isolation. You can manage the entire project lifecycle, including creating, viewing, and deleting a project in OpenShift Dedicated. After you create the project, you can grant or revoke access to a project and manage cluster roles for the users using the Developer perspective. You can also edit the project configuration resource while creating a project template that is used for automatic provisioning of new projects. As a user with dedicated administrator permissions, you can choose to prevent an authenticated user group from self-provisioning new projects . 1.2. Working on an application 1.2.1. Creating an application To create applications, you must have created a project or have access to a project with the appropriate roles and permissions. You can create an application by using either the Developer perspective in the web console , installed Operators , or the OpenShift CLI ( oc ) . You can source the applications to be added to the project from Git, JAR files, devfiles, or the developer catalog. You can also use components that include source or binary code, images, and templates to create an application by using the OpenShift CLI ( oc ). With the OpenShift Dedicated web console, you can create an application from an Operator installed by a cluster administrator. 1.2.2. Maintaining an application After you create the application, you can use the web console to monitor your project or application metrics . You can also edit or delete the application using the web console. When the application is running, not all applications resources are used. As a cluster administrator, you can choose to idle these scalable resources to reduce resource consumption. 1.2.3. Deploying an application You can deploy your application using Deployment or DeploymentConfig objects and manage them from the web console. You can create deployment strategies that help reduce downtime during a change or an upgrade to the application. You can also use Helm , a software package manager that simplifies deployment of applications and services to OpenShift Dedicated clusters. 1.3. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace where you can discover and access certified software for container-based environments that run on public clouds and on-premises. | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/building-applications-overview |
Preface | Preface Once you have deployed a Red Hat Quay registry, there are many ways you can further configure and manage that deployment. Topics covered here include: Advanced Red Hat Quay configuration Setting notifications to alert you of a new Red Hat Quay release Securing connections with SSL/TLS certificates Directing action logs storage to Elasticsearch Configuring image security scanning with Clair Scan pod images with the Container Security Operator Integrate Red Hat Quay into OpenShift Container Platform with the Quay Bridge Operator Mirroring images with repository mirroring Sharing Red Hat Quay images with a BitTorrent service Authenticating users with LDAP Enabling Quay for Prometheus and Grafana metrics Setting up geo-replication Troubleshooting Red Hat Quay For a complete list of Red Hat Quay configuration fields, see the Configure Red Hat Quay page. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/pr01 |
Chapter 6. Using Red Hat Single Sign-On Operator with automation hub | Chapter 6. Using Red Hat Single Sign-On Operator with automation hub Private automation hub uses Red Hat Single Sign-On for authentication. The Red Hat Single Sign-On Operator creates and manages resources. Use this Operator to create custom resources to automate Red Hat Single Sign-On administration in Openshift. When installing Ansible Automation Platform on Virtual Machines (VMs) the installer can automatically install and configure Red Hat Single Sign-On for use with private automation hub. When installing Ansible Automation Platform on Red Hat OpenShift Container Platform you must install Single Sign-On separately. This chapter describes the process to configure Red Hat Single Sign-On and integrate it with private automation hub when Ansible Automation Platform is installed on OpenShift Container Platform. Prerequisites You have access to Red Hat OpenShift Container Platform using an account with operator installation permissions. You have installed the catalog containing the Red Hat Ansible Automation Platform operators. You have installed the Red Hat Single Sign-On Operator. To install the Red Hat Single Sign-On Operator, follow the procedure in Installing Red Hat Single Sign-On using a custom resource in the Red Hat Single Sign-On documentation. 6.1. Creating a Keycloak instance When the Red Hat Single Sign-On Operator is installed you can create a Keycloak instance for use with Ansible Automation Platform. From here you provide an external Postgres or one will be created for you. Procedure Navigate to Operator Installed Operators . Select the rh-sso project. Select the Red Hat Single Sign-On Operator . On the Red Hat Single Sign-On Operator details page select Keycloak . Click Create instance . Click YAML view . The default Keycloak custom resource is as follows: apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: example-keycloak labels: app: sso namespace: aap spec: externalAccess: enabled: true instances: 1 Click Create When deployment is complete, you can use this credential to login to the administrative console. You can find the credentials for the administrator in the credential-<custom-resource> (example keycloak) secret in the namespace. 6.2. Creating a Keycloak realm for Ansible Automation Platform Create a realm to manage a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control. Procedure Navigate to Operator Installed Operators . Select the Red Hat Single Sign-On Operator project. Select the Keycloak Realm tab and click Create Keycloak Realm . On the Keycloak Realm form, select YAML view . Edit the YAML file as follows: kind: KeycloakRealm apiVersion: keycloak.org/v1alpha1 metadata: name: ansible-automation-platform-keycloakrealm namespace: rh-sso labels: app: sso realm: ansible-automation-platform spec: realm: id: ansible-automation-platform realm: ansible-automation-platform enabled: true displayName: Ansible Automation Platform instanceSelector: matchLabels: app: sso Field Description metadata.name Set a unique value in metadata for the name of the configuration resource (CR). metadata.namespace Set a unique value in metadata for the name of the configuration resource (CR). metadata.labels.app Set labels to a unique value. This is used when creating the client CR. metadata.labels.realm Set labels to a unique value. 
This is used when creating the client CR. spec.realm.id Set the realm name and id. These must be the same. spec.realm.realm Set the realm name and id. These must be the same. spec.realm.displayname Set the name to display. Click Create and wait for the process to complete. 6.3. Creating a Keycloak client Keycloak clients authenticate hub users with Red Hat Single Sign-On. When a user authenticates the request goes through the Keycloak client. When Single Sign-On validates or issues the OAuth token, the client provides the response to automation hub and the user can log in. Procedure Navigate to Operator Installed Operators . Select the Red Hat Single Sign-On Operator project. Select the Keycloak Client tab and click Create Keycloak Client . On the Keycloak Realm form, select YAML view . Replace the default YAML file with the following: kind: KeycloakClient apiVersion: keycloak.org/v1alpha1 metadata: name: automation-hub-client-secret labels: app: sso realm: ansible-automation-platform namespace: rh-sso spec: realmSelector: matchLabels: app: sso realm: ansible-automation-platform client: name: Automation Hub clientId: automation-hub secret: <client-secret> 1 clientAuthenticatorType: client-secret description: Client for automation hub attributes: user.info.response.signature.alg: RS256 request.object.signature.alg: RS256 directAccessGrantsEnabled: true publicClient: true protocol: openid-connect standardFlowEnabled: true protocolMappers: - config: access.token.claim: "true" claim.name: "family_name" id.token.claim: "true" jsonType.label: String user.attribute: lastName userinfo.token.claim: "true" consentRequired: false name: family name protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper - config: userinfo.token.claim: "true" user.attribute: email id.token.claim: "true" access.token.claim: "true" claim.name: email jsonType.label: String name: email protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: multivalued: "true" access.token.claim: "true" claim.name: "resource_access.USD{client_id}.roles" jsonType.label: String name: client roles protocol: openid-connect protocolMapper: oidc-usermodel-client-role-mapper consentRequired: false - config: userinfo.token.claim: "true" user.attribute: firstName id.token.claim: "true" access.token.claim: "true" claim.name: given_name jsonType.label: String name: given name protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: id.token.claim: "true" access.token.claim: "true" userinfo.token.claim: "true" name: full name protocol: openid-connect protocolMapper: oidc-full-name-mapper consentRequired: false - config: userinfo.token.claim: "true" user.attribute: username id.token.claim: "true" access.token.claim: "true" claim.name: preferred_username jsonType.label: String name: <username> protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: access.token.claim: "true" claim.name: "group" full.path: "true" id.token.claim: "true" userinfo.token.claim: "true" consentRequired: false name: group protocol: openid-connect protocolMapper: oidc-group-membership-mapper - config: multivalued: 'true' id.token.claim: 'true' access.token.claim: 'true' userinfo.token.claim: 'true' usermodel.clientRoleMapping.clientId: 'automation-hub' claim.name: client_roles jsonType.label: String name: client_roles protocolMapper: oidc-usermodel-client-role-mapper protocol: openid-connect - config: id.token.claim: 
"true" access.token.claim: "true" included.client.audience: 'automation-hub' protocol: openid-connect name: audience mapper protocolMapper: oidc-audience-mapper roles: - name: "hubadmin" description: "An administrator role for automation hub" 1 Replace this with a unique value. Click Create and wait for the process to complete. When automation hub is deployed, you must update the client with the "Valid Redirect URIs" and "Web Origins" as described in Updating the Red Hat Single Sign-On client Additionally, the client comes pre-configured with token mappers, however, if your authentication provider does not provide group data to Red Hat SSO, then the group mapping must be updated to reflect how that information is passed. This is commonly by user attribute. 6.4. Creating a Keycloak user This procedure creates a Keycloak user, with the hubadmin role, that can log in to automation hub with Super Administration privileges. Procedure Navigate to Operator Installed Operators . Select the Red Hat Single Sign-On Operator project. Select the Keycloak Realm tab and click Create Keycloak User . On the Keycloak User form, select YAML view . Replace the default YAML file with the following: apiVersion: keycloak.org/v1alpha1 kind: KeycloakUser metadata: name: hubadmin-user labels: app: sso realm: ansible-automation-platform namespace: rh-sso spec: realmSelector: matchLabels: app: sso realm: ansible-automation-platform user: username: hub_admin firstName: Hub lastName: Admin email: [email protected] enabled: true emailVerified: false credentials: - type: password value: <ch8ngeme> clientRoles: automation-hub: - hubadmin Click Create and wait for the process to complete. When a user is created, the Operator creates a Secret containing both the username and password using the following naming pattern: credential-<realm name>-<username>-<namespace> . In this example the credential is called credential-ansible-automation-platform-hub-admin-rh-sso . When a user is created the Operator does not update the user's password. Password changes are not reflected in the Secret. 6.5. Installing the Ansible Automation Platform Operator Procedure Navigate to Operator Operator Hub and search for the Ansible Automation Platform Operator. Select the Ansible Automation Platform Operator project. Click on the Operator tile. Click Install . Select a Project to install the Operator into. Red Hat recommends using the Operator recommended Namespace name. If you want to install the Operator into a project other than the recommended one, select Create Project from the drop down menu. Enter the Project name. Click Create . Click Install . When the Operator has been installed, click View Operator . 6.6. Creating a Red Hat Single Sign-On connection secret Procedure Navigate to https://<sso_host>/auth/realms/ansible-automation-platform . Copy the public_key value. In the OpenShift Web UI, navigate to Workloads Secrets . Select the ansible-automation-platform project. Click Create , and select From YAML . 
Edit the following YAML to create the secret apiVersion: v1 kind: Secret metadata: name: automation-hub-sso 1 namespace: ansible-automation-platform type: Opaque stringData: keycloak_host: "keycloak-rh-sso.apps-crc.testing" keycloak_port: "443" keycloak_protocol: "https" keycloak_realm: "ansible-automation-platform" keycloak_admin_role: "hubadmin" social_auth_keycloak_key: "automation-hub" social_auth_keycloak_secret: "client-secret" 2 social_auth_keycloak_public_key: >- 3 1 This name is used in the step when creating the automation hub instance. 2 If the secret was changed when creating the Keycloak client for automation hub be sure to change this value to match. 3 Enter the value of the public_key copied in Installing the Ansible Automation Platform Operator . Click Create and wait for the process to complete. 6.7. Installing automation hub using the Operator Use the following procedure to install automation hub using the operator. Procedure Navigate to Operator Installed Operators . Select the Ansible Automation Platform. Select the Automation hub tab and click Create Automation hub . Select YAML view . The YAML should be similar to: apiVersion: automationhub.ansible.com/v1beta1 kind: AutomationHub metadata: name: private-ah 1 namespace: ansible-automation-platform spec: sso_secret: automation-hub-sso 2 pulp_settings: verify_ssl: false route_tls_termination_mechanism: Edge ingress_type: Route loadbalancer_port: 80 file_storage_size: 100Gi image_pull_policy: IfNotPresent web: replicas: 1 file_storage_access_mode: ReadWriteMany content: log_level: INFO replicas: 2 postgres_storage_requirements: limits: storage: 50Gi requests: storage: 8Gi api: log_level: INFO replicas: 1 postgres_resource_requirements: limits: cpu: 1000m memory: 8Gi requests: cpu: 500m memory: 2Gi loadbalancer_protocol: http resource_manager: replicas: 1 worker: replicas: 2 1 Set metadata.name to the name to use for the instance. 2 Set spec.sso_secret to the name of the secret created in Creating a Secret to hold the Red Hat Single Sign On connection details . Note This YAML turns off SSL verification ( ssl_verify: false ). If you are not using self-signed certificates for OpenShift this setting can be removed. Click Create and wait for the process to complete. 6.8. Determining the automation hub Route Use the following procedure to determine the hub route. Procedure Navigate to Networking Routes . Select the project you used for the install. Copy the location of the private-ah-web-svc service. The name of the service is different if you used a different name when creating the automation hub instance. This is used later to update the Red Hat Single Sign-On client. 6.9. Updating the Red Hat Single Sign-On client When automation hub is installed and you know the URL of the instance, you must update the Red Hat Single Sign-On to set the Valid Redirect URIs and Web Origins settings. Procedure Navigate to Operator Installed Operators . Select the RH-SSO project. Click Red Hat Single Sign-On Operator . Select Keycloak Client . Click on the automation-hub-client-secret client. Select YAML . Update the Client YAML to add the Valid Redirect URIs and Web Origins settings. redirectUris: - 'https://private-ah-ansible-automation-platform.apps-crc.testing/*' webOrigins: - 'https://private-ah-ansible-automation-platform.apps-crc.testing' Field Description redirectURIs This is the location determined in Determine Automation Hub Route . Be sure to add the /* to the end of the redirectUris setting. 
webOrigins This is the location determined in Determining the automation hub Route. Note Ensure the indentation is correct when entering these settings. Click Save. To verify connectivity: Navigate to the automation hub route. Enter the hub_admin user credentials and sign in. Red Hat Single Sign-On processes the authentication and redirects back to automation hub. 6.10. Additional resources For more information on running operators on OpenShift Container Platform, see Working with Operators in OpenShift Container Platform in the OpenShift Container Platform product documentation. | [
"apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: example-keycloak labels: app: sso namespace: aap spec: externalAccess: enabled: true instances: 1",
"kind: KeycloakRealm apiVersion: keycloak.org/v1alpha1 metadata: name: ansible-automation-platform-keycloakrealm namespace: rh-sso labels: app: sso realm: ansible-automation-platform spec: realm: id: ansible-automation-platform realm: ansible-automation-platform enabled: true displayName: Ansible Automation Platform instanceSelector: matchLabels: app: sso",
"kind: KeycloakClient apiVersion: keycloak.org/v1alpha1 metadata: name: automation-hub-client-secret labels: app: sso realm: ansible-automation-platform namespace: rh-sso spec: realmSelector: matchLabels: app: sso realm: ansible-automation-platform client: name: Automation Hub clientId: automation-hub secret: <client-secret> 1 clientAuthenticatorType: client-secret description: Client for automation hub attributes: user.info.response.signature.alg: RS256 request.object.signature.alg: RS256 directAccessGrantsEnabled: true publicClient: true protocol: openid-connect standardFlowEnabled: true protocolMappers: - config: access.token.claim: \"true\" claim.name: \"family_name\" id.token.claim: \"true\" jsonType.label: String user.attribute: lastName userinfo.token.claim: \"true\" consentRequired: false name: family name protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper - config: userinfo.token.claim: \"true\" user.attribute: email id.token.claim: \"true\" access.token.claim: \"true\" claim.name: email jsonType.label: String name: email protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: multivalued: \"true\" access.token.claim: \"true\" claim.name: \"resource_access.USD{client_id}.roles\" jsonType.label: String name: client roles protocol: openid-connect protocolMapper: oidc-usermodel-client-role-mapper consentRequired: false - config: userinfo.token.claim: \"true\" user.attribute: firstName id.token.claim: \"true\" access.token.claim: \"true\" claim.name: given_name jsonType.label: String name: given name protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: id.token.claim: \"true\" access.token.claim: \"true\" userinfo.token.claim: \"true\" name: full name protocol: openid-connect protocolMapper: oidc-full-name-mapper consentRequired: false - config: userinfo.token.claim: \"true\" user.attribute: username id.token.claim: \"true\" access.token.claim: \"true\" claim.name: preferred_username jsonType.label: String name: <username> protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper consentRequired: false - config: access.token.claim: \"true\" claim.name: \"group\" full.path: \"true\" id.token.claim: \"true\" userinfo.token.claim: \"true\" consentRequired: false name: group protocol: openid-connect protocolMapper: oidc-group-membership-mapper - config: multivalued: 'true' id.token.claim: 'true' access.token.claim: 'true' userinfo.token.claim: 'true' usermodel.clientRoleMapping.clientId: 'automation-hub' claim.name: client_roles jsonType.label: String name: client_roles protocolMapper: oidc-usermodel-client-role-mapper protocol: openid-connect - config: id.token.claim: \"true\" access.token.claim: \"true\" included.client.audience: 'automation-hub' protocol: openid-connect name: audience mapper protocolMapper: oidc-audience-mapper roles: - name: \"hubadmin\" description: \"An administrator role for automation hub\"",
"apiVersion: keycloak.org/v1alpha1 kind: KeycloakUser metadata: name: hubadmin-user labels: app: sso realm: ansible-automation-platform namespace: rh-sso spec: realmSelector: matchLabels: app: sso realm: ansible-automation-platform user: username: hub_admin firstName: Hub lastName: Admin email: [email protected] enabled: true emailVerified: false credentials: - type: password value: <ch8ngeme> clientRoles: automation-hub: - hubadmin",
"apiVersion: v1 kind: Secret metadata: name: automation-hub-sso 1 namespace: ansible-automation-platform type: Opaque stringData: keycloak_host: \"keycloak-rh-sso.apps-crc.testing\" keycloak_port: \"443\" keycloak_protocol: \"https\" keycloak_realm: \"ansible-automation-platform\" keycloak_admin_role: \"hubadmin\" social_auth_keycloak_key: \"automation-hub\" social_auth_keycloak_secret: \"client-secret\" 2 social_auth_keycloak_public_key: >- 3",
"apiVersion: automationhub.ansible.com/v1beta1 kind: AutomationHub metadata: name: private-ah 1 namespace: ansible-automation-platform spec: sso_secret: automation-hub-sso 2 pulp_settings: verify_ssl: false route_tls_termination_mechanism: Edge ingress_type: Route loadbalancer_port: 80 file_storage_size: 100Gi image_pull_policy: IfNotPresent web: replicas: 1 file_storage_access_mode: ReadWriteMany content: log_level: INFO replicas: 2 postgres_storage_requirements: limits: storage: 50Gi requests: storage: 8Gi api: log_level: INFO replicas: 1 postgres_resource_requirements: limits: cpu: 1000m memory: 8Gi requests: cpu: 500m memory: 2Gi loadbalancer_protocol: http resource_manager: replicas: 1 worker: replicas: 2",
"redirectUris: - 'https://private-ah-ansible-automation-platform.apps-crc.testing/*' webOrigins: - 'https://private-ah-ansible-automation-platform.apps-crc.testing'"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/using-rhsso-operator-with-automation-hub |
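The chapter above notes that the Operator writes the generated passwords to Secrets (credential-<custom-resource> for the Keycloak administrator and credential-<realm name>-<username>-<namespace> for the hub_admin user). As a minimal sketch of how to read them back, assuming the example names used above (the Keycloak CR in the example sets namespace: aap, while the realm and user objects live in rh-sso; adjust -n to wherever each CR was actually created, and note that the key names inside each Secret can vary between Operator versions, so the sketch prints every key rather than guessing):

# Print the Red Hat Single Sign-On administrator credentials created for the example-keycloak CR.
oc get secret credential-example-keycloak -n aap \
  -o go-template='{{range $k, $v := .data}}{{$k}}={{$v | base64decode}}{{"\n"}}{{end}}'

# Print the hub_admin credentials created for the KeycloakUser CR in the rh-sso namespace.
oc get secret credential-ansible-automation-platform-hub-admin-rh-sso -n rh-sso \
  -o go-template='{{range $k, $v := .data}}{{$k}}={{$v | base64decode}}{{"\n"}}{{end}}'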
13.2.3. Add to /etc/fstab | 13.2.3. Add to /etc/fstab As root, edit the /etc/fstab file to include the new partition using the partition's UUID. Use the command blkid -o list for a complete list of partition UUIDs, or blkid device for details about an individual device. The first column should contain UUID= followed by the file system's UUID. The second column should contain the mount point for the new partition, and the third column should be the file system type (for example, ext3 or swap). If you need more information about the format, read the man page with the command man fstab. If the fourth column is the word defaults, the partition is mounted at boot time. To mount the partition without rebooting, as root, type the command: mount /work | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s3-disk-storage-parted-create-part-fstab
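A minimal sketch of the workflow described above, assuming the new partition is /dev/sda3, carries an ext3 file system, and is mounted at /work (the UUID shown is only a placeholder; substitute the value reported by blkid):

# Look up the UUID of the new partition (run as root).
blkid /dev/sda3

# Append a matching entry to /etc/fstab; the fields are device, mount point,
# file system type, mount options, dump flag, and fsck order.
echo 'UUID=2f49e1e6-5b8a-4b27-9c43-7e0d9a6c1a11  /work  ext3  defaults  1 2' >> /etc/fstab

# Mount the new entry without rebooting.
mount /work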
Machine APIs | Machine APIs OpenShift Container Platform 4.13 Reference guide for machine APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_apis/index |
Architecture | Architecture OpenShift Dedicated 4 Architecture overview. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/architecture/index |
Chapter 50. Guided rules | Chapter 50. Guided rules Guided rules are business rules that you create in a UI-based guided rules designer in Business Central that leads you through the rule-creation process. The guided rules designer provides fields and options for acceptable input based on the data objects for the rule being defined. The guided rules that you define are compiled into Drools Rule Language (DRL) rules as with all other rule assets. All data objects related to a guided rule must be in the same project package as the guided rule. Assets in the same package are imported by default. After you create the necessary data objects and the guided rule, you can use the Data Objects tab of the guided rules designer to verify that all required data objects are listed or to import other existing data objects by adding a New item . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/guided-rules-con_guided-rules |
1.4. Routing Methods | 1.4. Routing Methods Red Hat Enterprise Linux uses Network Address Translation (NAT) routing for LVS, which gives the administrator tremendous flexibility when utilizing available hardware and integrating the LVS into an existing network. 1.4.1. NAT Routing Figure 1.3, "LVS Implemented with NAT Routing", illustrates LVS utilizing NAT routing to move requests between the Internet and a private network. Figure 1.3. LVS Implemented with NAT Routing In the example, there are two NICs in the active LVS router. The NIC for the Internet has a real IP address on eth0 and has a floating IP address aliased to eth0:1. The NIC for the private network interface has a real IP address on eth1 and has a floating IP address aliased to eth1:1. In the event of failover, the virtual interface facing the Internet and the private-facing virtual interface are taken over by the backup LVS router simultaneously. All of the real servers located on the private network use the floating IP for the NAT router as their default route to communicate with the active LVS router so that their ability to respond to requests from the Internet is not impaired. In this example, the LVS router's public LVS floating IP address and private NAT floating IP address are aliased to two physical NICs. While it is possible to associate each floating IP address with its own physical device on the LVS router nodes, having more than two NICs is not a requirement. Using this topology, the active LVS router receives the request and routes it to the appropriate server. The real server then processes the request and returns the packets to the LVS router, which uses network address translation to replace the address of the real server in the packets with the LVS router's public VIP address. This process is called IP masquerading because the actual IP addresses of the real servers are hidden from the requesting clients. With NAT routing, the real servers may be any kind of machine running various operating systems. The main disadvantage is that the LVS router may become a bottleneck in large cluster deployments because it must process outgoing as well as incoming requests. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-routing-VSA
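To make the topology above concrete, the following sketch shows how the virtual service and NAT-routed real servers could be expressed with ipvsadm on the active LVS router; all addresses are assumptions chosen only for illustration, and in a real deployment these rules are normally generated from the cluster configuration rather than typed by hand:

# Define the virtual HTTP service on the public floating IP, using weighted least-connections scheduling.
ipvsadm -A -t 192.168.26.10:80 -s wlc

# Add two real servers on the private network; -m selects masquerading (NAT) routing.
ipvsadm -a -t 192.168.26.10:80 -r 10.11.12.2:80 -m
ipvsadm -a -t 192.168.26.10:80 -r 10.11.12.3:80 -m

# On each real server, the default route must point at the NAT router's private floating IP.
ip route add default via 10.11.12.10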
Chapter 3. Accessing the Red Hat Support | Chapter 3. Accessing the Red Hat Support If you require help with troubleshooting a problem, you can contact Red Hat Support. Procedure Log in to the Red Hat Support web site and choose one of the following options: Open a new support case. Initiate a live chat with a Red Hat expert. Contact a Red Hat expert by making a call or sending an email. 3.1. Using the sosreport utility to collect diagnostic information about a system and attach it to a support ticket The sosreport command collects configuration details, system information, and diagnostic information from a Red Hat Enterprise Linux system. The following section describes how to use the sosreport command to produce reports for your support cases. Prerequisites A valid user account on the Red Hat Customer Portal. See Create a Red Hat Login. An active subscription for the RHEL system. A support-case number. Procedure Install the sos package: Generate a report: Optionally, pass the --upload option to the command to automatically upload and attach the report to a support case. This requires internet access and your Customer Portal credentials. Optional: Manually attach the report to your support case. See the Red Hat Knowledgebase solution How can I attach a file to a Red Hat support case? for more information. Additional resources What is an sosreport and how to create one in Red Hat Enterprise Linux? (Red Hat Knowledgebase) | [
"dnf install sos",
"sosreport"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_basic_system_settings/assembly_accessing-the-red-hat-support_configuring-basic-system-settings |
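A minimal sketch that strings the two commands above together and attaches the result to an existing case; the case number is a placeholder, and the exact upload options can differ between sos versions, so check sosreport --help on your system:

# Install the sos package (run as root).
dnf install sos

# Generate a report, tag it with the support case, and upload it automatically.
sosreport --case-id 01234567 --upload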
Chapter 13. Collecting and storing Kubernetes events | Chapter 13. Collecting and storing Kubernetes events The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by the logging subsystem. You must manually deploy the Event Router. The Event Router collects events from all projects and writes them to STDOUT . The collector then forwards those events to the store defined in the ClusterLogForwarder custom resource (CR). Important The Event Router adds additional load to Fluentd and can impact the number of other log messages that can be processed. 13.1. Deploying and configuring the Event Router Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the openshift-logging project to ensure it collects events from across the cluster. The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router pod. You can use this template without making changes, or change the deployment object CPU and memory requests. Prerequisites You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the cluster-admin role. The logging subsystem for Red Hat OpenShift must be installed. Procedure Create a template for the Event Router: kind: Template apiVersion: template.openshift.io/v1 metadata: name: eventrouter-template annotations: description: "A pod forwarding kubernetes events to OpenShift Logging stack." tags: "events,EFK,logging,cluster-logging" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [""] resources: ["events"] verbs: ["get", "watch", "list"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { "sink": "stdout" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" spec: selector: matchLabels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" replicas: 1 template: metadata: labels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: "registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4" - name: CPU 7 displayName: CPU value: "100m" - name: MEMORY 8 displayName: Memory value: "128Mi" - name: NAMESPACE displayName: Namespace value: "openshift-logging" 9 1 Creates a Service Account in the openshift-logging project for the Event Router. 2 Creates a ClusterRole to monitor for events in the cluster. 3 Creates a ClusterRoleBinding to bind the ClusterRole to the service account. 
4 Creates a config map in the openshift-logging project to generate the required config.json file. 5 Creates a deployment in the openshift-logging project to generate and configure the Event Router pod. 6 Specifies the image, identified by a tag such as v0.4 . 7 Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to 100m . 8 Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to 128Mi . 9 Specifies the openshift-logging project to install objects in. Use the following command to process and apply the template: USD oc process -f <templatefile> | oc apply -n openshift-logging -f - For example: USD oc process -f eventrouter.yaml | oc apply -n openshift-logging -f - Example output serviceaccount/eventrouter created clusterrole.authorization.openshift.io/event-reader created clusterrolebinding.authorization.openshift.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created Validate that the Event Router installed in the openshift-logging project: View the new Event Router pod: USD oc get pods --selector component=eventrouter -o name -n openshift-logging Example output pod/cluster-logging-eventrouter-d649f97c8-qvv8r View the events collected by the Event Router: USD oc logs <cluster_logging_eventrouter_pod> -n openshift-logging For example: USD oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging Example output {"verb":"ADDED","event":{"metadata":{"name":"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","namespace":"openshift-service-catalog-removed","selfLink":"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","uid":"787d7b26-3d2f-4017-b0b0-420db4ae62c0","resourceVersion":"21399","creationTimestamp":"2020-09-08T15:40:26Z"},"involvedObject":{"kind":"Job","namespace":"openshift-service-catalog-removed","name":"openshift-service-catalog-controller-manager-remover","uid":"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f","apiVersion":"batch/v1","resourceVersion":"21280"},"reason":"Completed","message":"Job completed","source":{"component":"job-controller"},"firstTimestamp":"2020-09-08T15:40:26Z","lastTimestamp":"2020-09-08T15:40:26Z","count":1,"type":"Normal"}} You can also use Kibana to view events by creating an index pattern using the Elasticsearch infra index. | [
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9",
"oc process -f <templatefile> | oc apply -n openshift-logging -f -",
"oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -",
"serviceaccount/eventrouter created clusterrole.authorization.openshift.io/event-reader created clusterrolebinding.authorization.openshift.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created",
"oc get pods --selector component=eventrouter -o name -n openshift-logging",
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r",
"oc logs <cluster_logging_eventrouter_pod> -n openshift-logging",
"oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging",
"{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/cluster-logging-eventrouter |
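Because the Event Router objects are all created from the template, the same file can be processed again to remove them; a short sketch, assuming the template was saved as eventrouter.yaml as in the example above:

# Delete the service account, cluster role, cluster role binding, config map, and deployment.
oc process -f eventrouter.yaml | oc delete -n openshift-logging -f -

# Confirm that the Event Router pod has been removed.
oc get pods --selector component=eventrouter -n openshift-logging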
5.5. Managing High-Availability Services | 5.5. Managing High-Availability Services In addition to adding and modifying a service, as described in Section 4.10, "Adding a Cluster Service to the Cluster", you can perform the following management functions for high-availability services through the luci server component of Conga: Start a service Restart a service Disable a service Delete a service Relocate a service From the cluster-specific page, you can manage services for that cluster by clicking on Service Groups along the top of the cluster display. This displays the services that have been configured for that cluster. Starting a service - To start any services that are not currently running, select any services you want to start by clicking the check box for that service and clicking Start. Restarting a service - To restart any services that are currently running, select any services you want to restart by clicking the check box for that service and clicking Restart. Disabling a service - To disable any service that is currently running, select any services you want to disable by clicking the check box for that service and clicking Disable. Deleting a service - To delete any services that are not currently running, select any services you want to delete by clicking the check box for that service and clicking Delete. Relocating a service - To relocate a running service, click on the name of the service in the services display. This causes the service configuration page to be displayed, indicating on which node the service is currently running. From the Start on node... drop-down box, select the node on which you want to relocate the service, and click on the Start icon. A message appears at the top of the screen indicating that the service is being started. You may need to refresh the screen to see the new display indicating that the service is running on the node you have selected. Note If the running service you have selected is a vm service, the drop-down box will show a migrate option instead of a relocate option. Note You can also start, restart, disable, or delete an individual service by clicking on the name of the service on the Services page. This displays the service configuration page. At the top right corner of the service configuration page are the same icons for Start, Restart, Disable, and Delete. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-admin-manage-ha-services-conga-ca
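For clusters where the command line is preferred, the same operations can usually be performed with the rgmanager tools directly on a cluster node; a rough sketch, assuming a service named httpd-service and a member named node2.example.com (both names are placeholders):

# Show the current state of all cluster services.
clustat

# Enable (start), restart in place, and disable the service.
clusvcadm -e httpd-service
clusvcadm -R httpd-service
clusvcadm -d httpd-service

# Relocate the service to a specific cluster member.
clusvcadm -r httpd-service -m node2.example.com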
Chapter 1. Overview | Chapter 1. Overview Troubleshooting OpenShift Data Foundation is written to help administrators understand how to troubleshoot and fix their Red Hat OpenShift Data Foundation cluster. Most troubleshooting tasks focus on either a fix or a workaround. This document is divided into chapters based on the errors that an administrator may encounter: Chapter 2, Downloading log files and diagnostic information using must-gather shows you how to use the must-gather utility in OpenShift Data Foundation. Chapter 3, Commonly required logs for troubleshooting shows you how to obtain commonly required log files for OpenShift Data Foundation. Chapter 6, Troubleshooting alerts and errors in OpenShift Data Foundation shows you how to identify the encountered error and perform required actions. Warning Red Hat does not support running Ceph commands in OpenShift Data Foundation clusters (unless indicated by Red Hat support or Red Hat documentation) as it can cause data loss if you run the wrong commands. In that case, the Red Hat support team is only able to provide commercially reasonable effort and may not be able to restore all the data in case of any data loss. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/overview |
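Chapter 2 walks through must-gather in detail; as a rough sketch of the kind of invocation it covers, the data can be collected with oc adm must-gather (the image path below is an assumption and should be replaced with the one listed for your OpenShift Data Foundation version):

# Collect OpenShift Data Foundation diagnostic data into a local directory.
oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=./odf-must-gather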
Chapter 8. Secret [v1] | Chapter 8. Secret [v1] Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data object (string) Data contains the secret data. Each key must consist of alphanumeric characters, '-', '_' or '.'. The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. Described in https://tools.ietf.org/html/rfc4648#section-4 immutable boolean Immutable, if set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata stringData object (string) stringData allows specifying non-binary secret data in string form. It is provided as a write-only input field for convenience. All keys and values are merged into the data field on write, overwriting any existing values. The stringData field is never output when reading from the API. type string Used to facilitate programmatic handling of secret data. More info: https://kubernetes.io/docs/concepts/configuration/secret/#secret-types 8.2. API endpoints The following API endpoints are available: /api/v1/secrets GET : list or watch objects of kind Secret /api/v1/watch/secrets GET : watch individual changes to a list of Secret. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/secrets DELETE : delete collection of Secret GET : list or watch objects of kind Secret POST : create a Secret /api/v1/watch/namespaces/{namespace}/secrets GET : watch individual changes to a list of Secret. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/secrets/{name} DELETE : delete a Secret GET : read the specified Secret PATCH : partially update the specified Secret PUT : replace the specified Secret /api/v1/watch/namespaces/{namespace}/secrets/{name} GET : watch changes to an object of kind Secret. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 8.2.1. /api/v1/secrets Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Secret Table 8.2. HTTP responses HTTP code Reponse body 200 - OK SecretList schema 401 - Unauthorized Empty 8.2.2. /api/v1/watch/secrets Table 8.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Secret. deprecated: use the 'watch' parameter with a list operation instead. Table 8.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /api/v1/namespaces/{namespace}/secrets Table 8.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Secret Table 8.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 8.8. Body parameters Parameter Type Description body DeleteOptions schema Table 8.9. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Secret Table 8.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK SecretList schema 401 - Unauthorized Empty HTTP method POST Description create a Secret Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body Secret schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK Secret schema 201 - Created Secret schema 202 - Accepted Secret schema 401 - Unauthorized Empty 8.2.4. /api/v1/watch/namespaces/{namespace}/secrets Table 8.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Secret. deprecated: use the 'watch' parameter with a list operation instead. Table 8.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /api/v1/namespaces/{namespace}/secrets/{name} Table 8.18. Global path parameters Parameter Type Description name string name of the Secret namespace string object name and auth scope, such as for teams and projects Table 8.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Secret Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.21. Body parameters Parameter Type Description body DeleteOptions schema Table 8.22. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Secret Table 8.23. HTTP responses HTTP code Response body 200 - OK Secret schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Secret Table 8.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.25. Body parameters Parameter Type Description body Patch schema Table 8.26. HTTP responses HTTP code Response body 200 - OK Secret schema 201 - Created Secret schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Secret Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body Secret schema Table 8.29. 
HTTP responses HTTP code Response body 200 - OK Secret schema 201 - Created Secret schema 401 - Unauthorized Empty 8.2.6. /api/v1/watch/namespaces/{namespace}/secrets/{name} Table 8.30. Global path parameters Parameter Type Description name string name of the Secret namespace string object name and auth scope, such as for teams and projects Table 8.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Secret. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_apis/secret-v1 |
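The tables above describe the raw REST endpoints; the following is a minimal sketch of exercising them with curl. The API server URL, namespace, secret name, and the TOKEN variable are placeholders rather than values taken from this reference — adapt them to your cluster (for example, oc whoami -t can supply a token).

# Create a Secret (POST /api/v1/namespaces/{namespace}/secrets)
curl -k -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"my-secret"},"type":"Opaque","stringData":{"password":"changeme"}}' \
  https://api.example.com:6443/api/v1/namespaces/my-namespace/secrets

# Read the Secret back (GET /api/v1/namespaces/{namespace}/secrets/{name})
curl -k -H "Authorization: Bearer $TOKEN" \
  https://api.example.com:6443/api/v1/namespaces/my-namespace/secrets/my-secret

# Watch for changes using the 'watch' parameter on the list endpoint,
# which the reference recommends over the deprecated /watch/ paths
curl -k -N -H "Authorization: Bearer $TOKEN" \
  "https://api.example.com:6443/api/v1/namespaces/my-namespace/secrets?watch=true&fieldSelector=metadata.name=my-secret"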
Chapter 1. Introduction to Service Telemetry Framework release | Chapter 1. Introduction to Service Telemetry Framework release This release of Service Telemetry Framework (STF) provides new features and resolved issues specific to STF. STF uses components from other Red Hat products. For specific information pertaining to the support of these components, see https://access.redhat.com/site/support/policy/updates/openstack/platform/ and https://access.redhat.com/support/policy/updates/openshift/ . STF 1.5 is compatible with OpenShift Container Platform versions 4.14 and 4.16 as the deployment platform. 1.1. Product support The Red Hat Customer Portal offers resources to guide you through the installation and configuration of Service Telemetry Framework. The following types of documentation are available through the Customer Portal: Product documentation Knowledge base articles and solutions Technical briefs Support case management You can access the Customer Portal at https://access.redhat.com/ . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_release_notes_1.5/assembly-introduction-to-service-telemetry-framework-release_osp |
function::user_string | function::user_string Name function::user_string - Retrieves a string from user space. Synopsis Arguments addr The user space address to retrieve the string from. General Syntax user_string:string(addr:long) Description Returns the null-terminated C string from a given user space memory address. Reports "<unknown>" in the rare cases when userspace data is not accessible. | [
"function user_string:string(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-string |
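As a brief illustration of how this function is typically used — a sketch, assuming a host with SystemTap installed and a kernel that still exposes do_sys_open — the one-liner below dereferences the user-space filename pointer passed to the kernel and prints which process opened which path:

stap -e 'probe kernel.function("do_sys_open") {
  # $filename is a user-space pointer; user_string() copies the C string it points to
  printf("%s opened %s\n", execname(), user_string($filename))
}'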
Chapter 8. Task Time Tapset | Chapter 8. Task Time Tapset This tapset defines utility functions to query time-related properties of the current task and translate those into milliseconds and human-readable strings. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/task_time_stp |
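A small sketch of the kind of query these utilities support, assuming the task_utime, task_stime, and cputime_to_msecs functions documented in this tapset are available on the host; it reports the CPU time a process consumed, in milliseconds, as it exits:

stap -e 'probe syscall.exit_group {
  # Convert the current task's accumulated user and system CPU time to milliseconds
  printf("%s (pid %d): user %d ms, system %d ms\n", execname(), pid(),
         cputime_to_msecs(task_utime()), cputime_to_msecs(task_stime()))
}'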
Chapter 11. Changing the MTU for the cluster network | Chapter 11. Changing the MTU for the cluster network As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change. You can change the MTU only for clusters using the OVN-Kubernetes or OpenShift SDN network plugins. 11.1. About the cluster MTU During installation the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not usually need to override the detected MTU. You might want to change the MTU of the cluster network for several reasons: The MTU detected during cluster installation is not correct for your infrastructure. Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance. You can change the cluster MTU for only the OVN-Kubernetes and OpenShift SDN cluster network plugins. 11.1.1. Service interruption considerations When you initiate an MTU change on your cluster the following effects might impact service availability: At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart. Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change. 11.1.2. MTU value selection When planning your MTU migration there are two related but distinct MTU values to consider. Hardware MTU : This MTU value is set based on the specifics of your network infrastructure. Cluster network MTU : This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin: OVN-Kubernetes : 100 bytes OpenShift SDN : 50 bytes If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . Important To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value ( maxmtu ) that is accepted by the network interface by using the ip -d link command. 11.1.3. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 11.1. Live migration of the cluster MTU User-initiated steps OpenShift Container Platform activity Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.machine.to spec.migration.mtu.network.from spec.migration.mtu.network.to Cluster Network Operator (CNO) : Confirms that each field is set to a valid value. The mtu.machine.to must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line. 
The mtu.network.from field must equal the network.status.clusterNetworkMTU field, which is the current MTU of the cluster network. The mtu.network.to field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plugin. For OVN-Kubernetes, the overhead is 100 bytes and for OpenShift SDN the overhead is 50 bytes. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster. Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, including: Deploying a new NetworkManager connection profile with the MTU change Changing the MTU through a DHCP server setting Changing the MTU through boot parameters N/A Set the mtu value in the CNO configuration for the network plugin and set spec.migration to null . Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster with the new MTU configuration. 11.2. Changing the cluster MTU As a cluster administrator, you can change the maximum transmission unit (MTU) for your cluster. The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update rolls out. The following procedure describes how to change the cluster MTU by using either machine configs, DHCP, or an ISO. If you use the DHCP or ISO approach, you must refer to configuration artifacts that you kept after installing your cluster to complete the procedure. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You identified the target MTU for your cluster. The correct MTU varies depending on the network plugin that your cluster uses: OVN-Kubernetes : The cluster MTU must be set to 100 less than the lowest hardware MTU value in your cluster. OpenShift SDN : The cluster MTU must be set to 50 less than the lowest hardware MTU value in your cluster. Procedure To increase or decrease the MTU for the cluster network complete the following procedure. To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23 ... Prepare your configuration for the hardware MTU: If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration: dhcp-option-force=26,<mtu> where: <mtu> Specifies the hardware MTU for the DHCP server to advertise. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for OpenShift Container Platform if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified. 
Find the primary network interface: If you are using the OpenShift SDN network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }' where: <node_name> Specifies the name of a node in your cluster. If you are using the OVN-Kubernetes network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 where: <node_name> Specifies the name of a node in your cluster. Create the following NetworkManager configuration in the <interface>-mtu.conf file: Example NetworkManager connection configuration [connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu> where: <mtu> Specifies the new hardware MTU value. <interface> Specifies the primary network interface name. Create two MachineConfig objects, one for the control plane nodes and another for the worker nodes in your cluster: Create the following Butane config in the control-plane-interface.bu file: variant: openshift version: 4.14.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create the following Butane config in the worker-interface.bu file: variant: openshift version: 4.14.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create MachineConfig objects from the Butane configs by running the following command: USD for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done Warning Do not apply these machine configs until explicitly instructed later in this procedure. Applying these machine configs now causes a loss of stability for the cluster. To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value for <machine_to> and for OVN-Kubernetes must be 100 less and for OpenShift SDN must be 50 less. <machine_to> Specifies the MTU for the primary network interface on the underlying host network. Example that increases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. 
Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh Update the underlying network interface MTU value: If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster. USD for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure. As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. 
To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep path: If the machine config is successfully deployed, the output contains the /etc/NetworkManager/conf.d/99-<interface>-mtu.conf file path and the ExecStart=/usr/local/bin/mtu-migration.sh line. To finalize the MTU migration, enter one of the following commands: If you are using the OVN-Kubernetes network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . If you are using the OpenShift SDN network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . After finalizing the MTU migration, each MCP node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Verification You can verify that a node in your cluster uses an MTU that you specified in the procedure. To get the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Get the current MTU for the primary network interface of a node. To list the nodes in your cluster, enter the following command: USD oc get nodes To obtain the current MTU setting for the primary network interface on a node, enter the following command: USD oc debug node/<node> -- chroot /host ip address show <interface> where: <node> Specifies a node from the output from the step. <interface> Specifies the primary network interface name for the node. Example output ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051 11.3. Additional resources Using advanced networking options for PXE and ISO installations Manually creating NetworkManager profiles in key file format Configuring a dynamic Ethernet connection using nmcli | [
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23",
"dhcp-option-force=26,<mtu>",
"oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }'",
"oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0",
"[connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu>",
"variant: openshift version: 4.14.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"variant: openshift version: 4.14.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep path:",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'",
"oc get mcp",
"oc describe network.config cluster",
"oc get nodes",
"oc debug node/<node> -- chroot /host ip address show <interface>",
"ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/changing-cluster-network-mtu |
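Before choosing <overlay_to> and <machine_to>, it can help to survey what every node's primary interface actually accepts. The loop below is a sketch, assuming cluster-admin access with oc and that ens3 is the primary interface name (substitute your own); not every driver reports maxmtu, so treat missing output as a prompt to check the hardware documentation:

for node in $(oc get nodes -o name); do
  echo "== ${node}"
  # Show the configured MTU and, where the driver reports it, the maximum supported MTU
  oc debug "${node}" -- chroot /host ip -d link show ens3 2>/dev/null | grep -Eo 'mtu [0-9]+|maxmtu [0-9]+'
done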
Chapter 5. Minimum hardware recommendations for containerized Ceph | Chapter 5. Minimum hardware recommendations for containerized Ceph Ceph can run on non-proprietary commodity hardware. Small production clusters and development clusters can run without performance optimization with modest hardware. Process Criteria Minimum Recommended ceph-osd-container Processor 1x AMD64 or Intel 64 CPU CORE per OSD container RAM Minimum of 5 GB of RAM per OSD container OS Disk 1x OS disk per host OSD Storage 1x storage drive per OSD container. Cannot be shared with OS Disk. block.db Optional, but Red Hat recommended, 1x SSD or NVMe or Optane partition or lvm per daemon. Sizing is 4% of block.data for BlueStore for object, file, and mixed workloads, and 1% of block.data for BlueStore for Block Device, OpenStack Cinder, and OpenStack Glance workloads. block.wal Optionally, 1x SSD or NVMe or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it's faster than the block.db device. Network 2x 10 GB Ethernet NICs ceph-mon-container Processor 1x AMD64 or Intel 64 CPU CORE per mon-container RAM 3 GB per mon-container Disk Space 10 GB per mon-container, 50 GB Recommended Monitor Disk Optionally, 1x SSD disk for Monitor rocksdb data Network 2x 1GB Ethernet NICs, 10 GB Recommended ceph-mgr-container Processor 1x AMD64 or Intel 64 CPU CORE per mgr-container RAM 3 GB per mgr-container Network 2x 1GB Ethernet NICs, 10 GB Recommended ceph-radosgw-container Processor 1x AMD64 or Intel 64 CPU CORE per radosgw-container RAM 1 GB per daemon Disk Space 5 GB per daemon Network 1x 1GB Ethernet NICs ceph-mds-container Processor 1x AMD64 or Intel 64 CPU CORE per mds-container RAM 3 GB per mds-container This number is highly dependent on the configurable MDS cache size. The RAM requirement is typically twice as much as the amount set in the mds_cache_memory_limit configuration setting. Note also that this is the memory for your daemon, not the overall system memory. Disk Space 2 GB per mds-container, plus taking into consideration any additional space required for possible debug logging, 20 GB is a good start. Network 2x 1GB Ethernet NICs, 10 GB Recommended Note that this is the same network as the OSD containers. If you have a 10 GB network on your OSDs you should use the same on your MDS so that the MDS is not disadvantaged when it comes to latency. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/hardware_guide/minimum-hardware-recommendations-for-containerized-ceph_hw |
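A quick sketch of the block.db sizing arithmetic quoted in the table above (4% of block.data for object, file, and mixed workloads; 1% for block workloads); the 4 TB device size is only an example value, not a recommendation:

osd_data_gb=4000   # size of one block.data device, in GB (placeholder)
echo "object/file/mixed workloads: $(( osd_data_gb * 4 / 100 )) GB block.db per OSD"
echo "block workloads:             $(( osd_data_gb * 1 / 100 )) GB block.db per OSD"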
probe::ipmib.FragFails | probe::ipmib.FragFails Name probe::ipmib.FragFails - Count datagrams fragmented unsuccessfully Synopsis ipmib.FragFails Values op Value to be added to the counter (default value of 1) skb pointer to the struct sk_buff being acted on Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global FragFails (equivalent to SNMP's MIB IPSTATS_MIB_FRAGFAILS) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-fragfails |
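A minimal sketch of consuming this probe point, assuming SystemTap is installed on the monitored host; it adds up the op values in the same way the global FragFails counter does and prints a running total every five seconds:

stap -e 'global fails
# op carries the increment (normally 1) for each fragmentation failure
probe ipmib.FragFails { fails += op }
probe timer.s(5) { printf("IP fragmentation failures so far: %d\n", fails) }'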