Chapter 1. Overview
Chapter 1. Overview Red Hat OpenShift Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. Red Hat OpenShift Data Foundation is integrated into the latest Red Hat OpenShift Container Platform to address platform services, application portability, and persistence challenges. It provides a highly scalable backend for the next generation of cloud-native applications, built on a technology stack that includes Red Hat Ceph Storage, the Rook.io Operator, and NooBaa's Multicloud Object Gateway technology. Red Hat OpenShift Data Foundation is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS validation on only the x86_64, ppc64le, and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards . Red Hat OpenShift Data Foundation provides a trusted, enterprise-grade application development environment that simplifies and enhances the user experience across the application lifecycle in a number of ways: Provides block storage for databases. Provides shared file storage for continuous integration, messaging, and data aggregation. Provides object storage for cloud-first development, archival, backup, and media storage. Scales applications and data exponentially. Attaches and detaches persistent data volumes at an accelerated rate. Stretches clusters across multiple data centers or availability zones. Establishes a comprehensive application container registry. Supports the next generation of OpenShift workloads such as Data Analytics, Artificial Intelligence, Machine Learning, Deep Learning, and Internet of Things (IoT). Dynamically provisions not only application containers, but also data service volumes and containers, as well as additional OpenShift Container Platform nodes, Elastic Block Store (EBS) volumes, and other infrastructure services. 1.1. About this release Red Hat OpenShift Data Foundation 4.16 ( RHSA-2024:4591 ) is now available. New enhancements, features, and known issues that pertain to OpenShift Data Foundation 4.16 are included in this topic. Red Hat OpenShift Data Foundation 4.16 is supported on Red Hat OpenShift Container Platform version 4.16. For more information, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For Red Hat OpenShift Data Foundation life cycle information, refer to the layered and dependent products life cycle section in Red Hat OpenShift Container Platform Life Cycle Policy .
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/4.16_release_notes/overview
Chapter 4. Troubleshooting Ceph Monitors
Chapter 4. Troubleshooting Ceph Monitors This chapter contains information on how to fix the most common errors related to Ceph Monitors. 4.1. Prerequisites Verify the network connection. 4.2. Most common Ceph Monitor errors The following tables list the most common error messages that are returned by the ceph health detail command, or included in the Ceph logs. The tables provide links to corresponding sections that explain the errors and point to specific procedures to fix the problems. 4.2.1. Prerequisites A running Red Hat Ceph Storage cluster. 4.2.2. Ceph Monitor error messages A table of common Ceph Monitor error messages, and a potential fix. Error message See HEALTH_WARN mon.X is down (out of quorum) Ceph Monitor is out of quorum clock skew Clock skew store is getting too big! The Ceph Monitor store is getting too big 4.2.3. Common Ceph Monitor error messages in the Ceph logs A table of common Ceph Monitor error messages found in the Ceph logs, and a link to a potential fix. Error message Log file See clock skew Main cluster log Clock skew clocks not synchronized Main cluster log Clock skew Corruption: error in middle of record Monitor log Ceph Monitor is out of quorum Recovering the Ceph Monitor store Corruption: 1 missing files Monitor log Ceph Monitor is out of quorum Recovering the Ceph Monitor store Caught signal (Bus error) Monitor log Ceph Monitor is out of quorum 4.2.4. Ceph Monitor is out of quorum One or more Ceph Monitors are marked as down but the other Ceph Monitors are still able to form a quorum. In addition, the ceph health detail command returns an error message similar to the following one: What This Means Ceph marks a Ceph Monitor as down due to various reasons. If the ceph-mon daemon is not running, it might have a corrupted store, or some other error might be preventing the daemon from starting. Also, the /var/ partition might be full. As a consequence, ceph-mon is not able to perform any operations on the store, located by default at /var/lib/ceph/mon/CLUSTER_NAME-SHORT_HOST_NAME/store.db, and terminates. If the ceph-mon daemon is running but the Ceph Monitor is out of quorum and marked as down , the cause of the problem depends on the Ceph Monitor state: If the Ceph Monitor is in the probing state longer than expected, it cannot find the other Ceph Monitors. This problem can be caused by networking issues, or the Ceph Monitor can have an outdated Ceph Monitor map ( monmap ) and be trying to reach the other Ceph Monitors on incorrect IP addresses. Alternatively, if the monmap is up-to-date, the Ceph Monitor's clock might not be synchronized. If the Ceph Monitor is in the electing state longer than expected, the Ceph Monitor's clock might not be synchronized. If the Ceph Monitor changes its state from synchronizing to electing and back, the cluster state is advancing. This means that it is generating new maps faster than the synchronization process can handle. If the Ceph Monitor marks itself as the leader or a peon , then it believes it is in a quorum, while the remaining cluster is sure that it is not. This problem can be caused by failed clock synchronization. To Troubleshoot This Problem Verify that the ceph-mon daemon is running. If not, start it: Replace HOST_NAME with the short name of the host where the daemon is running. Use the hostname -s command when unsure. If you are not able to start ceph-mon , follow the steps in The ceph-mon daemon cannot start .
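For example, using the systemd unit names shown in the commands for this chapter, you can check the daemon and start it if it is stopped:

# systemctl status ceph-mon@HOST_NAME
# systemctl start ceph-mon@HOST_NAME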
If you are able to start the ceph-mon daemon but it is marked as down , follow the steps in The ceph-mon daemon is running, but marked as `down` . The ceph-mon Daemon Cannot Start Check the corresponding Ceph Monitor log, by default located at /var/log/ceph/ceph-mon. HOST_NAME .log . If the log contains error messages similar to the following ones, the Ceph Monitor might have a corrupted store. To fix this problem, replace the Ceph Monitor. See Replacing a failed monitor . If the log contains an error message similar to the following one, the /var/ partition might be full. Delete any unnecessary data from /var/ . Important Do not delete any data from the Monitor directory manually. Instead, use the ceph-monstore-tool to compact it. See Compacting the Ceph Monitor store for details. If you see any other error messages, open a support ticket. See Contacting Red Hat Support for service for details. The ceph-mon Daemon Is Running, but Still Marked as down From the Ceph Monitor host that is out of the quorum, use the mon_status command to check its state: Replace ID with the ID of the Ceph Monitor, for example: If the status is probing , verify the locations of the other Ceph Monitors in the mon_status output. If the addresses are incorrect, the Ceph Monitor has an incorrect Ceph Monitor map ( monmap ). To fix this problem, see Injecting a Ceph Monitor map . If the addresses are correct, verify that the Ceph Monitor clocks are synchronized. See Clock skew for details. In addition, troubleshoot any networking issues; see Troubleshooting networking issues for details. If the status is electing , verify that the Ceph Monitor clocks are synchronized. See Clock skew for details. If the status changes from electing to synchronizing , open a support ticket. See Contacting Red Hat Support for service for details. If the Ceph Monitor is the leader or a peon , verify that the Ceph Monitor clocks are synchronized. See Clock skew for details. Open a support ticket if synchronizing the clocks does not solve the problem. See Contacting Red Hat Support for service for details. Additional Resources See Understanding Ceph Monitor status The Starting, Stopping, Restarting the Ceph daemons by instance section in the Administration Guide for Red Hat Ceph Storage 4 The Using the Ceph Administration Socket section in the Administration Guide for Red Hat Ceph Storage 4 4.2.5. Clock skew A Ceph Monitor is out of quorum, and the ceph health detail command output contains error messages similar to these: In addition, Ceph logs contain error messages similar to these: What This Means The clock skew error message indicates that the Ceph Monitors' clocks are not synchronized. Clock synchronization is important because Ceph Monitors depend on time precision and behave unpredictably if their clocks are not synchronized. The mon_clock_drift_allowed parameter determines what disparity between the clocks is tolerated. By default, this parameter is set to 0.05 seconds. Important Do not change the default value of mon_clock_drift_allowed without testing. Changing this value might affect the stability of the Ceph Monitors and the Ceph Storage Cluster in general. Possible causes of the clock skew error include network problems or problems with Network Time Protocol (NTP) synchronization, if that is configured. In addition, time synchronization does not work properly on Ceph Monitors deployed on virtual machines. To Troubleshoot This Problem Verify that your network works correctly. For details, see Troubleshooting networking issues .
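If the hosts use chrony, a quick way to confirm whether a Ceph Monitor's clock is synchronized is to query the time service directly (a minimal check; it assumes the chrony client is installed on the Monitor host):

# chronyc tracking
# chronyc sources -v

A large System time offset, or no reachable time sources in this output, usually explains the clock skew warning.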
In particular, troubleshoot any problems with NTP clients if you use NTP. If you use chrony for NTP, see the Basic chrony NTP troubleshooting section for more information. If you use ntpd , see Basic NTP troubleshooting . If you use a remote NTP server, consider deploying your own NTP server on your network. For details, see the Using the Chrony suite to configure NTP chapter in the Configuring basic system settings for Red Hat Enterprise Linux 8. See the Configuring NTP Using ntpd chapter in the System Administrator's Guide for Red Hat Enterprise Linux 7. Note Ceph evaluates time synchronization only every five minutes, so there will be a delay between fixing the problem and clearing the clock skew messages. Additional Resources Understanding Ceph Monitor status Ceph Monitor is out of quorum 4.2.6. The Ceph Monitor store is getting too big The ceph health command returns an error message similar to the following one: What This Means The Ceph Monitor store is a LevelDB database that stores entries as key-value pairs. The database includes a cluster map and is located by default at /var/lib/ceph/mon/ CLUSTER_NAME - SHORT_HOST_NAME /store.db . Querying a large Monitor store can take time. As a consequence, the Ceph Monitor can be delayed in responding to client queries. In addition, if the /var/ partition is full, the Ceph Monitor cannot perform any write operations to the store and terminates. See Ceph Monitor is out of quorum for details on troubleshooting this issue. To Troubleshoot This Problem Check the size of the database: Specify the name of the cluster and the short host name of the host where ceph-mon is running. Example Compact the Ceph Monitor store. For details, see Compacting the Ceph Monitor Store . Additional Resources Ceph Monitor is out of quorum 4.2.7. Understanding Ceph Monitor status The mon_status command returns information about a Ceph Monitor, such as: State Rank Elections epoch Monitor map ( monmap ) If Ceph Monitors are able to form a quorum, use mon_status with the ceph command-line utility. If Ceph Monitors are not able to form a quorum, but the ceph-mon daemon is running, use the administration socket to execute mon_status . An example output of mon_status Ceph Monitor States Leader During the electing phase, Ceph Monitors are electing a leader. The leader is the Ceph Monitor with the highest rank, that is, the rank with the lowest value. In the example above, the leader is mon.1 . Peon Peons are the Ceph Monitors in the quorum that are not leaders. If the leader fails, the peon with the highest rank becomes a new leader. Probing A Ceph Monitor is in the probing state if it is looking for other Ceph Monitors. For example, after you start the Ceph Monitors, they are probing until they find enough Ceph Monitors specified in the Ceph Monitor map ( monmap ) to form a quorum. Electing A Ceph Monitor is in the electing state if it is in the process of electing the leader. Usually, this status changes quickly. Synchronizing A Ceph Monitor is in the synchronizing state if it is synchronizing with the other Ceph Monitors to join the quorum. The smaller the Ceph Monitor store is, the faster the synchronization process completes. Therefore, if you have a large store, synchronization takes longer. Additional Resources For details, see the Using the Ceph Administration Socket section in the Administration Guide for Red Hat Ceph Storage 4. 4.2.8. Additional Resources See Section 4.2.2, "Ceph Monitor error messages" in the Red Hat Ceph Storage Troubleshooting Guide .
See the Section 4.2.3, "Common Ceph Monitor error messages in the Ceph logs" in the Red Hat Ceph Storage Troubleshooting Guide . 4.3. Injecting a monmap If a Ceph Monitor has an outdated or corrupted Ceph Monitor map ( monmap ), it cannot join a quorum because it is trying to reach the other Ceph Monitors on incorrect IP addresses. The safest way to fix this problem is to obtain and inject the actual Ceph Monitor map from other Ceph Monitors. Note This action overwrites the existing Ceph Monitor map kept by the Ceph Monitor. This procedure shows how to inject the Ceph Monitor map when the other Ceph Monitors are able to form a quorum, or when at least one Ceph Monitor has a correct Ceph Monitor map. If all Ceph Monitors have corrupted store and therefore also the Ceph Monitor map, see Recovering the Ceph Monitor store . Prerequisites Access to the Ceph Monitor Map. Root-level access to the Ceph Monitor node. Procedure If the remaining Ceph Monitors are able to form a quorum, get the Ceph Monitor map by using the ceph mon getmap command: If the remaining Ceph Monitors are not able to form the quorum and you have at least one Ceph Monitor with a correct Ceph Monitor map, copy it from that Ceph Monitor: Stop the Ceph Monitor which you want to copy the Ceph Monitor map from: For example, to stop the Ceph Monitor running on a host with the host1 short host name: Copy the Ceph Monitor map: Replace ID with the ID of the Ceph Monitor which you want to copy the Ceph Monitor map from: Stop the Ceph Monitor with the corrupted or outdated Ceph Monitor map: For example, to stop a Ceph Monitor running on a host with the host2 short host name: You can inject the Ceph Monitor map as a ceph user in two different ways: Run the command as a ceph user: Syntax Replace ID with the ID of the Ceph Monitor with the corrupted or outdated Ceph Monitor map: Example Run the command as a root user and then run chown to change the permissions: Run the command as a root user: Syntax Replace ID with the ID of the Ceph Monitor with the corrupted or outdated Ceph Monitor map: Example Change the file permissions: Example Start the Ceph Monitor: If you copied the Ceph Monitor map from another Ceph Monitor, start that Ceph Monitor, too: Additional Resources Ceph Monitor is out of quorum Recovering the Ceph Monitor store 4.4. Replacing a failed Monitor When a Monitor has a corrupted store, the recommended way to fix this problem is to replace the Monitor by using the Ansible automation application. Prerequisites A running Red Hat Ceph Storage cluster. Able to form a quroum. Root-level access to Ceph Monitor node. Procedure From the Monitor host, remove the Monitor store by default located at /var/lib/ceph/mon/ CLUSTER_NAME - SHORT_HOST_NAME : Specify the short host name of the Monitor host and the cluster name. For example, to remove the Monitor store of a Monitor running on host1 from a cluster called remote : Remove the Monitor from the Monitor map ( monmap ): Specify the short host name of the Monitor host and the cluster name. For example, to remove the Monitor running on host1 from a cluster called remote : Troubleshoot and fix any problems related to the underlying file system or hardware of the Monitor host. From the Ansible administration node, redeploy the Monitor by running the ceph-ansible playbook: Additional Resources See the Ceph Monitor is out of quorum for details. The Managing the storage cluster size chapter in the Red Hat Ceph Storage Operations Guide . 
The Deploying Red Hat Ceph Storage chapter in the Red Hat Ceph Storage 4 Installation Guide . 4.5. Compacting the monitor store When the Monitor store has grown too big, you can compact it: Dynamically by using the ceph tell command. Upon the start of the ceph-mon daemon. By using the ceph-monstore-tool when the ceph-mon daemon is not running. Use this method when the previously mentioned methods fail to compact the Monitor store or when the Monitor is out of quorum and its log contains the Caught signal (Bus error) error message. Important The Monitor store size changes when the cluster is not in the active+clean state or during the rebalancing process. For this reason, compact the Monitor store when rebalancing is completed. Also, ensure that the placement groups are in the active+clean state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure To compact the Monitor store when the ceph-mon daemon is running: Replace HOST_NAME with the short host name of the host where ceph-mon is running. Use the hostname -s command when unsure. Add the following parameter to the Ceph configuration under the [mon] section: Restart the ceph-mon daemon: Replace HOST_NAME with the short name of the host where the daemon is running. Use the hostname -s command when unsure. Ensure that the Monitors have formed a quorum: Repeat these steps on other Monitors if needed. Note Before you start, ensure that you have the ceph-test package installed. Verify that the ceph-mon daemon with the large store is not running. Stop the daemon if needed. Replace HOST_NAME with the short name of the host where the daemon is running. Use the hostname -s command when unsure. Compact the Monitor store as a ceph user in two different ways: Run the command as a ceph user: Syntax Example Run the command as a root user and then run chown to change the permissions: Run the command as a root user: Syntax Example Change the file permissions: Example Start ceph-mon again: Example Additional Resources The Ceph Monitor store is getting too big Ceph Monitor is out of quorum 4.6. Opening port for Ceph manager The ceph-mgr daemons receive placement group information from OSDs on the same range of ports as the ceph-osd daemons. If these ports are not open, the cluster health degrades from HEALTH_OK to HEALTH_WARN and indicates that a percentage of the placement groups (PGs) are in the unknown state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Manager node. Procedure To resolve this situation, for each host running ceph-mgr daemons, open ports 6800-7300 . Example Restart the ceph-mgr daemons. 4.7. Recovering the Ceph Monitor for bare-metal deployments If all the monitors are down in your Red Hat Ceph Storage cluster, and the ceph -s command does not execute as expected, you can recover the monitors using the monmaptool command. The monmaptool command rebuilds the Ceph monitor store from the keyring files of the daemons. Note This procedure is for bare-metal Red Hat Ceph Storage deployments only. For containerized Red Hat Ceph Storage deployments, see the Knowledgebase article MON recovery procedure for Red Hat Ceph Storage containerized deployment when all the three mon are down. Prerequisites A bare-metal deployed Red Hat Ceph Storage cluster. Root-level access to all the nodes. All the Ceph monitors are down. Procedure Log into the monitor node.
From the monitor nodes, if you are unable to access the OSD nodes without being the root user, then copy the public key to the OSD nodes: Generate the SSH key pair, accept the default file name, and leave the passphrase empty: Example Copy the public key to all the OSD nodes in the storage cluster: Example Stop the OSD daemon service on all the OSD nodes: Example To collect the cluster map from all the OSD nodes, create the recovery file and execute the script: Create the recovery file: Example Add the following content to the file and replace OSD_NODES with either the IP addresses or the host names of all the OSD nodes in the Red Hat Ceph Storage cluster: Syntax Provide executable permissions on the file: Example Execute the file to gather the keyrings of all the OSDs from all the OSD nodes in the storage cluster: Example Fetch the keyrings of other daemons from the respective nodes: For the Ceph Monitors, the keyring is the same for all the Ceph Monitors. Syntax Example For Ceph Manager, fetch the keyring from all the manager nodes: Syntax Example For Ceph OSDs, the keyrings are generated from the above script and stored in a temporary path: In this example, the OSD keyrings are stored in the /tmp/monstore/keyring file. For the client, fetch the keyring from all the client nodes: Syntax Example For the Metadata Server (MDS), fetch the keyring from all the Ceph MDS nodes: Syntax Example For this keyring, append the following caps if they do not exist: For Ceph Object Gateway, fetch the keyring from all the Ceph Object Gateway nodes: Syntax Example For this keyring, append the following caps if they do not exist: On the Ansible administration node, create a file with all the keyrings fetched in the previous steps: Example Optional: On each Ceph Monitor node, ensure that the monitor map is not available: Example From each Ceph Monitor node, fetch the MONITOR_ID , IP_ADDRESS_OF_MONITOR , and FSID from the /etc/ceph/ceph.conf file: Example On the Ceph Monitor node, rebuild the monitor map: Syntax Example On the Ceph Monitor node, check the generated monitor map: Syntax Example On the Ceph Monitor node where you are recovering the monitors, rebuild the Ceph Monitor store from the collected map: Syntax In this example, the recovery is run on the mons-1 node. Example Change the ownership of the monstore directory to ceph: Example On all the Ceph Monitor nodes, take a backup of the corrupted store: Example On all the Ceph Monitor nodes, replace the corrupted store: Example On all the Ceph Monitor nodes, change the owner of the new store: Example On all the Ceph OSD nodes, start the OSDs: Example On all the Ceph Monitor nodes, start the monitors: Example 4.8. Recovering the Ceph Monitor store Ceph Monitors store the cluster map in a key-value store such as LevelDB. If the store is corrupted on a Monitor, the Monitor terminates unexpectedly and fails to start again. The Ceph logs might include the following errors: Production Red Hat Ceph Storage clusters use at least three Ceph Monitors so that if one fails, it can be replaced with another one. However, under certain circumstances, all Ceph Monitors can have corrupted stores. For example, when the Ceph Monitor nodes have incorrectly configured disk or file system settings, a power outage can corrupt the underlying file system. If there is corruption on all Ceph Monitors, you can recover the store with information stored on the OSD nodes by using utilities called ceph-monstore-tool and ceph-objectstore-tool .
Important These procedures cannot recover the following information: Metadata Server (MDS) keyrings and maps Placement Group settings: full ratio set by using the ceph pg set_full_ratio command nearfull ratio set by using the ceph pg set_nearfull_ratio command Important Never restore the Ceph Monitor store from an old backup. Rebuild the Ceph Monitor store from the current cluster state using the following steps and restore from that. 4.8.1. Recovering the Ceph Monitor store when using BlueStore Follow this procedure if the Ceph Monitor store is corrupted on all Ceph Monitors and you use the BlueStore back end. In containerized environments, this method requires attaching Ceph repositories and restoring to a non-containerized Ceph Monitor first. Warning This procedure can cause data loss. If you are unsure about any step in this procedure, contact Red Hat Technical Support for assistance with the recovery process. Prerequisites Bare-metal deployments The rsync and ceph-test packages are installed. Container deployments All OSD containers are stopped. Enable Ceph repositories on the Ceph nodes based on their roles. The ceph-test and rsync packages are installed on the OSD and Monitor nodes. The ceph-mon package is installed on the Monitor nodes. The ceph-osd package is installed on the OSD nodes. Procedure If you use Ceph in containers , mount all disks with Ceph data to a temporary location. Repeat this step for all OSD nodes. List the data partitions. Use ceph-volume or ceph-disk depending on which utility you used to set up the devices: or Mount the data partitions to a temporary location: Restore the SELinux context: Replace OSD_ID with a numeric, space-separated list of Ceph OSD IDs on the OSD node. Change the owner and group to ceph:ceph : Replace OSD_ID with a numeric, space-separated list of Ceph OSD IDs on the OSD node. Important Due to a bug that causes the update-mon-db command to use additional db and db.slow directories for the Monitor database, you must also copy these directories. To do so: Prepare a temporary location outside the container to mount and access the OSD database and extract the OSD maps needed to restore the Ceph Monitor: Replace OSD-DATA with the Volume Group (VG) or Logical Volume (LV) path to the OSD data and OSD-ID with the ID of the OSD. Create a symbolic link between the BlueStore database and block.db : Replace BLUESTORE-DATABASE with the Volume Group (VG) or Logical Volume (LV) path to the BlueStore database and OSD-ID with the ID of the OSD. Use the following commands from the Ceph Monitor node with the corrupted store. Repeat them for all OSDs on all nodes. Collect the cluster map from all OSD nodes: Set the appropriate capabilities: Move all sst files from the db and db.slow directories to the temporary location: Rebuild the Monitor store from the collected map: Note After using this command, only keyrings extracted from the OSDs and the keyring specified on the ceph-monstore-tool command line are present in Ceph's authentication database. You have to recreate or import all other keyrings, such as clients, Ceph Manager, Ceph Object Gateway, and others, so those clients can access the cluster. Back up the corrupted store. Repeat this step for all Ceph Monitor nodes: Replace HOSTNAME with the host name of the Ceph Monitor node. Replace the corrupted store. Repeat this step for all Ceph Monitor nodes: Replace HOSTNAME with the host name of the Monitor node. Change the owner of the new store.
Repeat this step for all Ceph Monitor nodes: Replace HOSTNAME with the host name of the Ceph Monitor node. If you use Ceph in containers , then unmount all the temporarily mounted OSDs on all nodes: Start all the Ceph Monitor daemons: Ensure that the Monitors are able to form a quorum: Bare-metal deployments Containers Replace HOSTNAME with the host name of the Ceph Monitor node. Import the Ceph Manager keyring and start all Ceph Manager processes: Replace HOSTNAME with the host name of the Ceph Manager node. Start all OSD processes across all OSD nodes: Ensure that the OSDs are returning to service: Bare-metal deployments Containers Replace HOSTNAME with the host name of the Ceph Monitor node. Additional Resources For details on registering Ceph nodes to the Content Delivery Network (CDN), see the Registering Red Hat Ceph Storage Nodes to the CDN and Attaching Subscriptions section in the Red Hat Ceph Storage Installation Guide . For details on enabling repositories, see the Enabling the Red Hat Ceph Storage Repositories section in the Red Hat Ceph Storage Installation Guide . 4.9. Additional Resources See Chapter 3, Troubleshooting networking issues in the Red Hat Ceph Storage Troubleshooting Guide for network-related problems.
[ "HEALTH_WARN 1 mons down, quorum 1,2 mon.b,mon.c mon.a (rank 0) addr 127.0.0.1:6789/0 is down (out of quorum)", "systemctl status ceph-mon@ HOST_NAME systemctl start ceph-mon@ HOST_NAME", "Corruption: error in middle of record Corruption: 1 missing files; e.g.: /var/lib/ceph/mon/mon.0/store.db/1234567.ldb", "Caught signal (Bus error)", "ceph daemon ID mon_status", "ceph daemon mon.a mon_status", "mon.a (rank 0) addr 127.0.0.1:6789/0 is down (out of quorum) mon.a addr 127.0.0.1:6789/0 clock skew 0.08235s > max 0.05s (latency 0.0045s)", "2015-06-04 07:28:32.035795 7f806062e700 0 log [WRN] : mon.a 127.0.0.1:6789/0 clock skew 0.14s > max 0.05s 2015-06-04 04:31:25.773235 7f4997663700 0 log [WRN] : message from mon.1 was stamped 0.186257s in the future, clocks not synchronized", "mon.ceph1 store is getting too big! 48031 MB >= 15360 MB -- 62% avail", "du -sch /var/lib/ceph/mon/ CLUSTER_NAME - SHORT_HOST_NAME /store.db", "du -sch /var/lib/ceph/mon/ceph-host1/store.db 47G /var/lib/ceph/mon/ceph-ceph1/store.db/ 47G total", "{ \"name\": \"mon.3\", \"rank\": 2, \"state\": \"peon\", \"election_epoch\": 96, \"quorum\": [ 1, 2 ], \"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 1, \"fsid\": \"d5552d32-9d1d-436c-8db1-ab5fc2c63cd0\", \"modified\": \"0.000000\", \"created\": \"0.000000\", \"mons\": [ { \"rank\": 0, \"name\": \"mon.1\", \"addr\": \"172.25.1.10:6789\\/0\" }, { \"rank\": 1, \"name\": \"mon.2\", \"addr\": \"172.25.1.12:6789\\/0\" }, { \"rank\": 2, \"name\": \"mon.3\", \"addr\": \"172.25.1.13:6789\\/0\" } ] } }", "ceph mon getmap -o /tmp/monmap", "systemctl stop ceph-mon@<host-name>", "systemctl stop ceph-mon@host1", "ceph-mon -i ID --extract-monmap /tmp/monmap", "ceph-mon -i mon.a --extract-monmap /tmp/monmap", "systemctl stop ceph-mon@ HOST_NAME", "systemctl stop ceph-mon@host2", "su - ceph -c 'ceph-mon -i ID --inject-monmap /tmp/monmap'", "su - ceph -c 'ceph-mon -i mon.c --inject-monmap /tmp/monmap'", "ceph-mon -i ID --inject-monmap /tmp/monmap", "ceph-mon -i mon.c --inject-monmap /tmp/monmap", "chown -R ceph:ceph /var/lib/ceph/mon/ceph-c/", "systemctl start ceph-mon@host2", "systemctl start ceph-mon@host1", "rm -rf /var/lib/ceph/mon/ CLUSTER_NAME - SHORT_HOST_NAME", "rm -rf /var/lib/ceph/mon/remote-host1", "ceph mon remove SHORT_HOST_NAME --cluster CLUSTER_NAME", "ceph mon remove host1 --cluster remote", "/usr/share/ceph-ansible/ansible-playbook site.yml", "ceph tell mon. HOST_NAME compact", "ceph tell mon.host1 compact", "[mon] mon_compact_on_start = true", "systemctl restart ceph-mon@_HOST_NAME_", "systemctl restart ceph-mon@host1", "ceph mon stat", "systemctl status ceph-mon@ HOST_NAME systemctl stop ceph-mon@ HOST_NAME", "systemctl status ceph-mon@host1 systemctl stop ceph-mon@host1", "su - ceph -c 'ceph-monstore-tool /var/lib/ceph/mon/mon. HOST_NAME compact'", "su - ceph -c 'ceph-monstore-tool /var/lib/ceph/mon/mon.node1 compact'", "ceph-monstore-tool /var/lib/ceph/mon/mon. 
HOST_NAME compact", "ceph-monstore-tool /var/lib/ceph/mon/mon.node1 compact", "chown -R ceph:ceph /var/lib/ceph/mon/mon.node1", "systemctl start ceph-mon@ HOST_NAME", "systemctl start ceph-mon@host1", "firewall-cmd --add-port 6800-7300/tcp firewall-cmd --add-port 6800-7300/tcp --permanent", "ssh-keygen", "ssh-copy-id root@osds-1 ssh-copy-id root@osds-2 ssh-copy-id root@osds-3", "sudo systemctl stop ceph-osd\\*.service ceph-osd.target", "touch recover.sh", "-------------------------------------------------------------------------- NOTE: The directory names specified by 'ms', 'db', and 'db_slow' must end with a trailing / otherwise rsync will not operate properly. -------------------------------------------------------------------------- ms=/tmp/monstore/ db=/root/db/ db_slow=/root/db.slow/ mkdir -p USDms USDdb USDdb_slow -------------------------------------------------------------------------- NOTE: Replace the contents inside double quotes for 'osd_nodes' below with the list of OSD nodes in the environment. -------------------------------------------------------------------------- osd_nodes=\" OSD_NODES_1 OSD_NODES_2 OSD_NODES_3 ...\" for osd_node in USDosd_nodes; do echo \"Operating on USDosd_node\" rsync -avz --delete USDms USDosd_node:USDms rsync -avz --delete USDdb USDosd_node:USDdb rsync -avz --delete USDdb_slow USDosd_node:USDdb_slow ssh -t USDosd_node <<EOF for osd in /var/lib/ceph/osd/ceph-*; do ceph-objectstore-tool --type bluestore --data-path \\USDosd --op update-mon-db --no-mon-config --mon-store-path USDms if [ -e \\USDosd/keyring ]; then cat \\USDosd/keyring >> USDms/keyring echo ' caps mgr = \"allow profile osd\"' >> USDms/keyring echo ' caps mon = \"allow profile osd\"' >> USDms/keyring echo ' caps osd = \"allow *\"' >> USDms/keyring else echo WARNING: \\USDosd on USDosd_node does not have a local keyring. fi done EOF rsync -avz --delete --remove-source-files USDosd_node:USDms USDms rsync -avz --delete --remove-source-files USDosd_node:USDdb USDdb rsync -avz --delete --remove-source-files USDosd_node:USDdb_slow USDdb_slow done -------------------------------------------------------------------------- End of script ## --------------------------------------------------------------------------", "chmod 755 recover.sh", "./recovery.sh", "cat /var/lib/ceph/mon/ceph- MONITOR_NODE /keyring", "cat /var/lib/ceph/mon/ceph-mons-1/keyring", "cat /var/lib/ceph/mgr/ceph- MANAGER_NODE /keyring", "cat /var/lib/ceph/mgr/ceph-mons-1/keyring", "cat /etc/ceph/ CLIENT_KEYRING", "cat /etc/ceph/ceph.client.admin.keyring", "cat /var/lib/ceph/mds/ceph- MDS_NODE /keyring", "cat /var/lib/ceph/mds/ceph-mds-1/keyring", "caps mds = \"allow\" caps mon = \"allow profile mds\" caps osd = \"allow *\"", "cat /var/lib/ceph/radosgw/ceph- CEPH_OBJECT_GATEWAY_NODE /keyring", "cat /var/lib/ceph/radosgw/ceph-rgw-1/keyring", "caps mon = \"allow rw\" caps osd = \"allow *\"", "cat /tmp/monstore/keyring [mon.] 
key = AQAa+RxhAAAAABAApmwud0GQHX0raMBc9zTAYQ== caps mon = \"allow *\" [client.admin] key = AQAo+RxhcYWtGBAAiY4kKltMGnAXqPLM2A+B8w== caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\" [mgr.mons-1] key = AQA++RxhAAAAABAAKdG1ETTEMR8KPa9ZpfcIzw== caps mds = \"allow *\" caps mon = \"allow profile mgr\" caps osd = \"allow *\" [mgr.mons-2] key = AQA9+RxhAAAAABAAcCBxsoaIl0sdHTz3dqX4SQ== caps mds = \"allow *\" caps mon = \"allow profile mgr\" caps osd = \"allow *\" [mgr.mons-3] key = AQA/+RxhAAAAABAAwe/mwv0hS79fWP+00W6ypQ== caps mds = \"allow *\" caps mon = \"allow profile mgr\" caps osd = \"allow *\" [osd.1] key = AQB/+RxhlH8rFxAAL3mb8Kdb+QuWWdJi+RvwGw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.5] key = AQCE+RxhKSNsHRAAIyLO5g75tqFVsl6MEEzwXw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.8] key = AQCJ+Rxhc0wHJhAA5Bb2kU9Nadpm3UCLASnCfw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.2] key = AQB/+RxhhrQCGRAAUhh77gIVhN8zsTbaKMJuHw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.4] key = AQCE+Rxh0mDxDRAApAeqKOJycW5bpP3IuAhSMw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.7] key = AQCJ+Rxhn+RAIhAAp1ImK1jiazBsDpmTQvVEVw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.0] key = AQB/+RxhPhh+FRAAc5b0nwiuK6o1AIbjVc6tQg== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.3] key = AQCE+RxhJv8PARAAqCzH2br1xJmMTNnqH3I9mA== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.6] key = AQCI+RxhAt4eIhAAYQEJqSNRT7l2WNl/rYQcKQ== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.1] key = AQB/+RxhlH8rFxAAL3mb8Kdb+QuWWdJi+RvwGw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.5] key = AQCE+RxhKSNsHRAAIyLO5g75tqFVsl6MEEzwXw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.8] key = AQCJ+Rxhc0wHJhAA5Bb2kU9Nadpm3UCLASnCfw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.2] key = AQB/+RxhhrQCGRAAUhh77gIVhN8zsTbaKMJuHw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.0] key = AQB/+RxhPhh+FRAAc5b0nwiuK6o1AIbjVc6tQg== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.3] key = AQCE+RxhJv8PARAAqCzH2br1xJmMTNnqH3I9mA== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.6] key = AQCI+RxhAt4eIhAAYQEJqSNRT7l2WNl/rYQcKQ== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [mds.mds-1] key = AQDs+RxhAF9vERAAdn6ArdUJ31RLr2sBVkzp3A== caps mds = \"allow\" caps mon = \"allow profile mds\" caps osd = \"allow *\" [mds.mds-2] key = AQDs+RxhROoAFxAALAgMfM45wC5ht/vSFN2EzQ== caps mds = \"allow\" caps mon = \"allow profile mds\" caps osd = \"allow *\" [mds.mds-3] key = AQDs+Rxhd092FRAArXLIHAhMp2z9zcWDCSoIDQ== caps mds = \"allow\" caps mon = \"allow profile mds\" caps osd = \"allow *\" [client.rgw.rgws-1.rgw0] key = AQD9+Rxh0iP2MxAAYY76Js1AaZhzFG44cvcyOw== caps mon = \"allow rw\" caps osd = \"allow *\"", "ceph-monstore-tool /tmp/monstore get monmap -- --out /tmp/monmap monmaptool 
/tmp/monmap --print monmaptool: monmap file /tmp/monmap monmaptool: couldn't open /tmp/monmap: (2) No such file or directory Notice theNo such file or directory error message if monmap is missed Notice that the \"No such file or directory\" error message if monmap is missed", "[global] cluster network = 10.0.208.0/22 fsid = 9877bde8-ccb2-4758-89c3-90ca9550ffea mon host = [v2:10.0.211.00:3300,v1:10.0.211.00:6789],[v2:10.0.211.13:3300,v1:10.0.211.13:6789],[v2:10.0.210.13:3300,v1:10.0.210.13:6789] mon initial members = ceph-mons-1, ceph-mons-2, ceph-mons-3", "monmaptool --create --addv MONITOR_ID IP_ADDRESS_OF_MONITOR --enable-all-features --clobber PATH_OF_MONITOR_MAP --fsid FSID", "monmaptool --create --addv mons-1 [v2:10.74.177.30:3300,v1:10.74.177.30:6789] --addv mons-2 [v2:10.74.179.197:3300,v1:10.74.179.197:6789] --addv mons-3 [v2:10.74.182.123:3300,v1:10.74.182.123:6789] --enable-all-features --clobber /root/monmap.mons-1 --fsid 6c01cb34-33bf-44d0-9aec-3432276f6be8 monmaptool: monmap file /root/monmap.mons-1 monmaptool: set fsid to 6c01cb34-33bf-44d0-9aec-3432276f6be8 monmaptool: writing epoch 0 to /root/monmap.mon-a (3 monitors)", "monmaptool PATH_OF_MONITOR_MAP --print", "monmaptool /root/monmap.mons-1 --print monmaptool: monmap file /root/monmap.mons-1 epoch 0 fsid 6c01cb34-33bf-44d0-9aec-3432276f6be8 last_changed 2021-11-23 02:57:23.235505 created 2021-11-23 02:57:23.235505 min_mon_release 0 (unknown) election_strategy: 1 0: [v2:10.74.177.30:3300/0,v1:10.74.177.30:6789/0] mon.mons-1 1: [v2:10.74.179.197:3300/0,v1:10.74.179.197:6789/0] mon.mons-2 2: [v2:10.74.182.123:3300/0,v1:10.74.182.123:6789/0] mon.mons-3", "ceph-monstore-tool /tmp/monstore rebuild -- --keyring KEYRING_PATH --monmap PATH_OF_MONITOR_MAP", "ceph-monstore-tool /tmp/monstore rebuild -- --keyring /tmp/monstore/keyring --monmap /root/monmap.mons-1", "chown -R ceph:ceph /tmp/monstore", "mv /var/lib/ceph/mon/ceph-mons-1/store.db /var/lib/ceph/mon/ceph-mons-1/store.db.corrupted", "scp -r /tmp/monstore/store.db mons-1:/var/lib/ceph/mon/ceph-mons-1/", "chown -R ceph:ceph /var/lib/ceph/mon/ceph-HOSTNAME/store.db", "sudo systemctl start ceph-osd.target", "sudo systemctl start ceph-mon.target", "Corruption: error in middle of record Corruption: 1 missing files; e.g.: /var/lib/ceph/mon/mon.0/store.db/1234567.ldb", "ceph-volume lvm list", "ceph-disk list", "mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-USDi", "for i in { OSD_ID }; do restorecon /var/lib/ceph/osd/ceph-USDi; done", "for i in { OSD_ID }; do chown -R ceph:ceph /var/lib/ceph/osd/ceph-USDi; done", "ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev OSD-DATA --path /var/lib/ceph/osd/ceph- OSD-ID", "ln -snf BLUESTORE DATABASE /var/lib/ceph/osd/ceph- OSD-ID /block.db", "cd /root/ ms=/tmp/monstore/ db=/root/db/ db_slow=/root/db.slow/ mkdir USDms for host in USDosd_nodes; do echo \"USDhost\" rsync -avz USDms USDhost:USDms rsync -avz USDdb USDhost:USDdb rsync -avz USDdb_slow USDhost:USDdb_slow rm -rf USDms rm -rf USDdb rm -rf USDdb_slow sh -t USDhost <<EOF for osd in /var/lib/ceph/osd/ceph-*; do ceph-objectstore-tool --type bluestore --data-path \\USDosd --op update-mon-db --mon-store-path USDms done EOF rsync -avz USDhost:USDms USDms rsync -avz USDhost:USDdb USDdb rsync -avz USDhost:USDdb_slow USDdb_slow done", "ceph-authtool /etc/ceph/ceph.client.admin.keyring -n mon. --cap mon 'allow *' --gen-key cat /etc/ceph/ceph.client.admin.keyring [mon.] 
key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"", "mv /root/db/*.sst /root/db.slow/*.sst /tmp/monstore/store.db", "ceph-monstore-tool /tmp/monstore rebuild -- --keyring /etc/ceph/ceph.client.admin", "mv /var/lib/ceph/mon/ceph- HOSTNAME /store.db /var/lib/ceph/mon/ceph- HOSTNAME /store.db.corrupted", "scp -r /tmp/monstore/store.db HOSTNAME :/var/lib/ceph/mon/ceph- HOSTNAME /", "chown -R ceph:ceph /var/lib/ceph/mon/ceph- HOSTNAME /store.db", "umount /var/lib/ceph/osd/ceph-*", "systemctl start ceph-mon *", "ceph -s", "[user@admin ~]USD docker exec ceph-mon-_HOSTNAME_ ceph -s", "ceph auth import -i /etc/ceph/ceph.mgr. HOSTNAME .keyring systemctl start ceph-mgr@ HOSTNAME", "systemctl start ceph-osd *", "ceph -s", "[user@admin ~]USD docker exec ceph-mon-_HOSTNAME_ ceph -s" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/troubleshooting_guide/troubleshooting-ceph-monitors
C.12. LockManager
C.12. LockManager org.infinispan.util.concurrent.locks.LockManagerImpl The LockManager component handles MVCC locks for entries. Table C.21. Attributes Name Description Type Writable concurrencyLevel The concurrency level that the MVCC Lock Manager has been configured with. int No numberOfLocksAvailable The number of exclusive locks that are available. int No numberOfLocksHeld The number of exclusive locks that are held. int No
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/lockmanager
Chapter 12. Installing on Nutanix
Chapter 12. Installing on Nutanix If you install OpenShift Container Platform on Nutanix, the Assisted Installer can integrate the OpenShift Container Platform cluster with the Nutanix platform, which exposes the Machine API to Nutanix and enables autoscaling and dynamically provisioning storage containers with the Nutanix Container Storage Interface (CSI). Important To deploy an OpenShift Container Platform cluster and maintain its daily operation, you need access to a Nutanix account with the necessary environment requirements. For details, see Environment requirements . 12.1. Adding hosts on Nutanix with the UI To add hosts on Nutanix with the user interface (UI), generate the minimal discovery image ISO from the Assisted Installer. This downloads a smaller image that will fetch the data needed to boot a host with networking and is the default setting. The majority of the content downloads upon boot. The ISO image is about 100MB in size. After this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines. Prerequisites You have created a cluster profile in the Assisted Installer UI. You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name. Procedure In the Cluster details page, select Nutanix from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional. In the Host discovery page, click the Add hosts button. Optional: Add an SSH public key so that you can connect to the Nutanix VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access . In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key or drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu. Select the required provisioning type. Note Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot. In Networking , select Cluster-managed networking . Nutanix does not support User-managed networking . Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Note The proxy username and password must be URL-encoded. Optional: Configure the discovery image if you want to boot it with an ignition file. See Configuring the discovery image for additional details. Click Generate Discovery ISO . Copy the Discovery ISO URL . In the Nutanix Prism UI, follow the directions to upload the discovery image from the Assisted Installer . In the Nutanix Prism UI, create the control plane (master) VMs through Prism Central . Enter the Name . For example, control-plane or master . Enter the Number of VMs . This should be 3, 4, or 5 for the control plane. Ensure the remaining settings meet the minimum requirements for control plane hosts. In the Nutanix Prism UI, create the worker VMs through Prism Central . Enter the Name . For example, worker . Enter the Number of VMs . You should create at least 2 worker nodes. Ensure the remaining settings meet the minimum requirements for worker hosts. 
Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them has a Ready status. Continue with the installation procedure. 12.2. Adding hosts on Nutanix with the API To add hosts on Nutanix with the API, generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. Once this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines. Prerequisites You have set up the Assisted Installer API authentication. You have created an Assisted Installer cluster profile. You have created an Assisted Installer infrastructure environment. You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID . You have completed the Assisted Installer cluster configuration. You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name. Procedure Configure the discovery image if you want it to boot with an ignition file. Create a Nutanix cluster configuration file to hold the environment variables: $ touch ~/nutanix-cluster-env.sh $ chmod +x ~/nutanix-cluster-env.sh If you have to start a new terminal session, you can reload the environment variables easily. For example: $ source ~/nutanix-cluster-env.sh Assign the Nutanix cluster's name to the NTX_CLUSTER_NAME environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_NAME=<cluster_name> EOF Replace <cluster_name> with the name of the Nutanix cluster. Assign the Nutanix cluster's subnet name to the NTX_SUBNET_NAME environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_NAME=<subnet_name> EOF Replace <subnet_name> with the name of the Nutanix cluster's subnet. Refresh the API token: $ source refresh-token Get the download URL: $ curl -H "Authorization: Bearer ${API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url Create the Nutanix image configuration file: $ cat << EOF > create-image.json { "spec": { "name": "ocp_ai_discovery_image.iso", "description": "ocp_ai_discovery_image.iso", "resources": { "architecture": "X86_64", "image_type": "ISO_IMAGE", "source_uri": "<image_url>", "source_options": { "allow_insecure_connection": true } } }, "metadata": { "spec_version": 3, "kind": "image" } } EOF Replace <image_url> with the image URL downloaded in the previous step. Create the Nutanix image: $ curl -k -u <user>:'<password>' -X 'POST' \ 'https://<domain-or-ip>:<port>/api/nutanix/v3/images' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d @./create-image.json | jq '.metadata.uuid' Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 .
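Because the image creation call already pipes its response through jq to print the UUID, you can optionally capture the value into the environment file in one step. This is a convenience sketch that assumes the same placeholders and credentials as the command above; otherwise, continue with the manual assignment in the next step:

$ NTX_IMAGE_UUID=$(curl -sk -u <user>:'<password>' -X 'POST' \
  'https://<domain-or-ip>:<port>/api/nutanix/v3/images' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d @./create-image.json | jq -r '.metadata.uuid')
$ echo "export NTX_IMAGE_UUID=$NTX_IMAGE_UUID" >> ~/nutanix-cluster-env.sh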
Assign the returned UUID to the NTX_IMAGE_UUID environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_IMAGE_UUID=<uuid> EOF Replace <uuid> with the returned UUID of the image. Get the Nutanix cluster UUID: $ curl -k -u <user>:'<password>' -X 'POST' \ 'https://<domain-or-ip>:<port>/api/nutanix/v3/clusters/list' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d '{ "kind": "cluster" }' | jq '.entities[] | select(.spec.name=="<nutanix_cluster_name>") | .metadata.uuid' Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <nutanix_cluster_name> with the name of the Nutanix cluster. Assign the returned Nutanix cluster UUID to the NTX_CLUSTER_UUID environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_UUID=<uuid> EOF Replace <uuid> with the returned UUID of the Nutanix cluster. Get the Nutanix cluster's subnet UUID: $ curl -k -u <user>:'<password>' -X 'POST' \ 'https://<domain-or-ip>:<port>/api/nutanix/v3/subnets/list' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d '{ "kind": "subnet", "filter": "name==<subnet_name>" }' | jq '.entities[].metadata.uuid' Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <subnet_name> with the name of the cluster's subnet. Assign the returned Nutanix subnet UUID to the NTX_SUBNET_UUID environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_UUID=<uuid> EOF Replace <uuid> with the returned UUID of the cluster subnet. Ensure the Nutanix environment variables are set: $ source ~/nutanix-cluster-env.sh Create a VM configuration file for each Nutanix host. Create three to five control plane (master) VMs and at least two worker VMs. For example: $ touch create-master-0.json $ cat << EOF > create-master-0.json { "spec": { "name": "<host_name>", "resources": { "power_state": "ON", "num_vcpus_per_socket": 1, "num_sockets": 16, "memory_size_mib": 32768, "disk_list": [ { "disk_size_mib": 122880, "device_properties": { "device_type": "DISK" } }, { "device_properties": { "device_type": "CDROM" }, "data_source_reference": { "kind": "image", "uuid": "$NTX_IMAGE_UUID" } } ], "nic_list": [ { "nic_type": "NORMAL_NIC", "is_connected": true, "ip_endpoint_list": [ { "ip_type": "DHCP" } ], "subnet_reference": { "kind": "subnet", "name": "$NTX_SUBNET_NAME", "uuid": "$NTX_SUBNET_UUID" } } ], "guest_tools": { "nutanix_guest_tools": { "state": "ENABLED", "iso_mount_state": "MOUNTED" } } }, "cluster_reference": { "kind": "cluster", "name": "$NTX_CLUSTER_NAME", "uuid": "$NTX_CLUSTER_UUID" } }, "api_version": "3.1.0", "metadata": { "kind": "vm" } } EOF Replace <host_name> with the name of the host. Boot each Nutanix virtual machine: $ curl -k -u <user>:'<password>' -X 'POST' \ 'https://<domain-or-ip>:<port>/api/nutanix/v3/vms' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d @./<vm_config_file_name> | jq '.metadata.uuid' Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password.
Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <vm_config_file_name> with the name of the VM configuration file. Assign the returned VM UUID to a unique environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_MASTER_0_UUID=<uuid> EOF Replace <uuid> with the returned UUID of the VM. Note The environment variable must have a unique name for each VM. Wait until the Assisted Installer has discovered each VM and they have passed validation. $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" --header "Content-Type: application/json" -H "Authorization: Bearer $API_TOKEN" | jq '.enabled_host_count' Modify the cluster definition to enable integration with Nutanix: $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer ${API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "platform_type":"nutanix" } ' | jq Continue with the installation procedure. 12.3. Nutanix postinstallation configuration Follow the steps below to complete and validate the OpenShift Container Platform integration with the Nutanix cloud provider. Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . You have access to the Red Hat OpenShift Container Platform command line interface. Note By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can host the RHCOS image on any HTTP server and point the installation program to the image, or you can use Prism Central to upload the image manually. 12.3.1. Updating the Nutanix configuration settings After installing OpenShift Container Platform on the Nutanix platform using the Assisted Installer, you must update the following Nutanix configuration settings manually: <prismcentral_username> : The Nutanix Prism Central username. <prismcentral_password> : The Nutanix Prism Central password. <prismcentral_address> : The Nutanix Prism Central address. <prismcentral_port> : The Nutanix Prism Central port. <prismelement_username> : The Nutanix Prism Element username. <prismelement_password> : The Nutanix Prism Element password. <prismelement_address> : The Nutanix Prism Element address. <prismelement_port> : The Nutanix Prism Element port. <prismelement_clustername> : The Nutanix Prism Element cluster name. <nutanix_storage_container> : The Nutanix Prism storage container. Note Optional: You can define prism category key and value pairs. These category key-value pairs must exist in Prism Central and can be defined in separate categories for compute nodes, control plane nodes, or all nodes. Procedure In the OpenShift Container Platform command line interface, update the Nutanix cluster configuration settings: $ oc patch infrastructure/cluster --type=merge --patch-file=/dev/stdin <<-EOF { "spec": { "platformSpec": { "nutanix": { "prismCentral": { "address": "<prismcentral_address>", "port": <prismcentral_port> }, "prismElements": [ { "endpoint": { "address": "<prismelement_address>", "port": <prismelement_port> }, "name": "<prismelement_clustername>" } ] }, "type": "Nutanix" } } } EOF Example output infrastructure.config.openshift.io/cluster patched For additional details, see Creating a machine set on Nutanix .
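Optionally, to confirm that the infrastructure patch was applied before continuing (a quick check, not part of the original procedure), inspect the platform spec you just patched:

$ oc get infrastructure/cluster -o jsonpath='{.spec.platformSpec.type}{"\n"}{.spec.platformSpec.nutanix.prismCentral.address}{"\n"}'

The output should show Nutanix followed by the Prism Central address that you configured.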
Create the Nutanix secret: USD cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: nutanix-credentials namespace: openshift-machine-api type: Opaque stringData: credentials: | [{"type":"basic_auth","data":{"prismCentral":{"username":"USD{<prismcentral_username>}","password":"USD{<prismcentral_password>}"},"prismElements":null}}] EOF Example output secret/nutanix-credentials created When installing OpenShift Container Platform version 4.13 or later, update the Nutanix cloud provider configuration: Get the Nutanix cloud provider configuration YAML file: USD oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config-backup.yaml Create a working copy of the backup configuration file: USD cp cloud-provider-config-backup.yaml cloud-provider-config.yaml Open the configuration YAML file: USD vi cloud-provider-config.yaml Edit the configuration YAML file as follows: kind: ConfigMap apiVersion: v1 metadata: name: cloud-provider-config namespace: openshift-config data: config: | { "prismCentral": { "address": "<prismcentral_address>", "port":<prismcentral_port>, "credentialRef": { "kind": "Secret", "name": "nutanix-credentials", "namespace": "openshift-cloud-controller-manager" } }, "topologyDiscovery": { "type": "Prism", "topologyCategories": null }, "enableCustomLabeling": true } Apply the configuration updates: USD oc apply -f cloud-provider-config.yaml Example output Warning: resource configmaps/cloud-provider-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically. configmap/cloud-provider-config configured 12.3.2. Creating the Nutanix CSI Operator group Create an Operator group for the Nutanix CSI Operator. Note For a description of operator groups and related concepts, see Common Operator Framework Terms in Additional Resources . Procedure Open the Nutanix CSI Operator Group YAML file: USD vi openshift-cluster-csi-drivers-operator-group.yaml Edit the YAML file as follows: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-cluster-csi-drivers namespace: openshift-cluster-csi-drivers spec: targetNamespaces: - openshift-cluster-csi-drivers upgradeStrategy: Default Create the Operator Group: USD oc create -f openshift-cluster-csi-drivers-operator-group.yaml Example output operatorgroup.operators.coreos.com/openshift-cluster-csi-driversjw9cd created 12.3.3. Installing the Nutanix CSI Operator The Nutanix Container Storage Interface (CSI) Operator for Kubernetes deploys and manages the Nutanix CSI Driver. Note For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the Operator section of the Nutanix CSI Operator document in Additional Resources .
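Before creating the Operator subscription in the next section, you can optionally confirm that the Operator group created above exists. This is an assumed quick check rather than part of the official procedure; because the Operator group uses generateName , its full name carries a generated suffix:
USD oc get operatorgroup -n openshift-cluster-csi-drivers
The command should list one Operator group whose name starts with openshift-cluster-csi-drivers .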
Procedure Get the parameter values for the Nutanix CSI Operator YAML file: Check that the Nutanix CSI Operator exists: USD oc get packagemanifests | grep nutanix Example output nutanixcsioperator Certified Operators 129m Assign the default channel for the Operator to a BASH variable: USD DEFAULT_CHANNEL=USD(oc get packagemanifests nutanixcsioperator -o jsonpath={.status.defaultChannel}) Assign the starting cluster service version (CSV) for the Operator to a BASH variable: USD STARTING_CSV=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.channels[*].currentCSV\}) Assign the catalog source for the subscription to a BASH variable: USD CATALOG_SOURCE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSource\}) Assign the Nutanix CSI Operator source namespace to a BASH variable: USD SOURCE_NAMESPACE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSourceNamespace\}) Create the Nutanix CSI Operator YAML file using the BASH variables: USD cat << EOF > nutanixcsioperator.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nutanixcsioperator namespace: openshift-cluster-csi-drivers spec: channel: USDDEFAULT_CHANNEL installPlanApproval: Automatic name: nutanixcsioperator source: USDCATALOG_SOURCE sourceNamespace: USDSOURCE_NAMESPACE startingCSV: USDSTARTING_CSV EOF Create the Nutanix CSI Operator: USD oc apply -f nutanixcsioperator.yaml Example output subscription.operators.coreos.com/nutanixcsioperator created Run the following command until the Operator subscription state changes to AtLatestKnown . This indicates that the Operator subscription has been created; the state change may take some time. USD oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}' 12.3.4. Deploying the Nutanix CSI storage driver The Nutanix Container Storage Interface (CSI) Driver for Kubernetes provides scalable and persistent storage for stateful applications. Note For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the CSI Driver using the Operator section of the Nutanix CSI Operator document in Additional Resources . Procedure Create a NutanixCsiStorage resource to deploy the driver: USD cat <<EOF | oc create -f - apiVersion: crd.nutanix.com/v1alpha1 kind: NutanixCsiStorage metadata: name: nutanixcsistorage namespace: openshift-cluster-csi-drivers spec: {} EOF Example output nutanixcsistorage.crd.nutanix.com/nutanixcsistorage created Create a Nutanix secret YAML file for the CSI storage driver: USD cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: ntnx-secret namespace: openshift-cluster-csi-drivers stringData: # prism-element-ip:prism-port:admin:password key: <prismelement_address:prismelement_port:prismcentral_username:prismcentral_password> 1 EOF Note 1 Replace these parameters with actual values while keeping the same format. Example output secret/ntnx-secret created 12.3.5. Validating the postinstallation configurations Run the following steps to validate the configuration.
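Before running the validation steps, you can optionally spot-check that the CSI driver components started. This quick check is a convenience assumption and not part of the official procedure; pod names vary by driver version:
USD oc get pods -n openshift-cluster-csi-drivers
All listed pods should reach the Running state before you create the storage class and persistent volume claim below.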
Procedure Verify that you can create a storage class: USD cat <<EOF | oc create -f - kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: nutanix-volume annotations: storageclass.kubernetes.io/is-default-class: 'true' provisioner: csi.nutanix.com parameters: csi.storage.k8s.io/fstype: ext4 csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers csi.storage.k8s.io/provisioner-secret-name: ntnx-secret storageContainer: <nutanix_storage_container> 1 csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret csi.storage.k8s.io/node-publish-secret-namespace: openshift-cluster-csi-drivers storageType: NutanixVolumes csi.storage.k8s.io/node-publish-secret-name: ntnx-secret csi.storage.k8s.io/controller-expand-secret-namespace: openshift-cluster-csi-drivers reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate EOF Note 1 Take <nutanix_storage_container> from the Nutanix configuration; for example, SelfServiceContainer. Example output storageclass.storage.k8s.io/nutanix-volume created Verify that you can create the Nutanix persistent volume claim (PVC): Create the persistent volume claim (PVC): USD cat <<EOF | oc create -f - kind: PersistentVolumeClaim apiVersion: v1 metadata: name: nutanix-volume-pvc namespace: openshift-cluster-csi-drivers annotations: volume.beta.kubernetes.io/storage-provisioner: csi.nutanix.com volume.kubernetes.io/storage-provisioner: csi.nutanix.com finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: nutanix-volume volumeMode: Filesystem EOF Example output persistentvolumeclaim/nutanix-volume-pvc created Validate that the persistent volume claim (PVC) status is Bound: USD oc get pvc -n openshift-cluster-csi-drivers Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nutanix-volume-pvc Bound nutanix-volume 52s Additional resources Creating a machine set on Nutanix . Nutanix CSI Operator Storage Management Common Operator Framework terms
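If you created the storage class and PVC above only to validate the deployment, you can optionally remove the test PVC afterwards. This cleanup is an assumption for test environments and is not part of the official procedure; keep the PVC if any workload uses it:
USD oc delete pvc nutanix-volume-pvc -n openshift-cluster-csi-drivers
Because the storage class uses the Delete reclaim policy, removing the PVC also releases the backing Nutanix volume.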
[ "touch ~/nutanix-cluster-env.sh", "chmod +x ~/nutanix-cluster-env.sh", "source ~/nutanix-cluster-env.sh", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_NAME=<cluster_name> EOF", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_NAME=<subnet_name> EOF", "source refresh-token", "curl -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/downloads/image-url", "cat << EOF > create-image.json { \"spec\": { \"name\": \"ocp_ai_discovery_image.iso\", \"description\": \"ocp_ai_discovery_image.iso\", \"resources\": { \"architecture\": \"X86_64\", \"image_type\": \"ISO_IMAGE\", \"source_uri\": \"<image_url>\", \"source_options\": { \"allow_insecure_connection\": true } } }, \"metadata\": { \"spec_version\": 3, \"kind\": \"image\" } } EOF", "curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/images -H 'accept: application/json' -H 'Content-Type: application/json' -d @./create-image.json | jq '.metadata.uuid'", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_IMAGE_UUID=<uuid> EOF", "curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/clusters/list' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{ \"kind\": \"cluster\" }' | jq '.entities[] | select(.spec.name==\"<nutanix_cluster_name>\") | .metadata.uuid'", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_UUID=<uuid> EOF", "curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/subnets/list' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{ \"kind\": \"subnet\", \"filter\": \"name==<subnet_name>\" }' | jq '.entities[].metadata.uuid'", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_UUID=<uuid> EOF", "source ~/nutanix-cluster-env.sh", "touch create-master-0.json", "cat << EOF > create-master-0.json { \"spec\": { \"name\": \"<host_name>\", \"resources\": { \"power_state\": \"ON\", \"num_vcpus_per_socket\": 1, \"num_sockets\": 16, \"memory_size_mib\": 32768, \"disk_list\": [ { \"disk_size_mib\": 122880, \"device_properties\": { \"device_type\": \"DISK\" } }, { \"device_properties\": { \"device_type\": \"CDROM\" }, \"data_source_reference\": { \"kind\": \"image\", \"uuid\": \"USDNTX_IMAGE_UUID\" } } ], \"nic_list\": [ { \"nic_type\": \"NORMAL_NIC\", \"is_connected\": true, \"ip_endpoint_list\": [ { \"ip_type\": \"DHCP\" } ], \"subnet_reference\": { \"kind\": \"subnet\", \"name\": \"USDNTX_SUBNET_NAME\", \"uuid\": \"USDNTX_SUBNET_UUID\" } } ], \"guest_tools\": { \"nutanix_guest_tools\": { \"state\": \"ENABLED\", \"iso_mount_state\": \"MOUNTED\" } } }, \"cluster_reference\": { \"kind\": \"cluster\", \"name\": \"USDNTX_CLUSTER_NAME\", \"uuid\": \"USDNTX_CLUSTER_UUID\" } }, \"api_version\": \"3.1.0\", \"metadata\": { \"kind\": \"vm\" } } EOF", "curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/vms' -H 'accept: application/json' -H 'Content-Type: application/json' -d @./<vm_config_file_name> | jq '.metadata.uuid'", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_MASTER_0_UUID=<uuid> EOF", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.enabled_host_count'", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d 
' { \"platform_type\":\"nutanix\" } ' | jq", "oc patch infrastructure/cluster --type=merge --patch-file=/dev/stdin <<-EOF { \"spec\": { \"platformSpec\": { \"nutanix\": { \"prismCentral\": { \"address\": \"<prismcentral_address>\", \"port\": <prismcentral_port> }, \"prismElements\": [ { \"endpoint\": { \"address\": \"<prismelement_address>\", \"port\": <prismelement_port> }, \"name\": \"<prismelement_clustername>\" } ] }, \"type\": \"Nutanix\" } } } EOF", "infrastructure.config.openshift.io/cluster patched", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: nutanix-credentials namespace: openshift-machine-api type: Opaque stringData: credentials: | [{\"type\":\"basic_auth\",\"data\":{\"prismCentral\":{\"username\":\"USD{<prismcentral_username>}\",\"password\":\"USD{<prismcentral_password>}\"},\"prismElements\":null}}] EOF", "secret/nutanix-credentials created", "oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config-backup.yaml", "cp cloud-provider-config_backup.yaml cloud-provider-config.yaml", "vi cloud-provider-config.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: cloud-provider-config namespace: openshift-config data: config: | { \"prismCentral\": { \"address\": \"<prismcentral_address>\", \"port\":<prismcentral_port>, \"credentialRef\": { \"kind\": \"Secret\", \"name\": \"nutanix-credentials\", \"namespace\": \"openshift-cloud-controller-manager\" } }, \"topologyDiscovery\": { \"type\": \"Prism\", \"topologyCategories\": null }, \"enableCustomLabeling\": true }", "oc apply -f cloud-provider-config.yaml", "Warning: resource configmaps/cloud-provider-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically. 
configmap/cloud-provider-config configured", "vi openshift-cluster-csi-drivers-operator-group.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-cluster-csi-drivers namespace: openshift-cluster-csi-drivers spec: targetNamespaces: - openshift-cluster-csi-drivers upgradeStrategy: Default", "oc create -f openshift-cluster-csi-drivers-operator-group.yaml", "operatorgroup.operators.coreos.com/openshift-cluster-csi-driversjw9cd created", "oc get packagemanifests | grep nutanix", "nutanixcsioperator Certified Operators 129m", "DEFAULT_CHANNEL=USD(oc get packagemanifests nutanixcsioperator -o jsonpath={.status.defaultChannel})", "STARTING_CSV=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.channels[*].currentCSV\\})", "CATALOG_SOURCE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.catalogSource\\})", "SOURCE_NAMESPACE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.catalogSourceNamespace\\})", "cat << EOF > nutanixcsioperator.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nutanixcsioperator namespace: openshift-cluster-csi-drivers spec: channel: USDDEFAULT_CHANNEL installPlanApproval: Automatic name: nutanixcsioperator source: USDCATALOG_SOURCE sourceNamespace: USDSOURCE_NAMESPACE startingCSV: USDSTARTING_CSV EOF", "oc apply -f nutanixcsioperator.yaml", "subscription.operators.coreos.com/nutanixcsioperator created", "oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}'", "cat <<EOF | oc create -f - apiVersion: crd.nutanix.com/v1alpha1 kind: NutanixCsiStorage metadata: name: nutanixcsistorage namespace: openshift-cluster-csi-drivers spec: {} EOF", "snutanixcsistorage.crd.nutanix.com/nutanixcsistorage created", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: ntnx-secret namespace: openshift-cluster-csi-drivers stringData: # prism-element-ip:prism-port:admin:password key: <prismelement_address:prismelement_port:prismcentral_username:prismcentral_password> 1 EOF", "secret/nutanix-secret created", "cat <<EOF | oc create -f - kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: nutanix-volume annotations: storageclass.kubernetes.io/is-default-class: 'true' provisioner: csi.nutanix.com parameters: csi.storage.k8s.io/fstype: ext4 csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers csi.storage.k8s.io/provisioner-secret-name: ntnx-secret storageContainer: <nutanix_storage_container> 1 csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret csi.storage.k8s.io/node-publish-secret-namespace: openshift-cluster-csi-drivers storageType: NutanixVolumes csi.storage.k8s.io/node-publish-secret-name: ntnx-secret csi.storage.k8s.io/controller-expand-secret-namespace: openshift-cluster-csi-drivers reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate EOF", "storageclass.storage.k8s.io/nutanix-volume created", "cat <<EOF | oc create -f - kind: PersistentVolumeClaim apiVersion: v1 metadata: name: nutanix-volume-pvc namespace: openshift-cluster-csi-drivers annotations: volume.beta.kubernetes.io/storage-provisioner: csi.nutanix.com volume.kubernetes.io/storage-provisioner: csi.nutanix.com finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: nutanix-volume volumeMode: Filesystem EOF", "persistentvolumeclaim/nutanix-volume-pvc created", "oc get pvc -n 
openshift-cluster-csi-drivers", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nutanix-volume-pvc Bound nutanix-volume 52s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_installing-on-nutanix
Chapter 3. Themes
Chapter 3. Themes Red Hat build of Keycloak provides theme support for web pages and emails. This allows customizing the look and feel of end-user facing pages so they can be integrated with your applications. Figure 3.1. Login page with sunrise example theme 3.1. Theme types A theme can provide one or more types to customize different aspects of Red Hat build of Keycloak. The types available are: Account - Account Console Admin - Admin Console Email - Emails Login - Login forms Welcome - Welcome page 3.2. Configuring a theme All theme types, except welcome, are configured through the Admin Console. Procedure Log into the Admin Console. Select your realm from the drop-down box in the top left corner. Click Realm Settings from the menu. Click the Themes tab. Note To set the theme for the master Admin Console you need to set the Admin Console theme for the master realm. To see the changes to the Admin Console refresh the page. Change the welcome theme by using the spi-theme-welcome-theme option. For example: bin/kc.[sh|bat] start --spi-theme-welcome-theme=custom-theme 3.3. Default themes Red Hat build of Keycloak comes bundled with default themes in the JAR file keycloak-themes-24.0.10.redhat-00001.jar inside the server distribution. The server's root themes directory does not contain any themes by default, but it contains a README file with some additional details about the default themes. To simplify upgrading, do not edit the bundled themes directly. Instead create your own theme that extends one of the bundled themes. 3.4. Creating a theme A theme consists of: HTML templates ( Freemarker Templates ) Images Message bundles Stylesheets Scripts Theme properties Unless you plan to replace every single page you should extend another theme. Most likely you will want to extend some existing theme. Alternatively, if you intend to provide your own implementation of the admin or account console, consider extending the base theme. The base theme consists of a message bundle and therefore such implementation needs to start from scratch, including implementation of the main index.ftl Freemarker template, but it can leverage existing translations from the message bundle. When extending a theme you can override individual resources (templates, stylesheets, etc.). If you decide to override HTML templates bear in mind that you may need to update your custom template when upgrading to a new release. While creating a theme it's a good idea to disable caching as this makes it possible to edit theme resources directly from the themes directory without restarting Red Hat build of Keycloak. Procedure Run Keycloak with the following options: bin/kc.[sh|bat] start --spi-theme-static-max-age=-1 --spi-theme-cache-themes=false --spi-theme-cache-templates=false Create a directory in the themes directory. The name of the directory becomes the name of the theme. For example to create a theme called mytheme create the directory themes/mytheme . Inside the theme directory, create a directory for each of the types your theme is going to provide. For example, to add the login type to the mytheme theme, create the directory themes/mytheme/login . For each type create a file theme.properties which allows setting some configuration for the theme. For example, to configure the theme themes/mytheme/login to extend the base theme and import some common resources, create the file themes/mytheme/login/theme.properties with following contents: You have now created a theme with support for the login type. 
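The directory layout and theme.properties file from this procedure can also be created from a shell. The following is a minimal sketch that assumes you run it from the root of the server distribution; the two properties extend the base theme and import the common resources, as described above:
USD mkdir -p themes/mytheme/login
USD cat << EOF > themes/mytheme/login/theme.properties
parent=base
import=common/keycloak
EOF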
Log into the Admin Console to check out your new theme. Select your realm. Click Realm Settings from the menu. Click on the Themes tab. For Login Theme select mytheme and click Save . Open the login page for the realm. You can do this either by logging in through your application or by opening the Account Console ( /realms/{realm name}/account ). To see the effect of changing the parent theme, set parent=keycloak in theme.properties and refresh the login page. Note Be sure to re-enable caching in production as it will significantly impact performance. Note If you want to manually delete the content of the themes cache, you can do so by deleting the data/tmp/kc-gzip-cache directory of the server distribution. It can be useful for instance if you redeployed custom providers or custom themes without disabling themes caching in the server executions. 3.4.1. Theme properties Theme properties are set in the file <THEME TYPE>/theme.properties in the theme directory. parent - Parent theme to extend import - Import resources from another theme common - Override the common resource path. The default value is common/keycloak when not specified. This value is used as the suffix of USD{url.resourcesCommonPath} , which is typically used in freemarker templates (the prefix of the USD{url.resourcesCommonPath} value is the theme root URI). styles - Space-separated list of styles to include locales - Comma-separated list of supported locales There is a list of properties that can be used to change the css class used for certain element types. For a list of these properties look at the theme.properties file in the corresponding type of the keycloak theme ( themes/keycloak/<THEME TYPE>/theme.properties ). You can also add your own custom properties and use them from custom templates. When doing so, you can substitute system properties or environment variables by using these formats: USD{some.system.property} - for system properties USD{env.ENV_VAR} - for environment variables. A default value can also be provided in case the system property or the environment variable is not found with USD{foo:defaultValue} . Note If no default value is provided and there's no corresponding system property or environment variable, then nothing is replaced and you end up with the format in your template. Here's an example of what is possible: javaVersion=USD{java.version} unixHome=USD{env.HOME:Unix home not found} windowsHome=USD{env.HOMEPATH:Windows home not found} 3.4.2. Add a stylesheet to a theme You can add one or more stylesheets to a theme. Procedure Create a file in the <THEME TYPE>/resources/css directory of your theme. Add this file to the styles property in theme.properties . For example, to add styles.css to the mytheme , create themes/mytheme/login/resources/css/styles.css with the following content: .login-pf body { background: DimGrey none; } Edit themes/mytheme/login/theme.properties and add: styles=css/styles.css To see the changes, open the login page for your realm. You will notice that the only styles being applied are those from your custom stylesheet. To include the styles from the parent theme, load the styles from that theme. Edit themes/mytheme/login/theme.properties and change styles to: styles=css/login.css css/styles.css Note To override styles from the parent stylesheets, ensure that your stylesheet is listed last. 3.4.3. Adding a script to a theme You can add one or more scripts to a theme. Procedure Create a file in the <THEME TYPE>/resources/js directory of your theme. Add the file to the scripts property in theme.properties .
For example, to add script.js to the mytheme , create themes/mytheme/login/resources/js/script.js with the following content: alert('Hello'); Then edit themes/mytheme/login/theme.properties and add: scripts=js/script.js 3.4.4. Adding an image to a theme To make images available to the theme add them to the <THEME TYPE>/resources/img directory of your theme. These can be used from within stylesheets or directly in HTML templates. For example to add an image to the mytheme copy an image to themes/mytheme/login/resources/img/image.jpg . You can then use this image from within a custom stylesheet with: body { background-image: url('../img/image.jpg'); background-size: cover; } Or to use directly in HTML templates add the following to a custom HTML template: <img src="USD{url.resourcesPath}/img/image.jpg" alt="My image description"> 3.4.5. Adding an image to an email theme To make images available to the theme add them to the <THEME TYPE>/email/resources/img directory of your theme. These can be used directly in HTML templates. For example to add an image to the mytheme copy an image to themes/mytheme/email/resources/img/logo.jpg . To use directly in HTML templates add the following to a custom HTML template: <img src="USD{url.resourcesUrl}/img/image.jpg" alt="My image description"> 3.4.6. Messages Text in the templates is loaded from message bundles. A theme that extends another theme will inherit all messages from the parent's message bundle and you can override individual messages by adding <THEME TYPE>/messages/messages_en.properties to your theme. For example to replace Username on the login form with Your Username for the mytheme create the file themes/mytheme/login/messages/messages_en.properties with the following content: usernameOrEmail=Your Username Within a message values like {0} and {1} are replaced with arguments when the message is used. For example {0} in Log in to {0} is replaced with the name of the realm. Texts of these message bundles can be overwritten by realm-specific values. The realm-specific values are manageable via UI and API. 3.4.7. Adding a language to a realm Prerequisites To enable internationalization for a realm, see the Server Administration Guide . Procedure Create the file <THEME TYPE>/messages/messages_<LOCALE>.properties in the directory of your theme. Add this file to the locales property in <THEME TYPE>/theme.properties . For a language to be available to users in the realm's login , account , and email theme types, the theme has to support the language, so you need to add your language for those theme types. For example, to add Norwegian translations to the mytheme theme create the file themes/mytheme/login/messages/messages_no.properties with the following content: usernameOrEmail=Brukernavn password=Passord If you omit a translation for messages, they will use English. Edit themes/mytheme/login/theme.properties and add: locales=en,no Add the same for the account and email theme types. To do this create themes/mytheme/account/messages/messages_no.properties and themes/mytheme/email/messages/messages_no.properties . Leaving these files empty will result in the English messages being used. Copy themes/mytheme/login/theme.properties to themes/mytheme/account/theme.properties and themes/mytheme/email/theme.properties . Add a translation for the language selector. This is done by adding a message to the English translation. To do this add the following to themes/mytheme/account/messages/messages_en.properties and themes/mytheme/login/messages/messages_en.properties : locale_no=Norsk By default, message properties files should be encoded using UTF-8.
Keycloak falls back to ISO-8859-1 handling if it can't read the contents as UTF-8. Unicode characters can be escaped as described in Java's documentation for PropertyResourceBundle . Previous versions of Keycloak supported specifying the encoding in the first line with a comment like # encoding: UTF-8 , which is no longer supported. Additional resources See Locale Selector for details on how the current locale is selected. 3.4.8. Adding custom Identity Provider icons Red Hat build of Keycloak supports adding icons for custom Identity providers, which are displayed on the login screen. Procedure Define icon classes in your login theme.properties file (for example, themes/mytheme/login/theme.properties ) with key pattern kcLogoIdP-<alias> . For an Identity Provider with an alias myProvider , you may add a line to the theme.properties file of your custom theme. For example: kcLogoIdP-myProvider = fa fa-lock All icons are available on the official website of PatternFly4. Icons for social providers are already defined in the base login theme properties ( themes/keycloak/login/theme.properties ), which you can use for inspiration. 3.4.9. Creating a custom HTML template Red Hat build of Keycloak uses Apache Freemarker templates to generate HTML and render pages. Although it is possible to create custom templates to change completely how pages are rendered, the recommendation is to leverage the built-in templates as much as possible. The reasons are: During upgrades, you might be forced to update your custom templates to get the latest updates from newer versions. Configuring CSS styles to your themes allows you to adapt the UI to match your UI design standards and guidelines. User Profile allows you to support custom user attributes and configure how they are rendered. In most cases, you won't need to change templates to adapt Red Hat build of Keycloak to your needs, but you can override individual templates in your own theme by creating <THEME TYPE>/<TEMPLATE>.ftl . Admin and account console use a single template index.ftl for rendering the application. For a list of templates in other theme types look at the theme/base/<THEME_TYPE> directory in the JAR file at USDKEYCLOAK_HOME/lib/lib/main/org.keycloak.keycloak-themes-<VERSION>.jar . Procedure Copy the template from the base theme to your own theme. Apply the modifications you need. For example, to create a custom login form for the mytheme theme, copy themes/base/login/login.ftl to themes/mytheme/login and open it in an editor. After the first line (<#import ... >), add <h1>HELLO WORLD!</h1> as shown here: <#import "template.ftl" as layout> <h1>HELLO WORLD!</h1> ... Back up the modified template. When upgrading to a new version of Red Hat build of Keycloak you may need to update your custom templates to apply changes to the original template if applicable. Additional resources See the FreeMarker Manual for details on how to edit templates. 3.4.10. Emails To edit the subject and contents for emails, for example the password recovery email, add a message bundle to the email type of your theme. There are three messages for each email. One for the subject, one for the plain text body and one for the html body. To see all emails available take a look at themes/base/email/messages/messages_en.properties . For example to change the password recovery email for the mytheme theme create themes/mytheme/email/messages/messages_en.properties with the following content: passwordResetSubject=My password recovery passwordResetBody=Reset password link: {0} passwordResetBodyHtml=<a href="{0}">Reset password</a> 3.5.
Deploying themes Themes can be deployed to Red Hat build of Keycloak by copying the theme directory to themes , or they can be deployed as an archive. During development you can copy the theme to the themes directory, but in production you may want to consider using an archive . An archive makes it simpler to have a versioned copy of the theme, especially when you have multiple instances of Red Hat build of Keycloak, for example with clustering. Procedure To deploy a theme as an archive, create a JAR archive with the theme resources. Add a file META-INF/keycloak-themes.json to the archive that lists the available themes in the archive as well as what types each theme provides. For example for the mytheme theme create mytheme.jar with the contents: META-INF/keycloak-themes.json theme/mytheme/login/theme.properties theme/mytheme/login/login.ftl theme/mytheme/login/resources/css/styles.css theme/mytheme/login/resources/img/image.png theme/mytheme/login/messages/messages_en.properties theme/mytheme/email/messages/messages_en.properties The contents of META-INF/keycloak-themes.json in this case would be: { "themes": [{ "name" : "mytheme", "types": [ "login", "email" ] }] } A single archive can contain multiple themes and each theme can support one or more types. To deploy the archive to Red Hat build of Keycloak, add it to the providers/ directory of Red Hat build of Keycloak and restart the server if it is already running. A command-line packaging sketch is shown at the end of this chapter. 3.6. Additional resources for Themes For inspiration, see the Default Themes bundled inside Red Hat build of Keycloak. Red Hat build of Keycloak Quickstarts Repository - The extension directory of the quickstarts repository contains some theme examples, which can also be used as inspiration. Theme selector By default the theme configured for the realm is used, with the exception of clients being able to override the login theme. This behavior can be changed through the Theme Selector SPI. This could be used to select different themes for desktop and mobile devices by looking at the user agent header, for example. To create a custom theme selector you need to implement ThemeSelectorProviderFactory and ThemeSelectorProvider . 3.7. Theme resources When implementing custom providers in Red Hat build of Keycloak there may often be a need to add additional templates, resources, and message bundles. The easiest way to load additional theme resources is to create a JAR with templates in theme-resources/templates , resources in theme-resources/resources , and message bundles in theme-resources/messages . If you want a more flexible way to load templates and resources, you can achieve that through the ThemeResource SPI. By implementing ThemeResourceProviderFactory and ThemeResourceProvider you can decide exactly how to load templates and resources. 3.8. Locale selector By default, the locale is selected using the DefaultLocaleSelectorProvider which implements the LocaleSelectorProvider interface. English is the default language when internationalization is disabled. With internationalization enabled, the locale is resolved according to the logic described in the Server Administration Guide . This behavior can be changed through the LocaleSelector SPI by implementing the LocaleSelectorProvider and LocaleSelectorProviderFactory . The LocaleSelectorProvider interface has a single method, resolveLocale , which must return a locale given a RealmModel and a nullable UserModel . The actual request is available from the KeycloakSession#getContext method.
Custom implementations can extend the DefaultLocaleSelectorProvider in order to reuse parts of the default behavior. For example, to ignore the Accept-Language request header, a custom implementation could extend the default provider, override its getAcceptLanguageHeaderLocale , and return a null value. As a result, the locale selection will fall back on the realm's default language. 3.9. Additional resources for Locale selector For more details on creating and deploying a custom provider, see Service Provider Interfaces .
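As a practical illustration of the archive described in the Deploying themes section, the JAR can be assembled and copied into place from a shell. This is a minimal sketch; the source directory layout is an assumption and must already contain the META-INF/keycloak-themes.json descriptor and the theme/ directory listed earlier:
USD cd mytheme-src   # contains META-INF/ and theme/
USD jar cf mytheme.jar META-INF theme
USD cp mytheme.jar /path/to/keycloak/providers/
After copying the archive, restart the server if it is already running.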
[ "bin/kc.[sh|bat] start --spi-theme-welcome-theme=custom-theme", "bin/kc.[sh|bat] start --spi-theme-static-max-age=-1 --spi-theme-cache-themes=false --spi-theme-cache-templates=false", "parent=base import=common/keycloak", "javaVersion=USD{java.version} unixHome=USD{env.HOME:Unix home not found} windowsHome=USD{env.HOMEPATH:Windows home not found}", ".login-pf body { background: DimGrey none; }", "styles=css/styles.css", "styles=css/login.css css/styles.css", "alert('Hello');", "scripts=js/script.js", "body { background-image: url('../img/image.jpg'); background-size: cover; }", "<img src=\"USD{url.resourcesPath}/img/image.jpg\" alt=\"My image description\">", "<img src=\"USD{url.resourcesUrl}/img/image.jpg\" alt=\"My image description\">", "usernameOrEmail=Your Username", "usernameOrEmail=Brukernavn password=Passord", "locales=en,no", "locale_no=Norsk", "kcLogoIdP-myProvider = fa fa-lock", "<#import \"template.ftl\" as layout> <h1>HELLO WORLD!</h1>", "passwordResetSubject=My password recovery passwordResetBody=Reset password link: {0} passwordResetBodyHtml=<a href=\"{0}\">Reset password</a>", "{ \"themes\": [{ \"name\" : \"mytheme\", \"types\": [ \"login\", \"email\" ] }] }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_developer_guide/themes
33.7.3. The Access Control Tab
33.7.3. The Access Control Tab You can change user-level access to the configured printer by clicking the Access Control tab. Add users by entering them in the text box and clicking the Add button beside it. You can then choose either to allow only that subset of users to use the printer or to deny use to those users. Figure 33.10. Access Control Tab
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/ch33s07s03
Chapter 9. Configuring the audit log policy
Chapter 9. Configuring the audit log policy You can control the amount of information that is logged to the API server audit logs by choosing the audit log policy profile to use. 9.1. About audit log policy profiles Audit log profiles define how to log requests that come to the OpenShift API server, the Kubernetes API server, and the OAuth API server. OpenShift Container Platform provides the following predefined audit policy profiles: Profile Description Default Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy. WriteRequestBodies In addition to logging metadata for all requests, logs request bodies for every write request to the API servers ( create , update , patch ). This profile has more resource overhead than the Default profile. [1] AllRequestBodies In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers ( get , list , create , update , patch ). This profile has the most resource overhead. [1] None No requests are logged; even OAuth access token requests and OAuth authorize token requests are not logged. Custom rules are ignored when this profile is set. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Sensitive resources, such as Secret , Route , and OAuthClient objects, are never logged past the metadata level. By default, OpenShift Container Platform uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage (CPU, memory, and I/O). 9.2. Configuring the audit log policy You can configure the audit log policy to use when logging requests that come to the API servers. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Update the spec.audit.profile field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: WriteRequestBodies 1 1 Set to Default , WriteRequestBodies , AllRequestBodies , or None . The default profile is Default . Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . 
If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 9.3. Configuring the audit log policy with custom rules You can configure an audit log policy that defines custom rules. You can specify multiple groups and define which profile to use for that group. These custom rules take precedence over the top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied. Important Custom rules are ignored if the top-level profile field is set to None . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Add the spec.audit.customRules field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2 1 Add one or more groups and specify the profile to use for that group. These custom rules take precedence over the top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied. 2 Set to Default , WriteRequestBodies , or AllRequestBodies . If you do not set this top-level profile field, it defaults to the Default profile. Warning Do not set the top-level profile field to None if you want to use custom rules. Custom rules are ignored if the top-level profile field is set to None . Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 9.4. Disabling audit logging You can disable audit logging for OpenShift Container Platform. When you disable audit logging, even OAuth access token requests and OAuth authorize token requests are not logged. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Set the spec.audit.profile field to None : apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: None Note You can also disable audit logging only for specific groups by specifying custom rules in the spec.audit.customRules field. 
Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12
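To confirm that a profile change has taken effect, you can sample the API server audit logs after the rollout completes. The following commands are an optional illustration; replace <node_name> with one of your control plane node names:
USD oc adm node-logs --role=master --path=kube-apiserver/   # list the available audit log files
USD oc adm node-logs <node_name> --path=kube-apiserver/audit.log
With the None profile, no new audit events should be written; with the other profiles, each line of the log is a JSON audit event.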
[ "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: None", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/audit-log-policy-config
Chapter 15. Managing storage volumes in GNOME
Chapter 15. Managing storage volumes in GNOME This section describes how you can manage storage volumes in GNOME with a virtual file system. GNOME Virtual File System (GVFS) is an extension of the virtual file system interface provided by the libraries the GNOME desktop is built on. 15.1. The GVFS system GVFS provides complete virtual file system infrastructure and handles storage in the GNOME desktop. It uses addresses for full identification based on the Uniform Resource Identifier (URI) standard, syntactically similar to URL addresses in web browsers. These addresses in the form of schema :// user @ server/path are the key information determining the kind of service. GVFS helps to mount the resources. These mounts are shared between multiple applications. Resources are tracked globally within the running desktop session, which means that even if you quit an application that triggered the mount, the mount continues to be available for any other application. Multiple applications can access the mount at the same time unless it is limited by a back end. Some protocols by design permit only a single channel. GVFS mounts removable media in the /run/media/ directory. 15.2. The format of the GVFS URI string You must form a URI string to use back end services. This string is a basic identifier used in GVFS, which carries all necessary information needed for unique identification, such as type of service, back end ID, absolute path, or user name if required. You can see this information in the Files address bar and GTK+ open or save file dialog. The following example is a very basic form of the URI string, which points to a root directory ( / ) of the File Transfer Protocol (FTP) server running at the ftp.myserver.net domain: Example 15.1. A URI string pointing to the root FTP directory Example 15.2. A URI string pointing to a text file on FTP 15.3. Mounting a storage volume in GNOME You can manually mount a local storage volume or a network share in the Files application. Procedure Open the Files application. Click Other Locations in the side bar. The window lists all connected storage volumes and all network shares that are publicly available on your local area network. If you can see the volume or network share in this list, mount it by clicking the item. If you want to connect to a different network share, use the following steps. Enter the GVFS URI string to the network share in the Connect to Server field. Press Connect . If the dialog asks you for login credentials, enter your name and password into the relevant fields. When the mounting process finishes, you can browse the files on the volume or network share. 15.4. Unmounting a storage volume in GNOME You can unmount a storage volume, a network share, or another resource in the Files application. Procedure Open the Files application. In the side bar, click the Unmount (⏏) icon to the chosen mount. Wait until the mount disappears from the side bar or a notification about the safe removal appears. 15.5. Access to GVFS mounts in the file system Learn more about FUSE, the main daemon for the GVFS virtual file system. Applications built with the GIO library can access GVFS mounts. In addition, GVFS provides a FUSE daemon which exposes active GVFS mounts. Any application can access active GVFS mounts using the standard POSIX APIs as though mounts were regular file systems. In certain applications, additional library dependency and new virtual file system (VFS) subsystem specifics might be unsuitable or too complicated. 
For such reasons and to boost compatibility, GVFS provides a File System in Userspace (FUSE) daemon, which exposes active mounts through its mount for standard Portable Operating System Interface (POSIX) access. This daemon transparently translates incoming requests to imitate a local file system for applications. Important You might experience difficulties with certain combinations of applications and GVFS back ends. The FUSE daemon starts automatically with the main gvfs daemon and mounts volumes either in the /run/user/ UID /gvfs/ or ~/.gvfs/ directories as a fallback. Manual browsing shows individual directories for each GVFS mount. The system passes the transformed path as an argument when you are opening documents from GVFS locations with non-native applications. Note that native GIO applications automatically translate this path back to a native URI. 15.6. Available GIO commands GIO provides several commands that might be useful for scripting or testing. The following commands are counterparts of common POSIX commands: Command Description gio cat Displays the content of a file. gio mkdir Creates a new directory. gio rename Renames a file. gio mount Provides access to various aspects of the gio mounting functionality. gio set Sets a file attribute on a file. gio copy Makes a copy of a file. gio list Lists directory contents. gio move Moves a file from one location to another. gio remove Removes a file. gio trash Sends files or directories to the Trashcan . This can be a different folder depending on where the file is located, and not all file systems support this concept. In the common case that the file lives inside a user's home directory, the trash folder is USDXDG_DATA_HOME/Trash . gio info Displays information about the given locations. gio save Reads from standard input and saves the data to the given location. gio tree Lists the contents of the given locations recursively, in a tree-like format. If no location is given, it defaults to the current directory. The following additional commands provide more control of GIO specifics: Command Description gio monitor Monitors files or directories for changes, such as creation, deletion, content and attribute changes, and mount and unmount operations affecting the monitored locations. gio mime Lists the registered and recommended applications for the mimetype if no handler is given; otherwise, the given handler is set as the default handler for the mimetype. gio open Opens files with the default application that is registered to handle files of this type. Note For user convenience, bash completion is provided as a part of the package. All these commands are native GIO clients; there is no need for the fallback FUSE daemon to be running. Their purpose is not to be drop-in replacements for POSIX commands; in fact, only a very limited range of switches is supported. In their basic form, these commands take a URI string as an argument instead of a local path. Additional resources gio(1) man page on your system 15.7. Sample GIO commands The following section provides a few examples of GIO command usage. Example 15.3. List all files in the local /tmp directory gio list file:///tmp Example 15.4. List the content of a text file from a remote system Example 15.5. Copy the text file to a local /tmp directory Additional resources gio man page on your system 15.8. Overview of GVFS metadata The GVFS metadata storage is implemented as a set of key-and-value pairs that bind information to a particular file.
This gives users and applications a way to store small pieces of runtime information, such as icon position, last-played location, position in a document, emblems, notes, and so on. Whenever you move a file or directory, GVFS moves the metadata accordingly so that metadata stays connected to the respective file. The GVFS stores all metadata privately, so metadata is available only on the machine. However, GVFS tracks mounts and removable media as well. Note GVFS mounts removable media in the /run/media/ directory. To view and manipulate metadata, you can use: the gio info command, the gio set command, or any other native GIO way of working with attributes. Additional resources gio man page on your system 15.9. Setting custom GIO metadata attribute This procedure describes how to set a custom metadata attribute. Notice the differences between particular gio info calls and data persistence after a move or rename. Note the gio info command output. Procedure Create an empty file: USD touch /tmp/myfile View the metadata of this file: USD gio info -a 'metadata::*' /tmp/myfile Set a string to this file: USD gio set -t string /tmp/myfile 'metadata::mynote' 'Please remember to delete this file!' View the metadata: USD gio info -a 'metadata::*' /tmp/myfile Move this file to a new location: USD gio move /tmp/myfile /tmp/newfile View the metadata: USD gio info -a 'metadata::*' /tmp/newfile The metadata persists when you move the file using the GIO API. Additional resources gio man page on your system 15.10. Password management of GVFS mounts Learn more about the GVFS mount authentication. A typical GVFS mount authenticates on its activation unless the resource allows anonymous authentication or does not require any authentication at all. In a standard GTK+ dialog, you can choose whether to store the password. When you select the persistent storage, the password is stored in the user keyring. GNOME Keyring is a central place for secrets storage. The password is encrypted and automatically unlocked on desktop session start using the password provided on login. To protect it with a different password, you can set the password on first use. The Passwords and Keys application helps to manage the stored password and GNOME Keyring . It allows removing individual records or changing passwords. 15.11. GVFS back ends Back ends in GVFS provide access to a specific type of resource. This section provides a list of available GVFS back ends and their specifications. Note Some back ends are packaged separately and not installed by default. For installing additional back ends, use the yum package manager. Table 15.1. Available back ends Back end Description afc Similar to Media Transfer Protocol (MTP), exposes files on your Apple iDevice connected through USB. afp An Apple Filing Protocol (AFP) client to access file services of macOS and the original Mac operating system. archive Handles various archiving files (ZIP, TAR) in a read-only way. admin Provides administrator access to the local file system. burn A virtual back end that burning applications use as temporary storage for new CD, DVD, or BD medium content. cdda Exposes Audio CD through separate Waveform Audio File Format (WAV) files. computer A virtual back end consolidating active mounts and physical volumes. Acts similarly to a signpost. Previously used by Files for its Computer view. dav , davs A WebDAV client, including secure variant. Authentication is possible only during mount. The back end does not support later re-authentication on per-folder basis. dns-sd DNS Service Discovery: An Avahi client, used during network browsing, forms persistent URIs to discovered services. ftp A fully featured File Transfer Protocol (FTP) client. Supports passive transfers by default.
Also handles secure mode over the ftps (explicit mode) and ftpis (implicit mode) schemes. gphoto2 A Picture Transfer Protocol (PTP) client to access your camera attached by USB or FireWire. google Provides access to Google Drive. The Google Drive account needs to be configured in the Online Accounts settings. http Handles all HTTP requests. Useful for easily downloading files from the web in client applications. locatest A simple testing back end that proxies the file:// URI. The back end supports error injection. mtp A Media Transfer Protocol (MTP) back end for accessing media player and smartphone memory. network Allows you to browse the Windows Network and shows shares discovered over Avahi. recent A back end used in the file chooser dialog to list recent files used by GNOME applications. sftp A fully-featured SSH File Transfer Protocol (SFTP) client. smb Accesses Samba and Windows shares. trash A trash back end that allows you to restore deleted files.
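Because the gio clients described earlier in this chapter are ordinary command-line tools, they are convenient to drive from scripts when working with back ends such as sftp. The following Python sketch is only an illustration of that idea: it assumes the gio binary from the gvfs packages is on the PATH, and the sftp host, user, and file names are hypothetical placeholders rather than values taken from this chapter.

import subprocess

# Minimal sketch: driving the gio command-line clients from a script.
# Assumptions (not from this chapter): the gio binary is on the PATH and the
# sftp location below is a placeholder that you would replace with your own.
REMOTE = "sftp://joe@ftp.myserver.net/home/joe"   # hypothetical location

def gio(*args):
    """Run a gio subcommand and return its stdout, raising on failure."""
    result = subprocess.run(["gio", *args], check=True, capture_output=True, text=True)
    return result.stdout

# Mount the location first; gio mount may prompt for credentials interactively.
subprocess.run(["gio", "mount", REMOTE], check=False)

# List the remote directory and copy one file locally, mirroring the
# gio list and gio copy examples earlier in this chapter.
print(gio("list", REMOTE))
gio("copy", REMOTE + "/todo.txt", "/tmp/")
print(gio("cat", "/tmp/todo.txt"))

Because these are native GIO clients, the script works without the fallback FUSE daemon running.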
[ "ftp://ftp.myserver.net/", "ssh://[email protected]/home/joe/todo.txt", "gio list file:///tmp", "gio cat ssh://[email protected]/home/joe/todo.txt", "gio copy ssh://[email protected]/home/joe/todo.txt /tmp/", "touch /tmp/myfile", "gio info -a 'metadata::*' /tmp/myfile uri: file:///tmp/myfile attributes:", "gio set -t string /tmp/myfile 'metadata::mynote' 'Please remember to delete this file!'", "gio info -a 'metadata::*' /tmp/myfile uri: file:///tmp/myfile attributes: metadata::mynote: Please remember to delete this file!", "gio move /tmp/myfile /tmp/newfile", "gio info -a 'metadata::*' /tmp/newfile uri: file:///tmp/newfile attributes: metadata::mynote: Please remember to delete this file!" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/managing-storage-volumes-in-gnome_using-the-desktop-environment-in-rhel-8
Chapter 3. Connectivity Link technologies and patterns
Chapter 3. Connectivity Link technologies and patterns The main technologies and patterns provided by Connectivity Link include the following: Gateway API Gateways play an essential role in application connectivity and security. In Kubernetes-based environments, Gateway API is the new standard for deploying ingress Gateways and managing application networking. Gateway API provides standardized APIs for ingress traffic management and support for multiple protocols. Gateway API is user persona role-oriented by design, and provides configuration flexibility and portability. You can use Gateway API to set up ingress policies on each OpenShift cluster to be identical, consistent, and implemented with minimum effort. Figure 3.1. Gateway API user persona-based design Policy-based configuration By using Connectivity Link policies defined as Kubernetes custom resource definitions (CRDs), platform engineers and application developers can easily secure, protect, and connect their applications and infrastructure. Connectivity Link provides policies for managing TLS, authentication and authorization, rate limiting, and DNS. The policy attachment pattern provides a way to add behavior to an network object by using configuration that cannot be described in the object spec field. Policy attachments also provide the concept of defaults and overrides, which allow different roles to operate with policy APIs at different levels of the object hierarchy. These policies are then merged with specific rules and strategies to form an effective policy. The following simple example of a rate limiting policy configures a specified limit of 5 requests per 10 seconds for every listener defined in the target Gateway that does not have its own rate limiting policy defined: Simple rate limiting policy example apiVersion: kuadrant.io/v1beta2 kind: RateLimitPolicy metadata: name: gw-rlp spec: targetRef: # Specifies Gateway API policy attachment group: gateway.networking.k8s.io kind: Gateway name: external defaults: # Means it can be overridden limits: # Limitador component configuration "global": rates: - limit: 5 window: 10s WebAssembly plug-in Unlike other connectivity management systems, Connectivity Link is not a standalone Gateway. Connectivity Link is a WebAssembly (WASM) plug-in, which is developed for the Envoy proxy. This means that users of OpenShift Service Mesh, Istio, or Envoy for ingress do not require major changes to their existing ingress objects and policies to begin using Connectivity Link. The WebAssembly plug-in design also means that Connectivity Link is lightweight, fast, hardware independent, non-intrusive, and secure. Multicluster configuration mirroring Connectivity Link uses multicluster configuration mirroring across multicloud and hybrid cloud environments to ensure that you can bring your routing, configuration, and policies wherever they are required. You must no longer set different policies in different ways based on the cloud service provider, and you can bring your own policies instead. You can also ensure that your development, test, and production environments are set up in the same way to prevent surprises later. In this way, Connectivity Link provides consistency, simplicity, unified experience, global administration, and security compliance. Figure 3.2. 
Multicluster configuration mirroring across multicloud and hybrid cloud environments API controller 1.0 Developer Preview Connectivity Link introduces an API controller 1.0 Developer Preview component that provides a next-generation approach to API management, and which extends beyond traditional API management capabilities provided by other products. API management requires connectivity, and Connectivity Link provides scalable multicluster and multi-Gateway connectivity management, along with API management features such as API design, API registry, API observability, authentication, and rate limiting. For more details, see the Getting Started with API controller 1.0 Developer Preview . Important Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. For more information, see Red Hat Developer Preview - Scope of Support . Figure 3.3. Connectivity Link API management and connectivity Additional resources Solution Patterns: Connect, Secure, and Protect with Red Hat Connectivity Link .
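To make the RateLimitPolicy example in this chapter more concrete, the following toy Python sketch imitates the "5 requests per 10 seconds" fixed-window behavior that the policy expresses. It is not how Limitador or the WASM plug-in is implemented, and the class and variable names are invented purely for illustration.

import time
from collections import defaultdict
from typing import Optional

class FixedWindowLimiter:
    """Toy fixed-window counter: allow at most `limit` requests per `window` seconds.
    Not the Limitador implementation; names and structure are invented for illustration."""

    def __init__(self, limit: int = 5, window: int = 10):
        self.limit = limit
        self.window = window
        self.counters = defaultdict(int)   # window start -> requests seen

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        window_start = int(now // self.window) * self.window
        if self.counters[window_start] >= self.limit:
            return False                   # over the limit for this window
        self.counters[window_start] += 1
        return True

limiter = FixedWindowLimiter(limit=5, window=10)
print([limiter.allow(now=100.0 + i) for i in range(7)])
# -> [True, True, True, True, True, False, False]: the sixth and seventh
#    requests in the same 10-second window are rejected.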
[ "apiVersion: kuadrant.io/v1beta2 kind: RateLimitPolicy metadata: name: gw-rlp spec: targetRef: # Specifies Gateway API policy attachment group: gateway.networking.k8s.io kind: Gateway name: external defaults: # Means it can be overridden limits: # Limitador component configuration \"global\": rates: - limit: 5 window: 10s" ]
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/introduction_to_connectivity_link/connectivity-link-patterns_rhcl
Chapter 27. Use Red Hat JBoss Data Grid with Amazon Web Services
Chapter 27. Use Red Hat JBoss Data Grid with Amazon Web Services 27.1. The S3_PING JGroups Discovery Protocol S3_PING is a discovery protocol that is ideal for use with Amazon's Elastic Compute Cloud (EC2), because EC2 does not allow multicast and multicast-based discovery protocols such as MPING therefore cannot be used. Each EC2 instance adds a small file to an S3 data container, known as a bucket. Each instance then reads the files in the bucket to discover the other members of the cluster.
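The following Python sketch only illustrates the add-a-file and read-the-files mechanism described above; it is not JGroups configuration and not how S3_PING is enabled in JBoss Data Grid. It assumes the boto3 library and valid AWS credentials, and the bucket name, prefix, and member address are hypothetical placeholders.

import boto3

# Illustration only: each member announces itself with a small object in a
# bucket, then lists the bucket to discover the other members. The bucket,
# prefix, and address below are hypothetical placeholders.
BUCKET = "example-jgroups-discovery"
PREFIX = "mycluster/"
MY_ADDRESS = "10.0.0.12:7800"

s3 = boto3.client("s3")

# Announce this member by adding a small file keyed by its address.
s3.put_object(Bucket=BUCKET, Key=PREFIX + MY_ADDRESS, Body=MY_ADDRESS.encode())

# Discover the other members by reading the files under the cluster prefix.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
members = [obj["Key"][len(PREFIX):] for obj in response.get("Contents", [])]
print("cluster members:", members)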
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-use_red_hat_data_grid_with_amazon_web_services
Chapter 5. Changes in Rust 1.75.0 Toolset
Chapter 5. Changes in Rust 1.75.0 Toolset Rust Toolset has been updated from version 1.71.1 to 1.75.0 on RHEL 8 and RHEL 9. Notable changes include: Constant evaluation time is now unlimited Cleaner panic messages Cargo registry authentication async fn and opaque return types in traits For detailed information regarding the updates, see the series of upstream release announcements: Announcing Rust 1.72.0 . Announcing Rust 1.73.0 . Announcing Rust 1.74.0 . Announcing Rust 1.75.0 .
null
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.75.0_toolset/assembly_changes-in-rust-toolset
Appendix A. Automation execution environments precedence
Appendix A. Automation execution environments precedence Project updates always use the control plane automation execution environment by default. Jobs, however, use the first available automation execution environment, in the following order: The execution_environment defined on the template (job template or inventory source) that created the job. The default_environment defined on the project that the job uses. The default_environment defined on the organization of the job. The default_environment defined on the organization of the inventory the job uses. The current DEFAULT_EXECUTION_ENVIRONMENT setting (configurable at api/v2/settings/jobs/ ). Any image from the GLOBAL_JOB_EXECUTION_ENVIRONMENTS setting. Any other global execution environment. Note If more than one execution environment fits the criteria (this applies to items 6 and 7), the most recently created one is used.
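The following Python sketch illustrates the "first non-empty source wins" logic behind the ordering listed above. It is not automation controller code; the dictionaries and field names are simplified placeholders chosen only for illustration.

def resolve_execution_environment(template, project, job_org, inventory_org,
                                  default_setting, global_job_ees, global_ees):
    # Illustrative only: field names are simplified placeholders, not the
    # automation controller data model. For items 6 and 7, the lists are
    # assumed to be sorted newest first, mirroring the note above.
    candidates = [
        template.get("execution_environment"),      # 1. template
        project.get("default_environment"),         # 2. project
        job_org.get("default_environment"),         # 3. organization of the job
        inventory_org.get("default_environment"),   # 4. organization of the inventory
        default_setting,                            # 5. DEFAULT_EXECUTION_ENVIRONMENT
        next(iter(global_job_ees), None),           # 6. GLOBAL_JOB_EXECUTION_ENVIRONMENTS
        next(iter(global_ees), None),               # 7. any other global execution environment
    ]
    return next((ee for ee in candidates if ee), None)

# Example: the template does not pin an execution environment, so the
# project default wins over everything below it in the list.
print(resolve_execution_environment(
    template={}, project={"default_environment": "custom-ee:latest"},
    job_org={}, inventory_org={}, default_setting=None,
    global_job_ees=[], global_ees=["control-plane-ee:latest"],
))   # -> custom-ee:latest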
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/creating_and_consuming_execution_environments/con-ee-precedence
Chapter 50. JSLT
Chapter 50. JSLT Since Camel 3.1 Only producer is supported The JSLT component allows you to process a JSON messages using an JSLT expression. This can be ideal when doing JSON to JSON transformation or querying data. 50.1. Dependencies When using jslt with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jslt-starter</artifactId> </dependency> 50.2. URI format Where specName is the classpath-local URI of the specification to invoke; or the complete URL of the remote specification (eg: file://folder/myfile.vm ). 50.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 50.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 50.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 50.4. Component Options The JSLT component supports 5 options, which are listed below. Name Description Default Type allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean functions (advanced) JSLT can be extended by plugging in functions written in Java. Collection objectFilter (advanced) JSLT can be extended by plugging in a custom jslt object filter. JsonFilter 50.4.1. Endpoint Options The JSLT endpoint is configured using URI syntax: with the following path and query parameters: 50.4.1.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 50.4.1.2. Query Parameters (7 parameters) Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean contentCache (producer) Sets whether to use resource content cache or not. false boolean mapBigDecimalAsFloats (producer) If true, the mapper will use the USE_BIG_DECIMAL_FOR_FLOATS in serialization features. false boolean objectMapper (producer) Setting a custom JSON Object Mapper to be used. ObjectMapper prettyPrint (common) If true, JSON in output message is pretty printed. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 50.5. Message Headers The JSLT component supports 2 message header(s), which is/are listed below: Name Description Default Type CamelJsltString (producer) Constant: HEADER_JSLT_STRING The JSLT Template as String. String CamelJsltResourceUri (producer) Constant: HEADER_JSLT_RESOURCE_URI The resource URI. String 50.6. Passing values to JSLT Camel can supply exchange information as variables when applying a JSLT expression on the body. The available variables from the Exchange are: name value headers The headers of the In message as a json object exchange.properties The Exchange properties as a json object. exchange is the name of the variable and properties is the path to the exchange properties. Available if allowContextMapAll option is true. All the values that cannot be converted to json with Jackson are denied and will not be available in the jslt expression. 
For example, the header named "type" and the exchange property "instance" can be accessed like { "type": USDheaders.type, "instance": USDexchange.properties.instance } 50.7. Samples The sample example is as given below. from("activemq:My.Queue"). to("jslt:com/acme/MyResponse.json"); And a file based resource: from("activemq:My.Queue"). to("jslt:file://myfolder/MyResponse.json?contentCache=true"). to("activemq:Another.Queue"); You can also specify which JSLT expression the component should use dynamically via a header, so for example: from("direct:in"). setHeader("CamelJsltResourceUri").constant("path/to/my/spec.json"). to("jslt:dummy?allowTemplateFromHeader=true"); Or send whole jslt expression via header: (suitable for querying) from("direct:in"). setHeader("CamelJsltString").constant(".published"). to("jslt:dummy?allowTemplateFromHeader=true"); Passing exchange properties to the jslt expression can be done like this from("direct:in"). to("jslt:com/acme/MyResponse.json?allowContextMapAll=true"); 50.8. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.jslt.allow-template-from-header Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false Boolean camel.component.jslt.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jslt.enabled Whether to enable auto configuration of the jslt component. This is enabled by default. Boolean camel.component.jslt.functions JSLT can be extended by plugging in functions written in Java. Collection camel.component.jslt.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jslt.object-filter JSLT can be extended by plugging in a custom jslt object filter. The option is a com.schibsted.spt.data.jslt.filters.JsonFilter type. JsonFilter
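As a plain-Python illustration of the variables table in the section on passing values to JSLT, the following sketch shows the kind of output the expression that maps the message header type and the exchange property instance produces for a sample exchange. It is not the JSLT engine, and the header and property values are invented for the example.

# Variables the component exposes to the expression (see the table above);
# the values here are invented for the example.
headers = {"type": "order"}                      # message headers
exchange_properties = {"instance": "prod-1"}     # available when allowContextMapAll=true

# Plain-Python equivalent of evaluating the expression against those variables.
result = {
    "type": headers["type"],
    "instance": exchange_properties["instance"],
}
print(result)   # {'type': 'order', 'instance': 'prod-1'}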
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jslt-starter</artifactId> </dependency>", "jslt:specName[?options]", "jslt:resourceUri", "{ \"type\": USDheaders.type, \"instance\": USDexchange.properties.instance }", "from(\"activemq:My.Queue\"). to(\"jslt:com/acme/MyResponse.json\");", "from(\"activemq:My.Queue\"). to(\"jslt:file://myfolder/MyResponse.json?contentCache=true\"). to(\"activemq:Another.Queue\");", "from(\"direct:in\"). setHeader(\"CamelJsltResourceUri\").constant(\"path/to/my/spec.json\"). to(\"jslt:dummy?allowTemplateFromHeader=true\");", "from(\"direct:in\"). setHeader(\"CamelJsltString\").constant(\".published\"). to(\"jslt:dummy?allowTemplateFromHeader=true\");", "from(\"direct:in\"). to(\"jslt:com/acme/MyResponse.json?allowContextMapAll=true\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jslt-component-starter
Chapter 11. Post-deployment operations to manage the Red Hat Ceph Storage cluster
Chapter 11. Post-deployment operations to manage the Red Hat Ceph Storage cluster After you deploy your Red Hat OpenStack Platform (RHOSP) environment with containerized Red Hat Ceph Storage, there are some operations you can use to manage the Ceph Storage cluster. 11.1. Disabling configuration overrides After the Ceph Storage cluster is initially deployed, the cluster is configured to allow the setup of services such as RGW during the overcloud deployment. Once overcloud deployment is complete, director should not be used to make changes to the cluster configuration unless you are scaling up the cluster. Cluster configuration changes should be performed using Ceph commands. Procedure Log in to the undercloud node as the stack user. Open the file deployed_ceph.yaml or the file you use in your environment to define the Ceph Storage cluster configuration. Locate the ApplyCephConfigOverridesOnUpdate parameter. Change the ApplyCephConfigOverridesOnUpdate parameter value to false . Save the file. Additional resources For more information on the ApplyCephConfigOverridesOnUpdate and CephConfigOverrides parameters, see Overcloud parameters . 11.2. Accessing the overcloud Director generates a script to configure and help authenticate interactions with your overcloud from the undercloud. Director saves this file, overcloudrc , in the home directory of the stack user. Procedure Run the following command to source the file: This loads the necessary environment variables to interact with your overcloud from the undercloud CLI. To return to interacting with the undercloud, run the following command: 11.3. Monitoring Red Hat Ceph Storage nodes After you create the overcloud, check the status of the Ceph cluster to confirm that it works correctly. Procedure Log in to a Controller node as the tripleo-admin user: Check the health of the Ceph cluster: If the Ceph cluster has no issues, the command reports back HEALTH_OK . This means the Ceph cluster is safe to use. Log in to an overcloud node that runs the Ceph monitor service and check the status of all OSDs in the Ceph cluster: Check the status of the Ceph Monitor quorum: This shows the monitors participating in the quorum and which one is the leader. Verify that all Ceph OSDs are running: For more information on monitoring Ceph clusters, see Monitoring a Ceph Storage cluster in the Red Hat Ceph Storage Administration Guide . 11.4. Mapping a Block Storage (cinder) type to your new Ceph pool After you complete the configuration steps, make the performance tiers feature available to RHOSP tenants by using Block Storage (cinder) to create a type that is mapped to the fastpool tier that you created. Procedure Log in to the undercloud node as the stack user. Source the overcloudrc file: Check the Block Storage volume existing types: Create the new Block Storage volume fast_tier: Check that the Block Storage type is created: When the fast_tier Block Storage type is available, set the fastpool as the Block Storage volume back end for the new tier that you created: Use the new tier to create new volumes: Note The Red Hat Ceph Storage documentation provides additional information and procedures for the ongoing maintenance and operation of the Ceph Storage cluster. See Product Documentation for Red Hat Ceph Storage .
[ "source ~/overcloudrc", "source ~/stackrc", "nova list ssh [email protected]", "sudo cephadm shell -- ceph health", "sudo cephadm shell -- ceph osd tree", "sudo cephadm shell -- ceph quorum_status", "sudo cephadm shell -- ceph osd stat", "source overcloudrc", "cinder type-list", "cinder type-create fast_tier", "cinder type-list", "cinder type-key fast_tier set volume_backend_name=tripleo_ceph_fastpool", "cinder create 1 --volume-type fast_tier --name fastdisk" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_post-deployment-operations-to-manage-the-ceph-storage-cluster
Chapter 1. Red Hat Insights compliance service overview
Chapter 1. Red Hat Insights compliance service overview The Red Hat Insights for Red Hat Enterprise Linux compliance service enables IT security and compliance administrators to assess, monitor, and report on the security-policy compliance of RHEL systems. The compliance service provides a simple but powerful user interface, enabling the creation, configuration, and management of SCAP security policies. With the filtering and context-adding features built in, IT security administrators can easily identify and manage security compliance issues in the RHEL infrastructure. This documentation describes some of the functionality of the compliance service, to help users understand reporting, manage issues, and get the maximum value from the service. You can also create Ansible Playbooks to resolve security compliance issues and share reports with stakeholders to communicate compliance status. Additional Resources Generating Compliance Service Reports with FedRAMP 1.1. Requirements and prerequisites The compliance service is part of Red Hat Insights for Red Hat Enterprise Linux, which is included with your Red Hat Enterprise Linux (RHEL) subscription and can be used with all versions of RHEL currently supported by Red Hat. You do not need additional Red Hat subscriptions to use Insights for Red Hat Enterprise Linux and the compliance service. 1.2. Supported configurations Red Hat supports specific versions of the SCAP Security Guide (SSG) for each minor version of Red Hat Enterprise Linux (RHEL). The rules and policies in an SSG version are only accurate for one RHEL minor version. In order to receive accurate compliance reporting, the system must have the supported SSG version installed. Red Hat Enterprise Linux minor versions ship and upgrade with the supported SSG version included. However, some organizations may decide to continue using an earlier version temporarily, prior to upgrading. If a policy includes systems using unsupported SSG versions, an unsupported warning, preceded by the number of affected systems, is visible to the policy in Security > Compliance > Reports . Note For more information about which versions of the SCAP Security Guide are supported in RHEL, refer to Insights Compliance - Supported configurations . Example of a compliance policy with a system running an unsupported version of SSG 1.2.1. Frequently asked questions about the compliance service How do I interpret the SSG package name? Packages names look like this: scap-security-guide-0.1.43-13.el7 . The SSG version in this case is 0.1.43; the release is 13 and architecture is el7. The release number can differ from the version number shown in the table; however, the version number must match as indicated below for it to be a supported configuration. What if Red Hat supports more than one SSG for my RHEL minor version? When more than one SSG version is supported for a RHELminor version, as is the case with RHEL 7.9 and RHEL 8.1, the compliance service will use the latest available version. Why is my old policy no longer supported by SSG? As RHEL minor versions get older, fewer SCAP profiles are supported. To view which SCAP profiles are supported, refer to Insights Compliance - Supported configurations . More about limitations of unsupported configurations The following conditions apply to the results for unsupported configurations: These results are a "best-guess" effort because using any SSG version other than what is supported by Red Hat can lead to inaccurate results. 
Important Although you can still see results for a system with an unsupported version of SSG installed, those results may be considered inaccurate for compliance reporting purposes. Results for systems using an unsupported version of SSG are not included in the overall compliance assessment for the policy. Remediations are not available for rules on systems with an unsupported version of SSG installed. 1.3. Best practices To benefit from the best user experience and receive the most accurate results in the compliance service, Red Hat recommends that you follow some best practices. Ensure that the RHEL OS system minor version is visible to the Insights client If the compliance service cannot see your RHEL OS minor version, then the supported SCAP Security Guide version cannot be validated and your reporting may not be accurate. The Insights client allows users to redact certain data, including Red Hat Enterprise Linux OS minor version, from the data payload that is uploaded to Red Hat Insights for Red Hat Enterprise Linux. This will prohibit accurate compliance service reporting. To learn more about data redaction, see the following documentation: Red Hat Insights client data redaction . Create security policies within the compliance service Creating your organization's security policies within the compliance service allows you to: Associate many systems with the policy. Use the supported SCAP Security Guide for your RHEL minor version. Edit which rules are included, based on your organization's requirements. 1.4. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. All users on your account have access to most of the data in Insights for Red Hat Enterprise Linux. 1.4.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.4.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 1.4.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. 
The roles in this group usually include administrator in their name. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. 1.4.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. Additional resources For more information about user access and permissions, see User Access Configuration Guide for Role-based Access Control (RBAC) with FedRAMP . 1.4.3. User Access roles for compliance-service users The following roles enable standard or enhanced access to compliance features in Insights for Red Hat Enterprise Linux: Compliance viewer. A compliance-service role that grants read access to any compliance resource. Compliance administrator. A compliance-service role that grants full access to any compliance resource. If a procedure requires that you be granted the Compliance administrator role or other enhanced permissions, it will be noted in the Prerequisites for that procedure.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_policy_compliance_of_rhel_systems_with_fedramp/intro-compliance
10.2. Setting up a Cache
10.2. Setting up a Cache Currently, Red Hat Enterprise Linux 7 only provides the cachefiles caching back end. The cachefilesd daemon initiates and manages cachefiles . The /etc/cachefilesd.conf file controls how cachefiles provides caching services. The first setting to configure in a cache back end is which directory to use as a cache. To configure this, use the following parameter: Typically, the cache back end directory is set in /etc/cachefilesd.conf as /var/cache/fscache , as in: If you want to change the cache back end directory, the selinux context must be same as /var/cache/fscache : Replace /path/to/cache with the directory name while setting up cache. Note If the given commands for setting selinux context did not work, use the following commands: FS-Cache will store the cache in the file system that hosts /path/to/cache . On a laptop, it is advisable to use the root file system ( / ) as the host file system, but for a desktop machine it would be more prudent to mount a disk partition specifically for the cache. File systems that support functionalities required by FS-Cache cache back end include the Red Hat Enterprise Linux 7 implementations of the following file systems: ext3 (with extended attributes enabled) ext4 Btrfs XFS The host file system must support user-defined extended attributes; FS-Cache uses these attributes to store coherency maintenance information. To enable user-defined extended attributes for ext3 file systems (i.e. device ), use: Alternatively, extended attributes for a file system can be enabled at mount time, as in: The cache back end works by maintaining a certain amount of free space on the partition hosting the cache. It grows and shrinks the cache in response to other elements of the system using up free space, making it safe to use on the root file system (for example, on a laptop). FS-Cache sets defaults on this behavior, which can be configured via cache cull limits . For more information about configuring cache cull limits, refer to Section 10.4, "Setting Cache Cull Limits" . Once the configuration file is in place, start up the cachefilesd service: To configure cachefilesd to start at boot time, execute the following command as root:
[ "dir /path/to/cache", "dir /var/cache/fscache", "semanage fcontext -a -e /var/cache/fscache /path/to/cache # restorecon -Rv /path/to/cache", "semanage permissive -a cachefilesd_t # semanage permissive -a cachefiles_kernel_t", "tune2fs -o user_xattr /dev/ device", "mount /dev/ device /path/to/cache -o user_xattr", "systemctl start cachefilesd", "systemctl enable cachefilesd" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/fscachesetup
Chapter 4. Setting up IdM Replicas
Chapter 4. Setting up IdM Replicas Replicas are essentially clones of existing Identity Management servers, and they share identical core configuration. The replica installation process, then, has two major parts: copying the existing, required server configuration and then installing the replica based on that information. 4.1. Planning the Server/Replica Topologies In the IdM domain, there are three types of machines: Servers, which manage all of the services used by domain members Replicas, which are essentially copies of servers (and, once copied, are identical to servers) Clients, which belong to the Kerberos domains, receive certificates and tickets issued by the servers, and use other centralized services for authentication and authorization A replica is a clone of a specific IdM server. The server and replica share the same internal information about users, machines, certificates, and configured policies. These data are copied from the server to the replica in a process called replication . The two Directory Server instances used by an IdM server - the Directory Server instance used by the IdM server as a data store and the Directory Server instance used by the Dogtag Certificate System to store certificate information - are replicated over to corresponding consumer Directory Server instances used by the IdM replica. The different Directory Server instances recognize each other through replication agreements . An initial replication agreement is created between a master server and replica when the replica is created; additional agreements can be added to other servers or replicas using the ipa-replica-manage command. Figure 4.1. Server and Replica Agreements Once they are installed, replicas are functionally identical to servers. There are some guidelines with multi-master replication which place restrictions on the overall server/replica topology. No more than four replication agreements can be configured on a single server/replica. No more than 20 servers and replicas should be involved in a single Identity Management domain. Every server/replica should have a minimum of two replication agreements to ensure that there are no orphan servers or replicas cut out of the IdM domain if another server fails. One of the most resilient topologies is to create a cell configuration for the servers/replicas, where there are a small number of servers in a cell which all have replication agreements with each other (a tight cell), and then each server has one replication agreement with another server outside the cell, loosely coupling that cell to every other cell in the overall domain. Figure 4.2. Example Topology There are some recommendations on how to accomplish this easily: Have at least one IdM server in each main office, data center, or locality. Preferably, have two IdM servers. Do not have more than four servers per data center. Rather than using a server or replica, small offices can use SSSD to cache credentials and use an off-site IdM server as its data backend.
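When planning a topology against the guidelines above, it can help to sanity-check the planned replication agreements before creating them. The following Python sketch is only an illustration for planning purposes, not an IdM tool; the host names are hypothetical.

agreements = {                      # server -> servers it replicates with (hypothetical hosts)
    "srv1": ["srv2", "srv3"],
    "srv2": ["srv1", "srv3"],
    "srv3": ["srv1", "srv2", "srv4"],
    "srv4": ["srv3"],               # only one agreement: orphaned if srv3 fails
}

problems = []
if len(agreements) > 20:
    problems.append("more than 20 servers/replicas in the domain")
for server, peers in agreements.items():
    if len(peers) > 4:
        problems.append(server + " has more than four replication agreements")
    if len(peers) < 2:
        problems.append(server + " has fewer than two agreements and may become orphaned")

print(problems or "topology follows the guidelines")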
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/setting_up_ipa_replicas
probe::ioscheduler_trace.elv_requeue_request
probe::ioscheduler_trace.elv_requeue_request Name probe::ioscheduler_trace.elv_requeue_request - Fires when a request is put back on the queue. Synopsis ioscheduler_trace.elv_requeue_request Values rq Address of the request. name Name of the probe point. elevator_name The type of I/O elevator currently enabled. rq_flags Request flags. disk_minor Disk minor number of the request. disk_major Disk major number of the request. Description This probe fires when a request is put back on the queue because the hardware cannot accept more requests.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ioscheduler-trace-elv-requeue-request
Chapter 9. Managing Directory Quotas
Chapter 9. Managing Directory Quotas Warning Quota is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support Quota in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Quotas allow you to set limits on the disk space used by a directory. Storage administrators can control the disk space utilization at the directory and volume levels. This is particularly useful in cloud deployments to facilitate the use of utility billing models. 9.1. Enabling and Disabling Quotas To limit disk usage, you need to enable quota usage on a volume by running the following command: This command only enables quota behavior on the volume; it does not set any default disk usage limits. Note On a gluster volume with quota enabled, CPU and memory consumption increases based on various factors, for example, the complexity of the file system tree, the number of bricks and nodes in the pool, the number of quota limits placed across the file system, and the frequency of quota traversals across the file system. To disable quota behavior on a volume, including any set disk usage limits, run the following command: Important When you disable quotas on Red Hat Gluster Storage 3.1.1 and earlier, all previously configured limits are removed from the volume by a cleanup process, quota-remove-xattr.sh . If you re-enable quotas while the cleanup process is still running, the extended attributes that enable quotas may be removed by the cleanup process. This has negative effects on quota accounting.
[ "gluster volume quota VOLNAME enable", "gluster volume quota VOLNAME disable" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_directory_quotas
7. Technology Previews
7. Technology Previews Technology Preview features are currently not supported under Red Hat Enterprise Linux 4.8 subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide these features with wider exposure. Customers may find these features useful in a non-production environment. Customers are also free to provide feedback and functionality suggestions for a technology preview feature before it becomes fully supported. Errata will be provided for high-severity security issues. During the development of a technology preview feature, additional components may become available to the public for testing. It is the intention of Red Hat to fully support technology preview features in a future release. For more information on the scope of Technology Previews in Red Hat Enterprise Linux, see the Technology Preview Features Support Scope page on the Red Hat website. OpenOffice 2.0 OpenOffice 2.0 is now included in this release as a Technology Preview. This suite features several improvements, including ODF and PDF functionality, support for digital signatures, and greater compatibility with other office suites in terms of format and interface. In addition, the OpenOffice 2.0 spreadsheet has enhanced pivot table support and can now handle up to 65,000 rows. For more information about OpenOffice 2.0 , please refer to http://www.openoffice.org/dev_docs/features/2.0/index.html .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/4.8_release_notes/ar01s07
Chapter 4. Technology preview
Chapter 4. Technology preview This section describes Technology Preview features in AMQ Broker 7.8. Important Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them for production. For more information, see Red Hat Technology Preview Features Support Scope . AMQP server connections In AMQ Broker 7.8, a broker can initiate connections to other endpoints using the AMQP protocol. This means, for example, that the broker can connect to other AMQP servers (not necessarily instances of AMQ Broker) and create elements on those connections. For more information, see Broker Connections in the Apache ActiveMQ Artemis documentation. Viewing brokers in Fuse Console You can configure an Operator-based broker deployment to use Fuse Console for OpenShift instead of AMQ Management Console. When you have configured your broker deployment appropriately, Fuse Console discovers the brokers and displays them on a dedicated Artemis tab. For more information, see Viewing brokers in Fuse Console in Deploying AMQ Broker on OpenShift . Note Viewing brokers in Fuse Console is a Technology Preview feature for Fuse 7.8 .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_red_hat_amq_broker_7.8/tech_preview
Appendix D. Host parameter hierarchy
Appendix D. Host parameter hierarchy You can access host parameters when provisioning hosts. Hosts inherit their parameters from the following locations, in order of increasing precedence: Parameter Level Set in Satellite web UI Globally defined parameters Configure > Global parameters Organization-level parameters Administer > Organizations Location-level parameters Administer > Locations Domain-level parameters Infrastructure > Domains Subnet-level parameters Infrastructure > Subnets Operating system-level parameters Hosts > Provisioning Setup > Operating Systems Host group-level parameters Configure > Host Groups Host parameters Hosts > All Hosts
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/host_parameter_hierarchy_provisioning
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/integrating_openstack_identity_with_external_user_management_services/making-open-source-more-inclusive
Chapter 7. Uploading files to your bucket
Chapter 7. Uploading files to your bucket To upload files to your bucket from your workbench, use the upload_file() method. Prerequisites You have cloned the odh-doc-examples repository to your workbench. You have opened the s3client_examples.ipynb file in your workbench. You have installed Boto3 and configured an S3 client. You have imported the files that you want to upload to your object store to your workbench. Procedure In the notebook, locate the instructions to upload files to a bucket. Replace <file_name> , <bucket_name> and <object_name> with your own values, as shown in the example, and then run the code cell. Verification Locate the following instructions to list files in a bucket: Replace <bucket_name> with the name of your bucket, as shown in the example, and then run the code cell. The file that you uploaded is displayed in the output.
[ "#Upload file to bucket #Replace <file_name>, <bucket_name>, and <object_name> with your values. #<file_name>: Name of the file to upload. This must include the full local path to the file on your notebook. #<bucket_name>: The name of the bucket to upload the file to. #<object_name>: The full key to use to save the file to the bucket. s3_client.upload_file('<file_name>', '<bucket_name>', '<object_name>')", "s3_client.upload_file('image-973-series123.csv', 'aqs973-image-registry', '/tmp/image-973-series124.csv')", "#Upload Verification for key in s3_client.list_objects_v2(Bucket='<bucket_name>')['Contents']: print(key['Key'])", "#Upload Verification for key in s3_client.list_objects_v2(Bucket='aqs973-image-registry')['Contents']: print(key['Key'])" ]
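Building on the upload example above, the following sketch uploads every file in a local directory and reports failures per file. It is only an illustration: the directory path and bucket name are placeholders, and it assumes the s3_client that you configured earlier in the notebook.

import os
from boto3.exceptions import S3UploadFailedError
from botocore.exceptions import ClientError

# Placeholders: replace with your own local directory and bucket; s3_client is
# the client configured earlier in this notebook.
local_dir = "/tmp/exports"
bucket = "aqs973-image-registry"

for name in sorted(os.listdir(local_dir)):
    path = os.path.join(local_dir, name)
    if not os.path.isfile(path):
        continue
    try:
        s3_client.upload_file(path, bucket, "uploads/" + name)
        print("uploaded " + name)
    except (ClientError, S3UploadFailedError) as err:
        print("failed to upload " + name + ": " + str(err))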
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/uploading-files-to-available-amazon-s3-buckets-using-notebook-cells_s3
Chapter 77. Kubernetes Persistent Volume
Chapter 77. Kubernetes Persistent Volume Since Camel 2.17 Only producer is supported The Kubernetes Persistent Volume component is one of the Kubernetes Components which provides a producer to execute Kubernetes Persistent Volume operations. 77.1. Dependencies When using kubernetes-persistent-volumes with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 77.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 77.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 77.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 77.3. Component Options The Kubernetes Persistent Volume component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 77.4. 
Endpoint Options The Kubernetes Persistent Volume endpoint is configured using URI syntax: with the following path and query parameters: 77.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 77.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 77.5. Message Headers The Kubernetes Persistent Volume component supports 3 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesPersistentVolumesLabels (producer) Constant: KUBERNETES_PERSISTENT_VOLUMES_LABELS The persistent volume labels. Map CamelKubernetesPersistentVolumeName (producer) Constant: KUBERNETES_PERSISTENT_VOLUME_NAME The persistent volume name. String 77.6. Supported producer operation listPersistentVolumes listPersistentVolumesByLabels getPersistentVolume 77.7. Kubernetes Persistent Volumes Producer Examples listPersistentVolumes: this operation lists the pv on a kubernetes cluster. from("direct:list"). toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumes"). to("mock:result"); This operation returns a List of pv from your cluster. 
listPersistentVolumesByLabels: this operation lists the pv by labels on a kubernetes cluster from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_LABELS, labels); } }); toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesByLabels"). to("mock:result"); This operation returns a List of pv from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 77.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
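As a quick orientation for the table above, these options map directly to application.properties keys. The following two lines are an illustrative sketch only and simply restate the documented defaults; they are not a required configuration:
camel.component.kubernetes-persistent-volumes.enabled=true
camel.component.kubernetes-persistent-volumes.lazy-start-producer=false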
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-persistent-volumes:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumes\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_LABELS, labels); } }); toF(\"kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-persistent-volume-component-starter
Chapter 5. Installing and uninstalling JBoss EAP using archive installation method
Chapter 5. Installing and uninstalling JBoss EAP using archive installation method 5.1. Downloading and installing JBoss EAP using archive installation method You can download and install JBoss EAP using the archive installation method on Red Hat Enterprise Linux (RHEL) and on Microsoft Windows Server. Prerequisites You are on a supported operating system. You have set up an account on the Red Hat Customer Portal. You have installed a supported Java Development Kit (JDK). You have reviewed the JBoss EAP 8 supported configurations and have ensured that your system is supported. Microsoft Windows only: You have set the JAVA_HOME and PATH environment variables. Note If you do not set up the environment variables, the shortcuts will not work. Procedure Log in to the Red Hat Customer Portal. Click Downloads . Select Red Hat JBoss Enterprise Application Platform from the Product Downloads list. In the Version drop-down list, select 8.0 . Find Red Hat JBoss Enterprise Application Platform 8.0.0 in the list and click the Download link. Optional: Move the archive file to the server and location where you want to install JBoss EAP. Depending on your operating system, choose one of the following options: For Red Hat Enterprise Linux, extract the archive file by entering the following command in a terminal: $ unzip jboss-eap-8.0.0.zip For Windows Server, right-click the archive file and select Extract All . Note EAP_HOME refers to the top-level directory of the JBoss EAP installation, which is created when you extract the archive file. Additional resources For more information about configuring your JBoss EAP server, see our configuration guide. 5.2. Uninstalling JBoss EAP archive installation Depending on your work environment, the archive installation method might not meet your needs. You can remove the instance of JBoss EAP and any services associated with it. Thereafter, you can install JBoss EAP using a suitable installation method. Prerequisites You have archived any modified configuration files and deployments that you can reuse at a later stage. Procedure Delete the installation directory to uninstall JBoss EAP. Delete any scripts that depend on JBoss EAP being installed on your machine.
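For example, on RHEL the uninstall procedure amounts to removing the extracted directory; the path below is an assumed installation location rather than a value taken from this guide, and no JBoss EAP server processes should be running from it when you delete it:
$ rm -rf /opt/jboss-eap-8.0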
[ "unzip jboss-eap-8.0.0.zip" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_enterprise_application_platform_installation_methods/assembly_installing-jboss-eap-using-archive-installer_default
15.6.2. Useful Websites
15.6.2. Useful Websites http://vsftpd.beasts.org/ - The vsftpd project page is a great place to locate the latest documentation and to contact the author of the software. http://slacksite.com/other/ftp.html - This website provides a concise explanation of the differences between active and passive mode FTP. http://war.jgaa.com/ftp/?cmd=rfc - A comprehensive list of Request for Comments ( RFCs ) related to the FTP protocol.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ftp-useful-websites
Chapter 46. l2gw
Chapter 46. l2gw This chapter describes the commands under the l2gw command. 46.1. l2gw connection create Create l2gateway-connection Usage: Table 46.1. Positional arguments Value Summary <GATEWAY-NAME/UUID> Descriptive name for logical gateway. <NETWORK-NAME/UUID> Network name or uuid. Table 46.2. Command arguments Value Summary -h, --help Show this help message and exit --default-segmentation-id SEG_ID Default segmentation-id that will be applied to the interfaces for which segmentation id was not specified in l2-gateway-create command. Table 46.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.2. l2gw connection delete Delete a given l2gateway-connection Usage: Table 46.7. Positional arguments Value Summary <L2_GATEWAY_CONNECTIONS> Id(s) of l2_gateway_connections(s) to delete. Table 46.8. Command arguments Value Summary -h, --help Show this help message and exit 46.3. l2gw connection list List l2gateway-connections Usage: Table 46.9. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 46.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 46.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 46.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.4. l2gw connection show Show information of a given l2gateway-connection Usage: Table 46.14. Positional arguments Value Summary <L2_GATEWAY_CONNECTION> Id of l2_gateway_connection to look up. Table 46.15. Command arguments Value Summary -h, --help Show this help message and exit Table 46.16. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.5. l2gw create Create l2gateway resource Usage: Table 46.20. Positional arguments Value Summary <GATEWAY-NAME> Descriptive name for logical gateway. Table 46.21. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --device name=name,interface_names=INTERFACE-DETAILS Device name and interface-names of l2gateway. INTERFACE-DETAILS is of form "<interface_name1>;[<inte rface_name2>][|<seg_id1>[#<seg_id2>]]" (--device option can be repeated) Table 46.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.6. l2gw delete Delete a given l2gateway Usage: Table 46.26. Positional arguments Value Summary <L2_GATEWAY> Id(s) or name(s) of l2_gateway to delete. Table 46.27. Command arguments Value Summary -h, --help Show this help message and exit 46.7. l2gw list List l2gateway that belongs to a given tenant Usage: Table 46.28. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 46.29. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 46.30. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 46.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.8. l2gw show Show information of a given l2gateway Usage: Table 46.33. Positional arguments Value Summary <L2_GATEWAY> Id or name of l2_gateway to look up. Table 46.34. Command arguments Value Summary -h, --help Show this help message and exit Table 46.35. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.37. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.9. l2gw update Update a given l2gateway Usage: Table 46.39. Positional arguments Value Summary <L2_GATEWAY> Id or name of l2_gateway to update. Table 46.40. Command arguments Value Summary -h, --help Show this help message and exit --name name Descriptive name for logical gateway. --device name=name,interface_names=INTERFACE-DETAILS Device name and interface-names of l2gateway. INTERFACE-DETAILS is of form "<interface_name1>;[<inte rface_name2>][|<seg_id1>[#<seg_id2>]]" (--device option can be repeated) Table 46.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
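The INTERFACE-DETAILS value accepted by the --device argument of the l2gw create and l2gw update commands is easiest to read with a worked example. The command below is a hedged sketch: the gateway name, device name, interface names, and segmentation IDs are illustrative assumptions, and the quotes only protect the ; , | and # characters from the shell:
openstack l2gw create --device name=tor-switch-1,interface_names="eth0;eth1|100#200" gateway1
Here eth0 and eth1 are the gateway interfaces and 100 and 200 are segmentation IDs, following the documented format <interface_name1>;[<interface_name2>][|<seg_id1>[#<seg_id2>]].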
[ "openstack l2gw connection create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--default-segmentation-id SEG_ID] <GATEWAY-NAME/UUID> <NETWORK-NAME/UUID>", "openstack l2gw connection delete [-h] <L2_GATEWAY_CONNECTIONS> [<L2_GATEWAY_CONNECTIONS> ...]", "openstack l2gw connection list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>]", "openstack l2gw connection show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <L2_GATEWAY_CONNECTION>", "openstack l2gw create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project <project>] [--project-domain <project-domain>] [--device name=name,interface_names=INTERFACE-DETAILS] <GATEWAY-NAME>", "openstack l2gw delete [-h] <L2_GATEWAY> [<L2_GATEWAY> ...]", "openstack l2gw list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>]", "openstack l2gw show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <L2_GATEWAY>", "openstack l2gw update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name name] [--device name=name,interface_names=INTERFACE-DETAILS] <L2_GATEWAY>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/l2gw
Chapter 8. Configuring Red Hat High Availability Manually
Chapter 8. Configuring Red Hat High Availability Manually This chapter describes how to configure Red Hat High Availability Add-On software by directly editing the cluster configuration file ( /etc/cluster/cluster.conf ) and using command-line tools. The chapter provides procedures about building a configuration file one section at a time, starting with a sample file provided in the chapter. As an alternative to starting with a sample file provided here, you could copy a skeleton configuration file from the cluster.conf man page. However, doing so would not necessarily align with information provided in subsequent procedures in this chapter. There are other ways to create and configure a cluster configuration file; keep in mind that the procedures in this chapter are just a starting point for developing a configuration file to suit your clustering needs. This chapter consists of the following sections: Section 8.1, "Configuration Tasks" Section 8.2, "Creating a Basic Cluster Configuration File" Section 8.3, "Configuring Fencing" Section 8.4, "Configuring Failover Domains" Section 8.5, "Configuring HA Services" Section 8.7, "Configuring Debug Options" Section 8.6, "Configuring Redundant Ring Protocol" Section 8.9, "Verifying a Configuration" Important Make sure that your deployment of High Availability Add-On meets your needs and can be supported. Consult with an authorized Red Hat representative to verify your configuration prior to deployment. In addition, allow time for a configuration burn-in period to test failure modes. Important This chapter references commonly used cluster.conf elements and attributes. For a comprehensive list and description of cluster.conf elements and attributes, see the cluster schema at /usr/share/cluster/cluster.rng , and the annotated schema at /usr/share/doc/cman-X.Y.ZZ/cluster_conf.html (for example /usr/share/doc/cman-3.0.12/cluster_conf.html ). Important Certain procedures in this chapter call for using the cman_tool version -r command to propagate a cluster configuration throughout a cluster. Using that command requires that ricci is running. Using ricci requires a password the first time you interact with ricci from any specific machine. For information on the ricci service, refer to Section 3.13, "Considerations for ricci " . Note Procedures in this chapter may include specific commands for some of the command-line tools listed in Appendix E, Command Line Tools Summary . For more information about all commands and variables, see the man page for each command-line tool. 8.1. Configuration Tasks Configuring Red Hat High Availability Add-On software with command-line tools consists of the following steps: Creating a cluster. Refer to Section 8.2, "Creating a Basic Cluster Configuration File" . Configuring fencing. Refer to Section 8.3, "Configuring Fencing" . Configuring failover domains. Refer to Section 8.4, "Configuring Failover Domains" . Configuring HA services. Refer to Section 8.5, "Configuring HA Services" . Verifying a configuration. Refer to Section 8.9, "Verifying a Configuration" .
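As a brief illustration of the workflow detailed in the sections that follow, a manual edit cycle typically ends with validating and propagating the file. The two commands below are a sketch that assumes you have already incremented the config_version attribute in /etc/cluster/cluster.conf and that the ricci service is running, as noted above:
ccs_config_validate
cman_tool version -r
The first command checks the edited file against the cluster schema, and the second propagates the updated configuration throughout the cluster; see Section 8.9, "Verifying a Configuration" for more detail on validation.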
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-config-cli-ca
14.12.6.2. Downloading the contents from a storage volume
14.12.6.2. Downloading the contents from a storage volume This command downloads the contents of a storage volume into local-file . It requires a --pool pool-or-uuid option, which is the name or UUID of the storage pool that the volume is in. It also requires vol-name-or-key-or-path , which is the name, key, or path of the volume to download. The --offset option dictates the position in the storage volume at which to start reading the data. --length length dictates an upper limit for the amount of data to be downloaded.
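For example, the following invocation (the pool name default , the volume name guest1.img , and the byte values are illustrative assumptions) downloads the first mebibyte of a volume into a local file:
# virsh vol-download --pool default --offset 0 --length 1048576 guest1.img /tmp/guest1-head.img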
[ "vol-download --pool pool-or-uuid --offset bytes --length bytes vol-name-or-key-or-path local-file" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-uploading_and_downloading_storage_volumes-downloading_the_contents_from_a_storage_volume
Chapter 1. About Observability
Chapter 1. About Observability Red Hat OpenShift Observability provides real-time visibility, monitoring, and analysis of various system metrics, logs, traces, and events to help users quickly diagnose and troubleshoot issues before they impact systems or applications. To help ensure the reliability, performance, and security of your applications and infrastructure, OpenShift Container Platform offers the following Observability components: Monitoring Logging Distributed tracing Red Hat build of OpenTelemetry Network Observability Power monitoring Red Hat OpenShift Observability connects open-source observability tools and technologies to create a unified Observability solution. The components of Red Hat OpenShift Observability work together to help you collect, store, deliver, analyze, and visualize data. Note With the exception of monitoring, Red Hat OpenShift Observability components have distinct release cycles separate from the core OpenShift Container Platform release cycles. See the Red Hat OpenShift Operator Life Cycles page for their release compatibility. 1.1. Monitoring Monitor the in-cluster health and performance of your applications running on OpenShift Container Platform with metrics and customized alerts for CPU and memory usage, network connectivity, and other resource usage. Monitoring stack components are deployed and managed by the Cluster Monitoring Operator. Monitoring stack components are deployed by default in every OpenShift Container Platform installation and are managed by the Cluster Monitoring Operator (CMO). These components include Prometheus, Alertmanager, Thanos Querier, and others. The CMO also deploys the Telemeter Client, which sends a subset of data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. For more information, see About OpenShift Container Platform monitoring and About remote health monitoring . 1.2. Logging Collect, visualize, forward, and store log data to troubleshoot issues, identify performance bottlenecks, and detect security threats. In logging 5.7 and later versions, users can configure the LokiStack deployment to produce customized alerts and recorded metrics. 1.3. Distributed tracing Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use it for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications. For more information, see Distributed tracing architecture . 1.4. Red Hat build of OpenTelemetry Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Use open-source back ends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate. For more information, see Red Hat build of OpenTelemetry . 1.5. Network Observability Observe the network traffic for OpenShift Container Platform clusters and create network flows with the Network Observability Operator. View and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. For more information, see Network Observability overview . 1.6. 
Power monitoring Monitor the power usage of workloads and identify the most power-consuming namespaces running in a cluster with key power consumption metrics, such as CPU or DRAM measured at the container level. Visualize energy-related system statistics with the Power monitoring Operator. For more information, see Power monitoring overview .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/observability_overview/observability-overview
Chapter 10. Authenticating in the API
Chapter 10. Authenticating in the API You can use the following authentication methods in the API: Session authentication Basic authentication OAuth 2 token authentication Single sign-on authentication Automation controller is designed for organizations to centralize and control their automation with a visual dashboard for out-of-the box control while providing a REST API to integrate with your other tools on a deeper level. Automation controller supports several authentication methods to make it easy to embed automation controller into existing tools and processes. This ensures that the right people can access its resources. 10.1. Using session authentication You can use session authentication when logging in directly to the automation controller's API or UI to manually create resources, such as inventories, projects, and job templates and start jobs in the browser. With this method, you can remain logged in for a prolonged period of time, not just for that HTTP request. For example, when browsing the API or UI in a browser such as Chrome or Mozilla Firefox. When a user logs in, a session cookie is created, this means that they can remain logged in when navigating to different pages within automation controller. The following image represents the communication that occurs between the client and server in a session: Use the curl tool to see the activity that occurs when you log in to automation controller. Procedure Use GET to go to the /api/login/ endpoint to get the csrftoken cookie: curl -k -c - https://<controller-host>/api/login/ localhost FALSE / FALSE 0 csrftoken AswSFn5p1qQvaX4KoRZN6A5yer0Pq0VG2cXMTzZnzuhaY0L4tiidYqwf5PXZckuj POST to the /api/login/ endpoint with username, password, and X-CSRFToken=<token-value> : curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \ --referer https://<awx-host>/api/login/ \ -H 'X-CSRFToken: K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' \ --data 'username=root&password=reverse' \ --cookie 'csrftoken=K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' \ https://<awx-host>/api/login/ -k -D - -o /dev/null All of this is done by automation controller when you log in to the UI or API in the browser, and you must only use it when authenticating in the browser. For programmatic integration with automation controller, see OAuth2 token authentication . The following is an example of a typical response: Server: nginx Date: <current date> Content-Type: text/html; charset=utf-8 Content-Length: 0 Connection: keep-alive Location: /accounts/profile/ X-API-Session-Cookie-Name: awx_sessionid Expires: <date> Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private Vary: Cookie, Accept-Language, Origin Session-Timeout: 1800 Content-Language: en X-API-Total-Time: 0.377s X-API-Request-Id: 700826696425433fb0c8807cd40c00a0 Access-Control-Expose-Headers: X-API-Request-Id Set-Cookie: userLoggedIn=true; Path=/ Set-Cookie: current_user=<user cookie data>; Path=/ Set-Cookie: csrftoken=<csrftoken>; Path=/; SameSite=Lax Set-Cookie: awx_sessionid=<your session id>; expires=<date>; HttpOnly; Max-Age=1800; Path=/; SameSite=Lax Strict-Transport-Security: max-age=15768000 When a user is successfully authenticated with this method, the server responds with a header called X-API-Session-Cookie-Name , indicating the configured name of the session cookie. The default value is awx_session_id which you can see later in the Set-Cookie headers. 
Note You can change the session expiration time by specifying it in the SESSION_COOKIE_AGE parameter. 10.2. Basic authentication Basic authentication is stateless; therefore, you must send the base64-encoded username and password along with each request through the Authorization header. You can use this for API calls from curl requests, Python scripts, or individual requests to the API. We recommend OAuth 2 Token Authentication for accessing the API whenever possible. The following is an example of Basic authentication with curl: # the --user flag adds this Authorization header for us curl -X GET --user 'user:password' https://<controller-host>/api/v2/credentials -k -L Additional resources For more information about Basic authentication, see The 'Basic' HTTP Authentication Scheme . Disabling Basic authentication You can disable Basic authentication for security purposes. Procedure From the navigation panel, select Settings Platform gateway . Click Edit platform gateway settings . Disable the option Gateway basic auth enabled . 10.3. OAuth 2 token authentication OAuth (Open Authorization) is an open standard for token-based authentication and authorization. OAuth 2 authentication is commonly used when interacting with the automation controller API programmatically. Similar to Basic authentication, you supply an OAuth 2 token with each API request through the Authorization header. Unlike Basic authentication, OAuth 2 tokens have a configurable timeout and are scopable. Tokens have a configurable expiration time and can be easily revoked for one user or for the entire automation controller system by an administrator if needed. You can do this with the revoke_oauth2_tokens management command, or by using the API as explained in Revoke an access token . The different methods for obtaining OAuth2 access tokens in automation controller include the following: Personal access tokens (PAT) Application token: Password grant type Application token: Implicit grant type Application token: Authorization Code grant type A user needs to create an OAuth 2 token in the API or in the Access Management OAuth Applications tab of the platform gateway UI. For more information about creating tokens through the UI, see Adding tokens . For the purpose of this example, use the PAT method for creating a token in the API. After you create it, you can set the scope. Note You can configure the expiration time of the token system-wide. For more information, see Configuring access to external applications with token-based authentication . Token authentication is best used for any programmatic use of the automation controller API, such as Python scripts or tools such as curl. Curl example Create a personal access token by making a POST request to the /api/v2/tokens/ endpoint: curl -u user:password -k -X POST https://<controller-host>/api/v2/tokens/ This call returns JSON data that includes the token value. You can use the value of the token property to perform a GET request for an automation controller resource, such as Hosts: curl -k -X GET \ -H "Content-Type: application/json" -H "Authorization: Bearer <oauth2-token-value>" \ https://<controller-host>/api/v2/hosts/ You can also run a job by making a POST to the job template that you want to start: curl -k -X POST \ -H "Authorization: Bearer <oauth2-token-value>" \ -H "Content-Type: application/json" \ --data '{"limit" : "ansible"}' \ https://<controller-host>/api/v2/job_templates/14/launch/ Python example awxkit is an open source tool that makes it easy to use HTTP requests to access the automation controller API. You can have awxkit get a PAT on your behalf by using the awxkit login command.
For more information, see AWX Command Line Interface . If you need to write custom requests, you can write a Python script by using the Python requests library, such as the following example: import json import requests oauth2_token_value = 'y1Q8ye4hPvT61aQq63Da6N1C25jiA' # your token value from controller url = 'https://<controller-host>/api/v2/users/' payload = {} headers = {'Authorization': 'Bearer ' + oauth2_token_value,} # makes request to controller user endpoint response = requests.request('GET', url, headers=headers, data=payload, allow_redirects=False, verify=False) # prints json returned from controller with formatting print(json.dumps(response.json(), indent=4, sort_keys=True)) Additional resources For more information about obtaining OAuth2 access tokens and how to use OAuth 2 in the context of external applications, see Configuring access to external applications with token-based authentication . Enabling external users to create OAuth 2 tokens By default, external users such as those created by single sign-on are not able to generate OAuth tokens for security purposes. Procedure From the navigation panel, select Settings Platform gateway . Select Edit platform gateway settings . Enable the option to Allow external users to create OAuth2 tokens . 10.4. Single sign-on authentication Single sign-on (SSO) authentication methods are different from other methods because the authentication of the user happens external to automation controller, such as Google SSO, Microsoft Azure SSO, SAML, or GitHub. For example, with GitHub SSO, GitHub is the single source of truth, which verifies your identity based on the username and password you gave automation controller. You can configure SSO authentication by using automation controller inside a large organization with a central Identity Provider. Once you have configured an SSO method in automation controller, an option for that SSO is available on the login screen. If you click that option, it redirects you to the Identity Provider, in this case GitHub, where you present your credentials. If the Identity Provider verifies you successfully, automation controller creates a user linked to your GitHub user (if this is your first time logging in with this SSO method), and logs you in. Additional resources For the various types of supported SSO authentication methods, see Configuring an authentication type .
[ "curl -k -c - https://<controller-host>/api/login/ localhost FALSE / FALSE 0 csrftoken AswSFn5p1qQvaX4KoRZN6A5yer0Pq0VG2cXMTzZnzuhaY0L4tiidYqwf5PXZckuj", "curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' --referer https://<awx-host>/api/login/ -H 'X-CSRFToken: K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' --data 'username=root&password=reverse' --cookie 'csrftoken=K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' https://<awx-host>/api/login/ -k -D - -o /dev/null", "Server: nginx Date: <current date> Content-Type: text/html; charset=utf-8 Content-Length: 0 Connection: keep-alive Location: /accounts/profile/ X-API-Session-Cookie-Name: awx_sessionid Expires: <date> Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private Vary: Cookie, Accept-Language, Origin Session-Timeout: 1800 Content-Language: en X-API-Total-Time: 0.377s X-API-Request-Id: 700826696425433fb0c8807cd40c00a0 Access-Control-Expose-Headers: X-API-Request-Id Set-Cookie: userLoggedIn=true; Path=/ Set-Cookie: current_user=<user cookie data>; Path=/ Set-Cookie: csrftoken=<csrftoken>; Path=/; SameSite=Lax Set-Cookie: awx_sessionid=<your session id>; expires=<date>; HttpOnly; Max-Age=1800; Path=/; SameSite=Lax Strict-Transport-Security: max-age=15768000", "the --user flag adds this Authorization header for us curl -X GET --user 'user:password' https://<controller-host>/api/v2/credentials -k -L", "curl -u user:password -k -X POST https://<controller-host>/api/v2/tokens/", "curl -k -X POST -H \"Content-Type: application/json\" -H \"Authorization: Bearer <oauth2-token-value>\" https://<controller-host>/api/v2/hosts/", "curl -k -X POST -H \"Authorization: Bearer <oauth2-token-value>\" -H \"Content-Type: application/json\" --data '{\"limit\" : \"ansible\"}' https://<controller-host>/api/v2/job_templates/14/launch/", "import requests oauth2_token_value = 'y1Q8ye4hPvT61aQq63Da6N1C25jiA' # your token value from controller url = 'https://<controller-host>/api/v2/users/' payload = {} headers = {'Authorization': 'Bearer ' + oauth2_token_value,} makes request to controller user endpoint response = requests.request('GET', url, headers=headers, data=payload, allow_redirects=False, verify=False) prints json returned from controller with formatting print(json.dumps(response.json(), indent=4, sort_keys=True))" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_execution_api_overview/controller-api-auth-methods
probe::nfs.proc.rename_done
probe::nfs.proc.rename_done Name probe::nfs.proc.rename_done - NFS client response to a rename RPC task Synopsis nfs.proc.rename_done Values timestamp V4 timestamp, which is used for lease renewal status result of last operation server_ip IP address of server prot transfer protocol version NFS version old_fh file handle of old parent dir new_fh file handle of new parent dir Description Fires when a reply to a rename RPC task is received or some rename error occurs (timeout or socket shutdown).
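A quick way to see this probe fire is to run a one-line SystemTap script from the shell and then rename a file on an NFS mount. This is a minimal sketch under the assumption that the nfs tapset is installed; the exact set of printable fields and their formats can vary with the kernel and tapset version.
# print the probe name, NFS version, and operation status each time the probe fires
stap -e 'probe nfs.proc.rename_done { printf("%s: version=%d status=%d\n", pn(), version, status) }'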
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-proc-rename-done
Chapter 4. Adding user preferences
Chapter 4. Adding user preferences You can change the default preferences for your profile to meet your requirements. You can set your default project, topology view (graph or list), editing medium (form or YAML), language preferences, and resource type. The changes made to the user preferences are automatically saved. 4.1. Setting user preferences You can set the default user preferences for your cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Use the masthead to access the user preferences under the user profile. In the General section: In the Theme field, you can set the theme that you want to work in. The console defaults to the selected theme each time you log in. In the Perspective field, you can set the default perspective you want to be logged in to. You can select the Administrator or the Developer perspective as required. If a perspective is not selected, you are logged into the perspective you last visited. In the Project field, select a project you want to work in. The console defaults to that project every time you log in. In the Topology field, you can set the topology view to default to the graph or list view. If not selected, the console defaults to the last view you used. In the Create/Edit resource method field, you can set a preference for creating or editing a resource. If both the form and YAML options are available, the console defaults to your selection. In the Language section, select Default browser language to use the default browser language settings. Otherwise, select the language that you want to use for the console. In the Notifications section, you can toggle the display of notifications created by users for specific projects on the Overview page or in the notification drawer. In the Applications section: You can view the default Resource type . For example, if the OpenShift Serverless Operator is installed, the default resource type is Serverless Deployment . Otherwise, the default resource type is Deployment . You can select another resource type to be the default resource type from the Resource Type field.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/web_console/adding-user-preferences
Chapter 56. Installing and running the headless Process Automation Manager controller with Oracle WebLogic Server
Chapter 56. Installing and running the headless Process Automation Manager controller with Oracle WebLogic Server To use the KIE Server REST API or Java Client API to interact with KIE Server, install the headless Process Automation Manager controller with Oracle WebLogic Server. The headless Process Automation Manager controller manages KIE Server configuration in a centralized way so that you can use the headless Process Automation Manager controller to create and maintain containers and perform other server-level tasks. Prerequisites The Oracle WebLogic Server instance is configured as described in Chapter 54, Configuring Oracle WebLogic Server for KIE Server . KIE Server is installed on the Oracle WebLogic Server instance. You have sufficient user permissions to complete the installation. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Add-Ons . Extract the downloaded rhpam-7.13.5-add-ons.zip file to a temporary directory. In the WebLogic Administration Console, navigate to Security Realms Users and Groups . In the kie-server group that you created previously, create a user for the headless Process Automation Manager controller, such as controller , enter a password for this new user, and click OK . For more information about creating groups and users, see Section 54.1, "Configuring the KIE Server group and users" . Navigate to Deployments to view all existing applications. Click Install . Navigate to the temporary directory where you downloaded and extracted the rhpam-7.13.5-add-ons.zip file, and go to rhpam-7.13.5-add-ons/rhpam-7.13.5-controller-ee7.zip/controller.war . Select the controller.war file and click Next to continue. Select Install this deployment as an application as the targeting style and click Next . Keep the application name as controller and set the security model to DD Only . Leave the remaining options as default and click Next to continue. In the Additional Configuration section, choose No, I will review the configuration later and click Finish . 56.1. Setting system properties for the headless Process Automation Manager controller After you install the headless Process Automation Manager controller, set the system properties listed in this section on your application server or servers to enable proper interaction with the headless Process Automation Manager controller. Note For optimal results, install KIE Server and the headless Process Automation Manager controller on different servers in production environments. In development environments, you can install KIE Server and the headless Process Automation Manager controller on the same server. In either case, be sure to make these property changes on all application servers where the headless Process Automation Manager controller is installed. Prerequisites KIE Server and the headless Process Automation Manager controller are installed on the application server instance. Procedure Specify the following JVM property values on the application server instance where the headless Process Automation Manager controller is installed: Table 56.1.
Required properties for the headless Process Automation Manager controller Name Requirement org.kie.server.user A user with the kie-server role org.kie.server.pwd The password for the user specified in the org.kie.server.user property Specify the following JVM property values on the application server instance where KIE Server is installed: Table 56.2. Required properties for KIE Server when headless Process Automation Manager controller is installed Name Requirement org.kie.server.controller.user A user with the kie-server role org.kie.server.controller.pwd The password for the user specified for the org.kie.server.controller.user property org.kie.server.id The ID or name of the KIE Server installation, such as rhdm700-decision-server-1 org.kie.server.location The URL of KIE Server, http://<HOST>:<PORT>/kie-server/services/rest/server org.kie.server.controller The URL of the headless Process Automation Manager controller, http://<HOST>:<PORT>/controller/rest/controller <HOST> is the ID or name of the KIE Server host, for example, localhost or 192.7.8.9 . <PORT> is the port number of the KIE Server host, for example, 7001 . 56.2. Verifying the installation After you install the headless Process Automation Manager controller and define the required system properties and role requirements on the application server, verify that the headless Process Automation Manager controller works correctly. Prerequisites KIE Server and the headless Process Automation Manager controller are installed on the application server instance. You have set all required system properties and role requirements for the headless Process Automation Manager controller on the application server. Procedure In your command terminal, enter the following command to verify that the headless Process Automation Manager controller is working: curl -X GET "http://<HOST>:<PORT>/controller/rest/controller/management/servers" -H "accept: application/xml" -u '<CONTROLLER>:<CONTROLLER_PWD>' <HOST> is the ID or name of the headless Process Automation Manager controller host, for example, localhost or 192.7.8.9 . <PORT> is the port number of the headless Process Automation Manager controller host, for example, 7001 . <CONTROLLER> and <CONTROLLER_PWD> are the user credentials that you created in this section. The command should return information about the KIE Server instance. Note Alternatively, you can use the KIE Server Java API Client to access the headless Process Automation Manager controller. If the headless Process Automation Manager controller is not running, stop and restart the application server instance and try again to access the headless Process Automation Manager controller URL or API.
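On WebLogic, JVM system properties such as these are typically passed as -D options in the server start arguments, for example through JAVA_OPTIONS in DOMAIN_HOME/bin/setDomainEnv.sh or in the server start settings of the Administration Console. The following is a hedged sketch only: the file location, placeholder user names, and passwords are assumptions and must match the users you created in the kie-server group.
# on the server running the headless Process Automation Manager controller
JAVA_OPTIONS="$JAVA_OPTIONS -Dorg.kie.server.user=<kie_server_user> -Dorg.kie.server.pwd=<kie_server_pwd>"
# on the server running KIE Server
JAVA_OPTIONS="$JAVA_OPTIONS -Dorg.kie.server.id=rhdm700-decision-server-1 \
 -Dorg.kie.server.location=http://<HOST>:<PORT>/kie-server/services/rest/server \
 -Dorg.kie.server.controller=http://<HOST>:<PORT>/controller/rest/controller \
 -Dorg.kie.server.controller.user=controller -Dorg.kie.server.controller.pwd=<controller_pwd>"
export JAVA_OPTIONS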
[ "curl -X GET \"http://<HOST>:<PORT>/controller/rest/controller/management/servers\" -H \"accept: application/xml\" -u '<CONTROLLER>:<CONTROLLER_PWD>'" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/controller-wls-install-proc
Preface
Preface Red Hat Enterprise Linux (RHEL) minor releases are an aggregation of individual security, enhancement, and bug fix errata. The Red Hat Enterprise Linux 7.6 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews. Capabilities and limits of Red Hat Enterprise Linux 7 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . Packages distributed with this release are listed in Red Hat Enterprise Linux 7 Package Manifest . Migration from Red Hat Enterprise Linux 6 is documented in the Migration Planning Guide. For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/pref-release_notes-preface
Chapter 11. Using the User Operator to manage Kafka users
Chapter 11. Using the User Operator to manage Kafka users When you create, modify or delete a user using the KafkaUser resource, the User Operator ensures that these changes are reflected in the Kafka cluster. For more information on the KafkaUser resource, see the KafkaUser schema reference . 11.1. Configuring Kafka users Use the properties of the KafkaUser resource to configure Kafka users. You can use oc apply to create or modify users, and oc delete to delete existing users. For example: oc apply -f <user_config_file> oc delete KafkaUser <user_name> Users represent Kafka clients. When you configure Kafka users, you enable the user authentication and authorization mechanisms required by clients to access Kafka. The mechanism used must match the equivalent Kafka configuration. For more information on using Kafka and KafkaUser resources to secure access to Kafka brokers, see Securing access to Kafka brokers . Prerequisites A running Kafka cluster configured with a Kafka broker listener using mTLS authentication and TLS encryption. A running User Operator (typically deployed with the Entity Operator). Procedure Configure the KafkaUser resource. This example specifies mTLS authentication and simple authorization using ACLs. Example Kafka user configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user-1 labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls authorization: type: simple acls: # Example consumer Acls for topic my-topic using consumer group my-group - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read host: "*" - resource: type: group name: my-group patternType: literal operations: - Read host: "*" # Example Producer Acls for topic my-topic - resource: type: topic name: my-topic patternType: literal operations: - Create - Describe - Write host: "*" Create the KafkaUser resource in OpenShift. oc apply -f <user_config_file> Wait for the ready status of the user to change to True : oc get kafkausers -o wide -w -n <namespace> Kafka user status NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple my-user-3 my-cluster tls simple True User creation is successful when the READY output shows True . If the READY column stays blank, get more details on the status from the resource YAML or User Operator logs. Messages provide details on the reason for the current status. oc get kafkausers my-user-2 -o yaml Details on a user with a NotReady status # ... status: conditions: - lastTransitionTime: "2022-06-10T10:07:37.238065Z" message: Simple authorization ACL rules are configured but not supported in the Kafka cluster configuration. reason: InvalidResourceException status: "True" type: NotReady In this example, the reason the user is not ready is because simple authorization is not enabled in the Kafka configuration. Kafka configuration for simple authorization apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... authorization: type: simple After updating the Kafka configuration, the status shows the user is ready. oc get kafkausers my-user-2 -o wide -w -n <namespace> Status update of the user NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-2 my-cluster tls simple True Fetching the details shows no messages. oc get kafkausers my-user-2 -o yaml Details on a user with a READY status # ... status: conditions: - lastTransitionTime: "2022-06-10T10:33:40.166846Z" status: "True" type: Ready
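After the KafkaUser reaches the Ready state, the User Operator normally stores the generated mTLS credentials in a Secret with the same name as the user in the same namespace. The following sketch assumes that default naming and the usual user.crt and user.key entries; verify the exact keys with oc describe secret if your configuration differs.
# inspect the Secret created for the user
oc get secret my-user-1 -o yaml
# extract the client certificate and key for use by a Kafka client
oc get secret my-user-1 -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt
oc get secret my-user-1 -o jsonpath='{.data.user\.key}' | base64 -d > user.key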
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user-1 labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls authorization: type: simple acls: # Example consumer Acls for topic my-topic using consumer group my-group - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read host: \"*\" - resource: type: group name: my-group patternType: literal operations: - Read host: \"*\" # Example Producer Acls for topic my-topic - resource: type: topic name: my-topic patternType: literal operations: - Create - Describe - Write host: \"*\"", "apply -f <user_config_file>", "get kafkausers -o wide -w -n <namespace>", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple my-user-3 my-cluster tls simple True", "get kafkausers my-user-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-10T10:07:37.238065Z\" message: Simple authorization ACL rules are configured but not supported in the Kafka cluster configuration. reason: InvalidResourceException status: \"True\" type: NotReady", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: simple", "get kafkausers my-user-2 -o wide -w -n <namespace>", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-2 my-cluster tls simple True", "get kafkausers my-user-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-10T10:33:40.166846Z\" status: \"True\" type: Ready" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-using-the-user-operator-str
Appendix B. Job Template Examples and Extensions
Appendix B. Job Template Examples and Extensions Use this section as a reference to help modify, customize, and extend your job templates to suit your requirements. B.1. Customizing Job Templates When creating a job template, you can include an existing template in the template editor field. This way you can combine templates, or create more specific templates from the general ones. The following template combines default templates to install and start the httpd service on Red Hat Enterprise Linux systems: <%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'httpd' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'httpd' %> The above template specifies parameter values for the rendered template directly. It is also possible to use the input() method to allow users to define input for the rendered template on job execution. For example, you can use the following syntax: <%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %> With the above template, you have to import the parameter definition from the rendered template. To do so, navigate to the Jobs tab, click Add Foreign Input Set , and select the rendered template from the Target template list. You can import all parameters or specify a comma separated list. B.2. Default Job Template Categories Job template category Description Packages Templates for performing package related actions. Install, update, and remove actions are included by default. Puppet Templates for executing Puppet runs on target hosts. Power Templates for performing power related actions. Restart and shutdown actions are included by default. Commands Templates for executing custom commands on remote hosts. Services Templates for performing service related actions. Start, stop, restart, and status actions are included by default. Katello Templates for performing content related actions. These templates are used mainly from different parts of the Satellite web UI (for example bulk actions UI for content hosts), but can be used separately to perform operations such as errata installation. B.3. Example restorecon Template This example shows how to create a template called Run Command - restorecon that restores the default SELinux context for all files in the selected directory on target hosts. In the Satellite web UI, navigate to Hosts > Job templates . Click New Job Template . Enter Run Command - restorecon in the Name field. Select Default to make the template available to all organizations. Add the following text to the template editor: restorecon -RvF <%= input("directory") %> The <%= input("directory") %> string is replaced by a user-defined directory during job invocation. On the Job tab, set Job category to Commands . Click Add Input to allow job customization. Enter directory to the Name field. The input name must match the value specified in the template editor. Click Required so that the command cannot be executed without the user specified parameter. Select User input from the Input type list. Enter a description to be shown during job invocation, for example Target directory for restorecon . Click Submit . See Executing a restorecon Template on Multiple Hosts for information on how to execute a job based on this template. B.4. Rendering a restorecon Template This example shows how to create a template derived from the Run command - restorecon template created in Example restorecon Template . 
This template does not require user input on job execution; it restores the SELinux context of all files under the /home/ directory on target hosts. Create a new template as described in Setting up Job Templates , and specify the following string in the template editor: <%= render_template("Run Command - restorecon", :directory => "/home") %> B.5. Executing a restorecon Template on Multiple Hosts This example shows how to run a job based on the template created in Example restorecon Template on multiple hosts. The job restores the SELinux context of all files under the /home/ directory. In the Satellite web UI, navigate to Hosts > All hosts and select target hosts. Select Schedule Remote Job from the Select Action list. On the Job invocation page, select the Commands job category and the Run Command - restorecon job template. Type /home in the directory field. Set Schedule to Execute now . Click Submit . You are taken to the Job invocation page where you can monitor the status of job execution. B.6. Including Power Actions in Templates This example shows how to set up a job template for performing power actions, such as reboot. This procedure prevents Satellite from interpreting the disconnect exception upon reboot as an error, and consequently, remote execution of the job works correctly. Create a new template as described in Setting up Job Templates , and specify the following string in the template editor: <%= render_template("Power Action - SSH Default", :action => "restart") %>
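The job described in Section B.5 can also be started from the command line instead of the web UI. The following hammer sketch is an assumption based on the general hammer job-invocation create syntax rather than a command taken from this guide; check hammer job-invocation create --help on your Satellite Server for the exact option names.
# run the "Run Command - restorecon" template against hosts matching a search query
hammer job-invocation create \
 --job-template "Run Command - restorecon" \
 --inputs directory=/home \
 --search-query "name ~ web"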
[ "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'httpd' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'httpd' %>", "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>", "restorecon -RvF <%= input(\"directory\") %>", "<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>", "<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/job_template_examples_and_extensions_managing-hosts
Chapter 16. General Updates
Chapter 16. General Updates A new module helping to upgrade the Tomcat server from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 This update adds a new module to the preupgrade-assistant-el6toel7 package as a Technology Preview. The module helps to upgrade from Tomcat version 6.0.24 in Red Hat Enterprise Linux 6 to Tomcat version 7.0.x in Red Hat Enterprise Linux 7 and provides information about incompatibilities found in the system configuration. When using the module, which is recommended only on non-production machines, several automatic changes are made to the Tomcat configuration files during the postupgrade phase to prevent certain known issues. Note that in the supported scenario, users should remove the tomcat6 packages before upgrading. A new module helping to upgrade Java OpenJDK 7 and Java OpenJDK 8 from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 The preupgrade-assistant-el6toel7 package provides a new module that handles upgrades of Java OpenJDK 7 and Java OpenJDK 8 from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. The module, available as a Technology Preview, informs users about possible requested actions and installs the expected equivalents of the original Java OpenJDK packages on the target system. Note that Java OpenJDK 6 and earlier versions are not handled by the in-place upgrade, but the module informs users about expected risks and required manual actions.
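These modules are consumed through the Preupgrade Assistant itself. As a rough sketch, assuming the preupgrade-assistant packages and the el6toel7 contents are installed, an assessment run looks like the following; consult the Red Hat Enterprise Linux 6 to 7 upgrade documentation for the authoritative workflow and report locations.
# list the available assessment contents, then run the assessment as root
preupg --list
preupg
# review the generated report (written under /root/preupgrade/ by default) before upgrading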
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/chap-red_hat_enterprise_linux-6.9_technical_notes-technology_previews-general_updates
Chapter 2. Configuring a GCP project
Chapter 2. Configuring a GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 2.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 2.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 2.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 2.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 2.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 2.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 2.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Compute Global 11 1 Forwarding rules Compute Global 2 0 In-use global IP addresses Compute Global 4 1 Health checks Compute Global 3 0 Images Compute Global 1 0 Networks Compute Global 2 0 Static IP addresses Compute Region 4 1 Routers Compute Global 1 0 Routes Compute Global 2 0 Subnetworks Compute Global 2 0 Target pools Compute Global 3 0 CPUs Compute Region 28 4 Persistent disk SSD (GB) Compute Region 896 128 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 2.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. 
See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 2.5.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin Tag User The following roles are applied to the service accounts that the control plane and compute machines use: Table 2.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin roles/artifactregistry.reader 2.5.2. Required GCP permissions for installer-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the installer-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 2.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.globalAddresses.create compute.globalAddresses.get compute.globalAddresses.use compute.globalForwardingRules.create compute.globalForwardingRules.get compute.globalForwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.networks.use compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 2.2. 
Required permissions for creating load balancer resources compute.backendServices.create compute.backendServices.get compute.backendServices.list compute.backendServices.update compute.backendServices.use compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use compute.targetTcpProxies.create compute.targetTcpProxies.get compute.targetTcpProxies.use Example 2.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list Example 2.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.disks.setLabels compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 2.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 2.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly compute.regionHealthChecks.create compute.regionHealthChecks.get compute.regionHealthChecks.useReadOnly Example 2.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 2.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 2.10. Required IAM permissions for installation iam.roles.create iam.roles.get iam.roles.update Example 2.11. Required permissions when authenticating without a service account key iam.serviceAccounts.signBlob Example 2.12. Optional Images permissions for installation compute.images.list Example 2.13. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 2.14. 
Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.addresses.setLabels compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.globalAddresses.delete compute.globalAddresses.list compute.globalForwardingRules.delete compute.globalForwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 2.15. Required permissions for deleting load balancer resources compute.backendServices.delete compute.backendServices.list compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list compute.targetTcpProxies.delete compute.targetTcpProxies.list Example 2.16. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 2.17. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.18. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 2.19. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 2.20. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list compute.regionHealthChecks.delete compute.regionHealthChecks.list Example 2.21. Required Images permissions for deletion compute.images.list 2.5.3. Required GCP permissions for shared VPC installations When you are installing a cluster to a shared VPC , you must configure the service account for both the host project and the service project. If you are not installing to a shared VPC, you can skip this section. You must apply the minimum roles required for a standard installation as listed above, to the service project. Important You can use granular permissions for a Cloud Credential Operator that operates in either manual or mint credentials mode. You cannot use granular permissions in passthrough credentials mode. Ensure that the host project applies one of the following configurations to the service account: Example 2.22. Required permissions for creating firewalls in the host project projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkAdmin roles/compute.securityAdmin Example 2.23. Required permissions for deleting firewalls in the host project compute.firewalls.delete compute.networks.updatePolicy Example 2.24. Required minimal permissions projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkUser If you do not supply a service account for control plane nodes in the install-config.yaml file, please grant the below permissions to the service account in the host project. 
If you do not supply a service account for compute nodes in the install-config.yaml file, please grant the below permissions to the service account in the host project for cluster destruction. resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy 2.5.4. Required GCP permissions for user-provided service accounts When you are installing a cluster, the compute and control plane nodes require their own service accounts. By default, the installation program creates a service account for the control plane and compute nodes. The service account that the installation program uses requires the roles and permissions that are listed in the Creating a service account in GCP section, as well as the resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy permissions. These permissions should be applied to the service account in the host project. If this approach does not meet the security requirements of your organization, you can provide a service account email address for the control plane or compute nodes in the install-config.yaml file. For more information, see the Installation configuration parameters for GCP page. If you provide a service account for control plane nodes during an installation into a shared VPC, you must grant that service account the roles/compute.networkUser role in the host project. If you want the installation program to automatically create firewall rules when you supply the control plane service account, you must grant that service account the roles/compute.networkAdmin and roles/compute.securityAdmin roles in the host project. If you only supply the roles/compute.networkUser role, you must create the firewall rules manually. Important The following roles are required for user-provided service accounts for control plane and compute nodes respectively. Example 2.25. Required roles for control plane nodes roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin Example 2.26. Required roles for compute nodes roles/compute.viewer roles/storage.admin roles/artifactregistry.reader 2.6. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: africa-south1 (Johannesburg, South Africa) asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. 
Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-central2 (Dammam, Saudi Arabia, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 2.7. Next steps Install an OpenShift Container Platform cluster on GCP. You can install a customized cluster or quickly install a cluster with default options.
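If you prefer the CLI over the console for the preparation steps in sections 2.2, 2.3, and 2.5, the gcloud sketch below covers enabling the required APIs, creating the public DNS zone, and creating an installer service account with a key. The project ID, zone name, account name, and the use of the broad Owner role are illustrative assumptions; substitute values and narrower roles that match your organization's policies.
# enable the required API services
gcloud services enable compute.googleapis.com cloudresourcemanager.googleapis.com \
 dns.googleapis.com iamcredentials.googleapis.com iam.googleapis.com \
 serviceusage.googleapis.com --project <project_id>
# create the public hosted zone for the cluster's base domain
gcloud dns managed-zones create <zone_name> --dns-name <base_domain>. \
 --description "OpenShift Container Platform public zone" --project <project_id>
# create a service account for the installer and grant it a role (Owner shown for brevity)
gcloud iam service-accounts create openshift-installer --project <project_id>
gcloud projects add-iam-policy-binding <project_id> \
 --member "serviceAccount:openshift-installer@<project_id>.iam.gserviceaccount.com" \
 --role roles/owner
# create a key in JSON format for the installation program
gcloud iam service-accounts keys create installer-key.json \
 --iam-account openshift-installer@<project_id>.iam.gserviceaccount.com \
 --project <project_id>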
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_gcp/installing-gcp-account
11.16. Recommended Configurations - Dispersed Volume
11.16. Recommended Configurations - Dispersed Volume This chapter describes the recommended configurations, examples, and illustrations for Dispersed and Distributed Dispersed volumes. For a Distributed Dispersed volume, there will be multiple sets of bricks (subvolumes) that stores data with erasure coding. All the files are distributed over these sets of erasure coded subvolumes. In this scenario, even if a redundant number of bricks is lost from every dispersed subvolume, there is no data loss. For example, assume you have Distributed Dispersed volume of configuration 2 X (4 + 2). Here, you have two sets of dispersed subvolumes where the data is erasure coded between 6 bricks with 2 bricks for redundancy. The files will be stored in one of these dispersed subvolumes. Therefore, even if we lose two bricks from each set, there is no data loss. Brick Configurations The following table lists the brick layout details of multiple server/disk configurations for dispersed and distributed dispersed volumes. Table 11.3. Brick Configurations for Dispersed and Distributed Dispersed Volumes Redundancy Level Supported Configurations Bricks per Server per Subvolume Node Loss Max brick failure count within a subvolume Compatible Server Node count Increment Size (no. of nodes) Min number of sub-volumes Total Spindles Tolerated HDD Failure Percentage 12 HDD Chassis 2 4 + 2 2 1 2 3 3 6 36 33.33% 1 2 2 6 6 12 72 33.33% 2 8+2 2 1 2 5 5 6 60 20.00% 1 2 2 10 10 12 120 20.00% 3 8 + 3 1-2 1 3 6 6 6 72 25.00% 4 8 + 4 4 1 4 3 3 3 36 33.33% 2 2 4 6 6 6 72 33.33% 1 4 4 12 12 12 144 33.33% 4 16 + 4 4 1 4 5 5 3 60 20.00% 2 2 4 10 10 6 120 20.00% 1 4 4 20 20 12 240 20.00% 24 HDD Chassis 2 4 + 2 2 1 2 3 3 12 72 33.33% 1 2 2 6 6 24 144 33.33% 2 8+ 2 2 1 2 5 5 12 120 20.00% 1 2 2 10 10 24 240 20.00% 4 8 + 4 4 1 4 3 3 6 72 33.33% 2 2 4 6 6 12 144 33.33% 1 4 4 12 12 24 288 33.33% 4 16 + 4 4 1 4 5 5 6 120 20.00% 2 2 4 10 10 12 240 20.00% 1 4 4 20 20 24 480 20.00% 36 HDD Chassis 2 4 + 2 2 1 2 3 3 18 108 33.33% 1 2 2 6 6 36 216 33.33% 2 8 + 2 2 1 1 5 5 18 180 20.00% 1 2 2 10 10 36 360 20.00% 3 8 + 3 1-2 1 3 6 6 19 216 26.39% 4 8 + 4 4 1 4 3 3 9 108 33.33% 2 2 4 6 6 18 216 33.33% 1 4 4 12 12 36 432 33.33% 4 16 + 4 4 1 4 5 5 9 180 20.00% 2 2 4 10 10 18 360 20.00% 1 4 4 20 20 36 720 20.00% 60 HDD Chassis 2 4 + 2 2 1 2 3 3 30 180 33.33% 1 2 2 6 6 60 360 33.33% 2 8 + 2 2 1 2 5 5 30 300 20.00% 1 2 2 10 10 60 600 20.00% 3 8 + 3 1-2 1 3 6 6 32 360 26.67% 4 8 + 4 4 1 4 3 3 15 180 33.33% 2 2 4 6 6 30 360 33.33% 1 4 4 12 12 60 720 33.33% 4 16 + 4 4 1 4 5 5 15 300 20.00% 2 2 4 10 10 30 600 20.00% 1 4 4 20 20 60 1200 20.00% Example 1 - Dispersed 4+2 configuration on three servers This example describes a compact configuration of three servers, with each server attached to a 12 HDD chassis to create a dispersed volume. In this example, each HDD is assumed to contain a single brick. This example's brick configuration is explained in row 1 of Table 11.3, "Brick Configurations for Dispersed and Distributed Dispersed Volumes" . With this server-to-spindle ratio, 36 disks/spindles are allocated for the dispersed volume configuration. For example, to create a compact 4+2 dispersed volume using 6 spindles from the total disk pool over three servers, run the following command: Note that the --force parameter is required because this configuration is not optimal in terms of fault tolerance. 
Since each server provides two bricks, this configuration has a greater risk to data availability if a server goes offline than it would if each brick was provided by a separate server. Run the gluster volume info command to view the volume information. Additionally, you can convert the dispersed volume to a distributed dispersed volume in increments of 4+2. Add six bricks from the disk pool using the following command: Run the gluster volume info command to view distributed dispersed volume information. Using this configuration example, you can create configuration combinations of 6 x (4 + 2) distributed dispersed volumes. This example configuration tolerates up to 12 brick failures (two in each dispersed subvolume). For details about creating an optimal configuration, see Section 5.8, "Creating Dispersed Volumes" . Example 2 - Dispersed 8+4 configuration on three servers The following diagram illustrates a dispersed 8+4 configuration on three servers as explained in row 3 of Table 11.3, "Brick Configurations for Dispersed and Distributed Dispersed Volumes" . The command to create the disperse volume for this configuration: Note that the --force parameter is required because this configuration is not optimal in terms of fault tolerance. Since each server provides more than one brick, this configuration has a greater risk to data availability if a server goes offline than it would if each brick was provided by a separate server. For details about creating an optimal configuration, see Section 5.8, "Creating Dispersed Volumes" . Figure 11.1. Example Configuration of 8+4 Dispersed Volume Configuration In this example, there are m bricks (refer to Section 5.8, "Creating Dispersed Volumes" for information on the n = k + m equation) from a dispersed subvolume on each server. If you add more than m bricks from a dispersed subvolume on server S, and server S then goes down, data will be unavailable. If S (a single column in the above diagram) goes down, there is no data loss, but if there is any additional hardware failure, either another node going down or a storage device failure, there would be immediate data loss. Example 3 - Dispersed 4+2 configuration on six servers The following diagram illustrates a dispersed 4+2 configuration on six servers, each with a 12-disk-per-server configuration, as explained in row 2 of Table 11.3, "Brick Configurations for Dispersed and Distributed Dispersed Volumes" . The command to create the disperse volume for this configuration: Figure 11.2. Example Configuration of 4+2 Dispersed Volume Configuration Redundancy Comparison The following chart illustrates the redundancy comparison of all supported dispersed volume configurations. Figure 11.3. Illustration of the redundancy comparison
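After creating a dispersed or distributed dispersed volume with any of the configurations above, it is worth confirming that all bricks are online and that no fragments are pending heal before and after a brick outage. A minimal check, using the example volume name from this section, looks like the following.
# confirm that every brick in the volume is online
gluster volume status test_vol
# list any entries that still need to be healed after a brick failure or outage
gluster volume heal test_vol info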
[ "gluster volume create test_vol disperse-data 4 redundancy 2 transport tcp server1:/rhgs/brick1 server1:/rhgs/brick2 server2:/rhgs/brick3 server2:/rhgs/brick4 server3:/rhgs/brick5 server3:/rhgs/brick6 --force", "gluster volume info test-volume Volume Name: test-volume Type: Disperse Status: Started Number of Bricks: 1 x (4 + 2) = 6 Transport-type: tcp Bricks: Brick1: server1:/rhgs/brick1 Brick2: server1:/rhgs/brick2 Brick3: server2:/rhgs/brick3 Brick4: server2:/rhgs/brick4 Brick5: server3:/rhgs/brick5 Brick6: server3:/rhgs/brick6", "gluster volume add-brick test_vol server1:/rhgs/brick7 server1:/rhgs/brick8 server2:/rhgs/brick9 server2:/rhgs/brick10 server3:/rhgs/brick11 server3:/rhgs/brick12", "gluster volume info test-volume Volume Name: test-volume Type: Distributed-Disperse Status: Started Number of Bricks: 2 x (4 + 2) = 12 Transport-type: tcp Bricks: Brick1: server1:/rhgs/brick1 Brick2: server1:/rhgs/brick2 Brick3: server2:/rhgs/brick3 Brick4: server2:/rhgs/brick4 Brick5: server3:/rhgs/brick5 Brick6: server3:/rhgs/brick6 Brick7: server1:/rhgs/brick7 Brick8: server1:/rhgs/brick8 Brick9: server2:/rhgs/brick9 Brick10: server2:/rhgs/brick10 Brick11: server3:/rhgs/brick11 Brick12: server3:/rhgs/brick12", "gluster volume create test_vol disperse-data 8 redundancy 4 transport tcp server1:/rhgs/brick1 server1:/rhgs/brick2 server1:/rhgs/brick3 server1:/rhgs/brick4 server2:/rhgs/brick1 server2:/rhgs/brick2 server2:/rhgs/brick3 server2:/rhgs/brick4 server3:/rhgs/brick1 server3:/rhgs/brick2 server3:/rhgs/brick3 server3:/rhgs/brick4 server1:/rhgs/brick5 server1:/rhgs/brick6 server1:/rhgs/brick7 server1:/rhgs/brick8 server2:/rhgs/brick5 server2:/rhgs/brick6 server2:/rhgs/brick7 server2:/rhgs/brick8 server3:/rhgs/brick5 server3:/rhgs/brick6 server3:/rhgs/brick7 server3:/rhgs/brick8 server1:/rhgs/brick9 server1:/rhgs/brick10 server1:/rhgs/brick11 server1:/rhgs/brick12 server2:/rhgs/brick9 server2:/rhgs/brick10 server2:/rhgs/brick11 server2:/rhgs/brick12 server3:/rhgs/brick9 server3:/rhgs/brick10 server3:/rhgs/brick11 server3:/rhgs/brick12 --force", "gluster volume create test_vol disperse-data 4 redundancy 2 transport tcp server1:/rhgs/brick1 server2:/rhgs/brick1 server3:/rhgs/brick1 server4:/rhgs/brick1 server5:/rhgs/brick1 server6:/rhgs/brick1server1:/rhgs/brick2 server2:/rhgs/brick2 server3:/rhgs/brick2 server4:/rhgs/brick2 server5:/rhgs/brick2 server6:/rhgs/brick2 server1:/rhgs/brick3 server2:/rhgs/brick3 server3:/rhgs/brick3 server4:/rhgs/brick3 server5:/rhgs/brick3 server6:/rhgs/brick3 server1:/rhgs/brick4 server2:/rhgs/brick4 server3:/rhgs/brick4 server4:/rhgs/brick4 server5:/rhgs/brick4 server6:/rhgs/brick4 server1:/rhgs/brick5 server2:/rhgs/brick5 server3:/rhgs/brick5 server4:/rhgs/brick5 server5:/rhgs/brick5 server6:/rhgs/brick5 server1:/rhgs/brick6 server2:/rhgs/brick6 server3:/rhgs/brick6 server4:/rhgs/brick6 server5:/rhgs/brick6 server6:/rhgs/brick6 server1:/rhgs/brick7 server2:/rhgs/brick7 server3:/rhgs/brick7 server4:/rhgs/brick7 server5:/rhgs/brick7 server6:/rhgs/brick7 server1:/rhgs/brick8 server2:/rhgs/brick8 server3:/rhgs/brick8 server4:/rhgs/brick8 server5:/rhgs/brick8 server6:/rhgs/brick8 server1:/rhgs/brick9 server2:/rhgs/brick9 server3:/rhgs/brick9 server4:/rhgs/brick9 server5:/rhgs/brick9 server6:/rhgs/brick9 server1:/rhgs/brick10 server2:/rhgs/brick10 server3:/rhgs/brick10 server4:/rhgs/brick10 server5:/rhgs/brick10 server6:/rhgs/brick10 server1:/rhgs/brick11 server2:/rhgs/brick11 server3:/rhgs/brick11 server4:/rhgs/brick11 server5:/rhgs/brick11 server6:/rhgs/brick11 
server1:/rhgs/brick12 server2:/rhgs/brick12 server3:/rhgs/brick12 server4:/rhgs/brick12 server5:/rhgs/brick12 server6:/rhgs/brick12" ]
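As a quick capacity check for the erasure-coded layouts above: usable capacity is the data-brick fraction of the raw capacity, that is, data / (data + redundancy) per subvolume. A small sketch with assumed brick sizes, for illustration only:

# Usable vs. raw capacity for a distributed dispersed volume of N x (k + m) subvolumes.
# Assumed example values: 2 x (4 + 2) with 1 TiB bricks.
subvols=2; data=4; redundancy=2; brick_tib=1
echo "raw capacity:    $(( subvols * (data + redundancy) * brick_tib )) TiB"
echo "usable capacity: $(( subvols * data * brick_tib )) TiB"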
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-recommended-configuration_dispersed
Chapter 14. Configuring Multi-Site Clusters with Pacemaker
Chapter 14. Configuring Multi-Site Clusters with Pacemaker When a cluster spans more than one site, issues with network connectivity between the sites can lead to split-brain situations. When connectivity drops, there is no way for a node on one site to determine whether a node on another site has failed or is still functioning with a failed site interlink. In addition, it can be problematic to provide high availability services across two sites which are too far apart to keep synchronous. To address these issues, Red Hat Enterprise Linux release 7.4 provides full support for the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. The Booth ticket manager is a distributed service that is meant to be run on a different physical network than the networks that connect the cluster nodes at particular sites. It yields another, loose cluster, a Booth formation , that sits on top of the regular clusters at the sites. This aggregated communication layer facilitates consensus-based decision processes for individual Booth tickets. A Booth ticket is a singleton in the Booth formation and represents a time-sensitive, movable unit of authorization. Resources can be configured to require a certain ticket to run. This can ensure that resources are run at only one site at a time, for which a ticket or tickets have been granted. You can think of a Booth formation as an overlay cluster consisting of clusters running at different sites, where all the original clusters are independent of each other. It is the Booth service which communicates to the clusters whether they have been granted a ticket, and it is Pacemaker that determines whether to run resources in a cluster based on a Pacemaker ticket constraint. This means that when using the ticket manager, each of the clusters can run its own resources as well as shared resources. For example there can be resources A, B and C running only in one cluster, resources D, E, and F running only in the other cluster, and resources G and H running in either of the two clusters as determined by a ticket. It is also possible to have an additional resource J that could run in either of the two clusters as determined by a separate ticket. The following procedure provides an outline of the steps you follow to configure a multi-site configuration that uses the Booth ticket manager. These example commands use the following arrangement: Cluster 1 consists of the nodes cluster1-node1 and cluster1-node2 Cluster 1 has a floating IP address assigned to it of 192.168.11.100 Cluster 2 consists of cluster2-node1 and cluster2-node2 Cluster 2 has a floating IP address assigned to it of 192.168.22.100 The arbitrator node is arbitrator-node with an ip address of 192.168.99.100 The name of the Booth ticket that this configuration uses is apacheticket These example commands assume that the cluster resources for an Apache service have been configured as part of the resource group apachegroup for each cluster. It is not required that the resources and resource groups be the same on each cluster to configure a ticket constraint for those resources, since the Pacemaker instance for each cluster is independent, but that is a common failover scenario. For a full cluster configuration procedure that configures an Apache service in a cluster, see the example in High Availability Add-On Administration . 
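To make the ticket idea concrete before the procedure below: a resource or resource group is tied to a ticket with a ticket constraint, and the constraint can optionally carry a loss policy that controls what happens when the ticket is revoked or lost. The following brief sketch uses the apacheticket and apachegroup names from this example; the loss-policy value shown is an illustrative assumption and is not a step in this procedure.

# Tie the apachegroup resource group to the apacheticket ticket.
# loss-policy=stop stops the resources at a site that loses the ticket.
pcs constraint ticket add apacheticket apachegroup loss-policy=stop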
Note that at any time in the configuration procedure you can enter the pcs booth config command to display the booth configuration for the current node or cluster or the pcs booth status command to display the current status of booth on the local node. Install the booth-site Booth ticket manager package on each node of both clusters. Install the pcs , booth-core , and booth-arbitrator packages on the arbitrator node. Ensure that ports 9929/tcp and 9929/udp are open on all cluster nodes and on the arbitrator node. For example, running the following commands on all nodes in both clusters as well as on the arbitrator node allows access to ports 9929/tcp and 9929/udp on those nodes. Note that this procedure in itself allows any machine anywhere to access port 9929 on the nodes. You should ensure that on your site the nodes are open only to the nodes that require them. Create a Booth configuration on one node of one cluster. The addresses you specify for each cluster and for the arbitrator must be IP addresses. For each cluster, you specify a floating IP address. This command creates the configuration files /etc/booth/booth.conf and /etc/booth/booth.key on the node from which it is run. Create a ticket for the Booth configuration. This is the ticket that you will use to define the resource constraint that will allow resources to run only when this ticket has been granted to the cluster. This basic failover configuration procedure uses only one ticket, but you can create additional tickets for more complicated scenarios where each ticket is associated with a different resource or resources. Synchronize the Booth configuration to all nodes in the current cluster. From the arbitrator node, pull the Booth configuration to the arbitrator. If you have not previously done so, you must first authenticate pcs to the node from which you are pulling the configuration. Pull the Booth configuration to the other cluster and synchronize to all the nodes of that cluster. As with the arbitrator node, if you have not previously done so, you must first authenticate pcs to the node from which you are pulling the configuration. Start and enable Booth on the arbitrator. Note You must not manually start or enable Booth on any of the nodes of the clusters since Booth runs as a Pacemaker resource in those clusters. Configure Booth to run as a cluster resource on both cluster sites. This creates a resource group with booth-ip and booth-service as members of that group. Add a ticket constraint to the resource group you have defined for each cluster. You can enter the following command to display the currently configured ticket constraints. Grant the ticket you created for this setup to the first cluster. Note that it is not necessary to have defined ticket constraints before granting a ticket. Once you have initially granted a ticket to a cluster, then Booth takes over ticket management unless you override this manually with the pcs booth ticket revoke command. For information on the pcs booth administration commands, see the PCS help screen for the pcs booth command. It is possible to add or remove tickets at any time, even after completing this procedure. After adding or removing a ticket, however, you must synchronize the configuration files to the other nodes and clusters as well as to the arbitrator and grant the ticket as is shown in this procedure. 
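After granting the ticket, a quick way to confirm the expected behavior is sketched below; the commands are the ones already used in this procedure, and the resource output naturally differs between the site that holds the ticket and the site that does not.

# Run on a node at each cluster site after granting the ticket.
pcs booth status          # local booth state and which tickets are granted here
pcs constraint ticket     # the apacheticket constraint on apachegroup
pcs status resources      # apachegroup should be running only at the site that holds the ticket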
For information on additional Booth administration commands that you can use for cleaning up and removing Booth configuration files, tickets, and resources, see the PCS help screen for the pcs booth command.
[ "yum install -y booth-site yum install -y booth-site yum install -y booth-site yum install -y booth-site", "yum install -y pcs booth-core booth-arbitrator", "firewall-cmd --add-port=9929/udp firewall-cmd --add-port=9929/tcp firewall-cmd --add-port=9929/udp --permanent firewall-cmd --add-port=9929/tcp --permanent", "pcs booth setup sites 192.168.11.100 192.168.22.100 arbitrators 192.168.99.100", "pcs booth ticket add apacheticket", "pcs booth sync", "pcs cluster auth cluster1-node1 pcs booth pull cluster1-node1", "pcs cluster auth cluster1-node1 pcs booth pull cluster1-node1 pcs booth sync", "pcs booth start pcs booth enable", "pcs booth create ip 192.168.11.100 pcs booth create ip 192.168.22.100", "pcs constraint ticket add apacheticket apachegroup pcs constraint ticket add apacheticket apachegroup", "pcs constraint ticket [show]", "pcs booth ticket grant apacheticket" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-multisite-HAAR
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/installing_and_using_the_migration_toolkit_for_virtualization/making-open-source-more-inclusive
Chapter 19. Clusters at the network far edge
Chapter 19. Clusters at the network far edge 19.1. Challenges of the network far edge Edge computing presents complex challenges when managing many sites in geographically displaced locations. Use zero touch provisioning (ZTP) and GitOps to provision and manage sites at the far edge of the network. 19.1.1. Overcoming the challenges of the network far edge Today, service providers want to deploy their infrastructure at the edge of the network. This presents significant challenges: How do you handle deployments of many edge sites in parallel? What happens when you need to deploy sites in disconnected environments? How do you manage the lifecycle of large fleets of clusters? Zero touch provisioning (ZTP) and GitOps meets these challenges by allowing you to provision remote edge sites at scale with declarative site definitions and configurations for bare-metal equipment. Template or overlay configurations install OpenShift Container Platform features that are required for CNF workloads. The full lifecycle of installation and upgrades is handled through the ZTP pipeline. ZTP uses GitOps for infrastructure deployments. With GitOps, you use declarative YAML files and other defined patterns stored in Git repositories. Red Hat Advanced Cluster Management (RHACM) uses your Git repositories to drive the deployment of your infrastructure. GitOps provides traceability, role-based access control (RBAC), and a single source of truth for the desired state of each site. Scalability issues are addressed by Git methodologies and event driven operations through webhooks. You start the ZTP workflow by creating declarative site definition and configuration custom resources (CRs) that the ZTP pipeline delivers to the edge nodes. The following diagram shows how ZTP works within the far edge framework. 19.1.2. Using ZTP to provision clusters at the network far edge Red Hat Advanced Cluster Management (RHACM) manages clusters in a hub-and-spoke architecture, where a single hub cluster manages many spoke clusters. Hub clusters running RHACM provision and deploy the managed clusters by using zero touch provisioning (ZTP) and the assisted service that is deployed when you install RHACM. The assisted service handles provisioning of OpenShift Container Platform on single node clusters, three-node clusters, or standard clusters running on bare metal. A high-level overview of using ZTP to provision and maintain bare-metal hosts with OpenShift Container Platform is as follows: A hub cluster running RHACM manages an OpenShift image registry that mirrors the OpenShift Container Platform release images. RHACM uses the OpenShift image registry to provision the managed clusters. You manage the bare-metal hosts in a YAML format inventory file, versioned in a Git repository. You make the hosts ready for provisioning as managed clusters, and use RHACM and the assisted service to install the bare-metal hosts on site. Installing and deploying the clusters is a two-stage process, involving an initial installation phase, and a subsequent configuration phase. The following diagram illustrates this workflow: 19.1.3. Installing managed clusters with SiteConfig resources and RHACM GitOps ZTP uses SiteConfig custom resources (CRs) in a Git repository to manage the processes that install OpenShift Container Platform clusters. The SiteConfig CR contains cluster-specific parameters required for installation. It has options for applying select configuration CRs during installation including user defined extra manifests. 
The ZTP GitOps plugin processes SiteConfig CRs to generate a collection of CRs on the hub cluster. This triggers the assisted service in Red Hat Advanced Cluster Management (RHACM) to install OpenShift Container Platform on the bare-metal host. You can find installation status and error messages in these CRs on the hub cluster. You can provision single clusters manually or in batches with ZTP: Provisioning a single cluster Create a single SiteConfig CR and related installation and configuration CRs for the cluster, and apply them in the hub cluster to begin cluster provisioning. This is a good way to test your CRs before deploying on a larger scale. Provisioning many clusters Install managed clusters in batches of up to 400 by defining SiteConfig and related CRs in a Git repository. ArgoCD uses the SiteConfig CRs to deploy the sites. The RHACM policy generator creates the manifests and applies them to the hub cluster. This starts the cluster provisioning process. 19.1.4. Configuring managed clusters with policies and PolicyGenTemplate resources Zero touch provisioning (ZTP) uses Red Hat Advanced Cluster Management (RHACM) to configure clusters by using a policy-based governance approach to applying the configuration. The policy generator or PolicyGen is a plugin for the GitOps Operator that enables the creation of RHACM policies from a concise template. The tool can combine multiple CRs into a single policy, and you can generate multiple policies that apply to various subsets of clusters in your fleet. Note For scalability and to reduce the complexity of managing configurations across the fleet of clusters, use configuration CRs with as much commonality as possible. Where possible, apply configuration CRs using a fleet-wide common policy. The preference is to create logical groupings of clusters to manage as much of the remaining configurations as possible under a group policy. When a configuration is unique to an individual site, use RHACM templating on the hub cluster to inject the site-specific data into a common or group policy. Alternatively, apply an individual site policy for the site. The following diagram shows how the policy generator interacts with GitOps and RHACM in the configuration phase of cluster deployment. For large fleets of clusters, it is typical for there to be a high-level of consistency in the configuration of those clusters. The following recommended structuring of policies combines configuration CRs to meet several goals: Describe common configurations once and apply to the fleet. Minimize the number of maintained and managed policies. Support flexibility in common configurations for cluster variants. Table 19.1. Recommended PolicyGenTemplate policy categories Policy category Description Common A policy that exists in the common category is applied to all clusters in the fleet. Use common PolicyGenTemplate CRs to apply common installation settings across all cluster types. Groups A policy that exists in the groups category is applied to a group of clusters in the fleet. Use group PolicyGenTemplate CRs to manage specific aspects of single-node, three-node, and standard cluster installations. Cluster groups can also follow geographic region, hardware variant, etc. Sites A policy that exists in the sites category is applied to a specific cluster site. Any cluster can have its own specific policies maintained. 
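Once policies generated from these categories are applied to the hub cluster, you can list them per category. A minimal sketch, assuming the ztp-common, ztp-group, and ztp-site namespaces used by the reference PolicyGenTemplate examples; adjust the namespaces to match your own repository layout.

# List the generated RHACM policies for each policy category on the hub cluster.
for ns in ztp-common ztp-group ztp-site; do
  echo "== policies in $ns =="
  oc get policies -n "$ns"
done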
Additional resources For more information about extracting the reference SiteConfig and PolicyGenTemplate CRs from the ztp-site-generate container image, see Preparing the ZTP Git repository . 19.2. Preparing the hub cluster for ZTP To use RHACM in a disconnected environment, create a mirror registry that mirrors the OpenShift Container Platform release images and Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the RHCOS ISO and RootFS disk images that are used to provision the bare-metal hosts. 19.2.1. Telco RAN 4.12 validated solution software versions The Red Hat Telco Radio Access Network (RAN) version 4.12 solution has been validated using the following Red Hat software products. Table 19.2. Telco RAN 4.12 validated solution software Product Software version Hub cluster OpenShift Container Platform version 4.12 GitOps ZTP plugin 4.10, 4.11, or 4.12 Red Hat Advanced Cluster Management (RHACM) 2.6, 2.7 Red Hat OpenShift GitOps 1.9, 1.10 Topology Aware Lifecycle Manager (TALM) 4.10, 4.11, or 4.12 19.2.2. Installing GitOps ZTP in a disconnected environment Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have configured a disconnected mirror registry for use in the cluster. Note The disconnected mirror registry that you create must contain a version of TALM backup and pre-cache images that matches the version of TALM running in the hub cluster. The spoke cluster must be able to resolve these images in the disconnected mirror registry. Procedure Install RHACM in the hub cluster. See Installing RHACM in a disconnected environment . Install GitOps and TALM in the hub cluster. Additional resources Installing OpenShift GitOps Installing TALM Mirroring an Operator catalog 19.2.3. Adding RHCOS ISO and RootFS images to the disconnected mirror host Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host Red Hat Enterprise Linux CoreOS (RHCOS) images for it to use. Use a disconnected mirror to host the RHCOS images. Prerequisites Deploy and configure an HTTP server to host the RHCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. You require ISO and RootFS images to install RHCOS on the hosts. RHCOS QCOW2 images are not supported for this installation type. Procedure Log in to the mirror host. 
Obtain the RHCOS ISO and RootFS images from mirror.openshift.com , for example: Export the required image names and OpenShift Container Platform version as environment variables: USD export ISO_IMAGE_NAME=<iso_image_name> 1 USD export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1 USD export OCP_VERSION=<ocp_version> 1 1 ISO image name, for example, rhcos-4.12.1-x86_64-live.x86_64.iso 1 RootFS image name, for example, rhcos-4.12.1-x86_64-live-rootfs.x86_64.img 1 OpenShift Container Platform version, for example, 4.12.1 Download the required images: USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME} USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME} Verification steps Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example: USD wget http://USD(hostname)/USD{ISO_IMAGE_NAME} Example output Saving to: rhcos-4.12.1-x86_64-live.x86_64.iso rhcos-4.12.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s Additional resources Creating a mirror registry Mirroring images for a disconnected installation 19.2.4. Enabling the assisted service Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OpenShift Container Platform clusters. The assisted service is deployed automatically when you enable the MultiClusterHub Operator on Red Hat Advanced Cluster Management (RHACM). After that, you need to configure the Provisioning resource to watch all namespaces and to update the AgentServiceConfig custom resource (CR) with references to the ISO and RootFS images that are hosted on the mirror registry HTTP server. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have RHACM with MultiClusterHub enabled. Procedure Enable the Provisioning resource to watch all namespaces and configure mirrors for disconnected environments. For more information, see Enabling the Central Infrastructure Management service . Update the AgentServiceConfig CR by running the following command: USD oc edit AgentServiceConfig Add the following entry to the items.spec.osImages field in the CR: - cpuArchitecture: x86_64 openshiftVersion: "4.12" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso where: <host> Is the fully qualified domain name (FQDN) for the target mirror registry HTTP server. <path> Is the path to the image on the target mirror registry. Save and quit the editor to apply the changes. 19.2.5. Configuring the hub cluster to use a disconnected mirror registry You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment. Prerequisites You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.7 installed. You have hosted the rootfs and iso images on an HTTP server. See the Additional resources section for guidance about Mirroring the OpenShift Container Platform image repository . Warning If you enable TLS for the HTTP server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and managed clusters and the HTTP server. 
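A quick way to confirm that the change was picked up is sketched below; it assumes the default AgentServiceConfig name agent (as used later in this chapter) and the multicluster-engine namespace, which may differ in your environment.

# Confirm the osImages entry and that the assisted service is running on the hub.
oc get agentserviceconfig agent -o jsonpath='{.spec.osImages}{"\n"}'
oc get pods -n multicluster-engine | grep assisted-service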
Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Procedure Create a ConfigMap containing the mirror registry config: apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: <certificate> 2 registries.conf: | 3 unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] location = <mirror_registry_url> 4 insecure = false mirror-by-digest-only = true 1 The ConfigMap namespace must be set to multicluster-engine . 2 The mirror registry's certificate used when creating the mirror registry. 3 The configuration file for the mirror registry. The mirror registry configuration adds mirror information to /etc/containers/registries.conf in the Discovery image. The mirror information is stored in the imageContentSources section of the install-config.yaml file when passed to the installation program. The Assisted Service pod running on the HUB cluster fetches the container images from the configured mirror registry. 4 The URL of the mirror registry. You must use the URL from the imageContentSources section by running the oc adm release mirror command when you configure the mirror registry. For more information, see the Mirroring the OpenShift Container Platform image repository section. This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below: Example output apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> url: <iso_url> 3 1 Set the AgentServiceConfig namespace to multicluster-engine to match the ConfigMap namespace 2 Set mirrorRegistryRef.name to match the definition specified in the related ConfigMap CR 3 Set the URL for the ISO hosted on the httpd server Important A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network. Additional resources Mirroring the OpenShift Container Platform image repository 19.2.6. Configuring the hub cluster to use unauthenticated registries You can configure the hub cluster to use unauthenticated registries. Unauthenticated registries does not require authentication to access and download images. Prerequisites You have installed and configured a hub cluster and installed Red Hat Advanced Cluster Management (RHACM) on the hub cluster. You have installed the OpenShift Container Platform CLI (oc). You have logged in as a user with cluster-admin privileges. You have configured an unauthenticated registry for use with the hub cluster. Procedure Update the AgentServiceConfig custom resource (CR) by running the following command: USD oc edit AgentServiceConfig agent Add the unauthenticatedRegistries field in the CR: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com ... 
Unauthenticated registries are listed under spec.unauthenticatedRegistries in the AgentServiceConfig resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. assisted-service validates the pull secret by making sure it contains the authentication information for every image registry used for installation. Note Mirror registries are automatically added to the ignore list and do not need to be added under spec.unauthenticatedRegistries . Specifying the PUBLIC_CONTAINER_REGISTRIES environment variable in the ConfigMap overrides the default values with the specified value. The PUBLIC_CONTAINER_REGISTRIES defaults are quay.io and registry.svc.ci.openshift.org . Verification Verify that you can access the newly added registry from the hub cluster by running the following commands: Open a debug shell prompt to the hub cluster: USD oc debug node/<node_name> Test access to the unauthenticated registry by running the following command: sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry> where: <unauthenticated_registry> Is the new registry, for example, unauthenticated-image-registry.openshift-image-registry.svc:5000 . Example output Login Succeeded! 19.2.7. Configuring the hub cluster with ArgoCD You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps zero touch provisioning (ZTP). Note Red Hat Advanced Cluster Management (RHACM) uses SiteConfig CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 SiteConfig CRs. Prerequisites You have a OpenShift Container Platform hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed. You have extracted the reference deployment from the ZTP GitOps plugin container as described in the "Preparing the GitOps ZTP site configuration repository" section. Extracting the reference deployment creates the out/argocd/deployment directory referenced in the following procedure. Procedure Prepare the ArgoCD pipeline configuration: Create a Git repository with the directory structure similar to the example directory. For more information, see "Preparing the GitOps ZTP site configuration repository". Configure access to the repository using the ArgoCD UI. Under Settings configure the following: Repositories - Add the connection information. The URL must end in .git , for example, https://repo.example.com/repo.git and credentials. Certificates - Add the public certificate for the repository, if needed. Modify the two ArgoCD applications, out/argocd/deployment/clusters-app.yaml and out/argocd/deployment/policies-app.yaml , based on your Git repository: Update the URL to point to the Git repository. The URL ends with .git , for example, https://repo.example.com/repo.git . The targetRevision indicates which Git repository branch to monitor. path specifies the path to the SiteConfig and PolicyGenTemplate CRs, respectively. To install the ZTP GitOps plugin you must patch the ArgoCD instance in the hub cluster by using the patch file previously extracted into the out/argocd/deployment/ directory. 
Run the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json Note For a disconnected environment, amend the out/argocd/deployment/argocd-openshift-gitops-patch.json file with the ztp-site-generate image mirrored in your local registry. Run the following command: USD oc patch argocd openshift-gitops -n openshift-gitops --type='json' \ -p='[{"op": "replace", "path": "/spec/repo/initContainers/0/image", \ "value": "<local_registry>/<ztp_site_generate_image_ref>"}]' where: <local_registry> Is the URL of the disconnected registry, for example, my.local.registry:5000 <ztp-site-generate-image-ref> Is the path to the mirrored ztp-site-generate image in the local registry, for example openshift4-ztp-site-generate:custom . In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. To disable this feature, apply the following patch to disable and remove the relevant hub cluster and managed cluster pods that are responsible for this add-on. USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json Apply the pipeline configuration to your hub cluster by using the following command: USD oc apply -k out/argocd/deployment 19.2.8. Preparing the GitOps ZTP site configuration repository Before you can use the ZTP GitOps pipeline, you need to prepare the Git repository to host the site configuration data. Prerequisites You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs). You have deployed the managed clusters using zero touch provisioning (ZTP). Procedure Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate CRs. Export the argocd directory from the ztp-site-generate container image using the following commands: USD podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 USD mkdir -p ./out USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./out Check that the out directory contains the following subdirectories: out/extra-manifest contains the source CR files that SiteConfig uses to generate extra manifest configMap . out/source-crs contains the source CR files that PolicyGenTemplate uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. out/argocd/deployment contains patches and YAML files to apply on the hub cluster for use in the step of this procedure. out/argocd/example contains the examples for SiteConfig and PolicyGenTemplate files that represent the recommended configuration. The directory structure under out/argocd/example serves as a reference for the structure and content of your Git repository. The example includes SiteConfig and PolicyGenTemplate reference CRs for single-node, three-node, and standard clusters. Remove references to cluster types that you are not using. The following example describes a set of CRs for a network of single-node clusters: example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml Keep SiteConfig and PolicyGenTemplate CRs in separate directories. 
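Each of the two directories shown above also needs a kustomization.yaml that lists its CRs, as described next. The following is a minimal sketch that writes them from the shell, assuming the repository root as the working directory and the single-node file names from the example tree; treat the reference kustomization.yaml files in out/argocd/example as the authoritative contents.

# Write minimal kustomization.yaml files for the example single-node layout (assumed file names).
cat > siteconfig/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - example-sno.yaml
EOF

cat > policygentemplates/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - common-ranGen.yaml
  - group-du-sno-ranGen.yaml
  - group-du-sno-validator-ranGen.yaml
  - example-sno-site.yaml
resources:
  - ns.yaml
EOF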
Both the SiteConfig and PolicyGenTemplate directories must contain a kustomization.yaml file that explicitly includes the files in that directory. This directory structure and the kustomization.yaml files must be committed and pushed to your Git repository. The initial push to Git should include the kustomization.yaml files. The SiteConfig ( example-sno.yaml ) and PolicyGenTemplate ( common-ranGen.yaml , group-du-sno*.yaml , and example-sno-site.yaml ) files can be omitted and pushed at a later time as required when deploying a site. The KlusterletAddonConfigOverride.yaml file is only required if one or more SiteConfig CRs which make reference to it are committed and pushed to Git. See example-sno.yaml for an example of how this is used.

19.3. Installing managed clusters with RHACM and SiteConfig resources You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The zero touch provisioning (ZTP) pipeline performs the cluster installations. ZTP can be used in a disconnected environment.

19.3.1. GitOps ZTP and Topology Aware Lifecycle Manager GitOps zero touch provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM.
Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster. Waves Each policy generated from a PolicyGenTemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR. Note All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation. The default value of this annotation for each CR can be overridden in the PolicyGenTemplate . The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime. The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account. Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves. To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image: USD grep -r "ztp-deploy-wave" out/source-crs Phase labels The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the ZTP process. When ZTP configuration postinstallation commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label. For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial ZTP installation of the cluster. Linked CRs The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources. 19.3.2. Overview of deploying managed clusters with ZTP Red Hat Advanced Cluster Management (RHACM) uses zero touch provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. 
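Before walking through the deployment flow, note that the phase labels and the automatically created ClusterGroupUpgrade CRs described in the previous section give a quick way to watch progress from the hub cluster; a brief sketch, assuming the default ztp-install namespace mentioned above:

# Watch ZTP progress from the hub cluster using the phase labels described above.
oc get managedcluster -L ztp-running -L ztp-done
oc get clustergroupupgrades -n ztp-install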
ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters. The deployment of the clusters includes: Installing the host operating system (RHCOS) on a blank server Deploying OpenShift Container Platform Creating cluster policies and site subscriptions Making the necessary network configurations to the server operating system Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV Overview of the managed site installation process After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically: A Discovery image ISO file is generated and booted on the target host. When the ISO file successfully boots on the target host it reports the host hardware information to RHACM. After all hosts are discovered, OpenShift Container Platform is installed. When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster. The requested add-on services are installed on the target cluster. The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster. Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads . 19.3.3. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the ZTP pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 19.3.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP The GitOps ZTP workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.12, you can only add kernel arguments. 
You can not replace or delete kernel arguments. Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments. Save the following YAML in an InfraEnv-example.yaml file: Note The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the SiteConfig CR. The SiteConfig CR automatically populates values for these templates during deployment. Do not edit the templates manually. apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: "1" name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" spec: clusterRef: name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: "{{ .Site.SshPublicKey }}" proxy: "{{ .Cluster.ProxySettings }}" pullSecretRef: name: "{{ .Site.PullSecretRef.Name }}" ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}" nmStateConfigLabelSelector: matchLabels: nmstate-label: "{{ .Cluster.ClusterName }}" additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}" 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Commit the InfraEnv-example.yaml CR to the same location in your Git repository that has the SiteConfig CR and push your changes. The following example shows a sample Git repository structure: ~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml ... Edit the spec.clusters.crTemplates specification in the SiteConfig CR to reference the InfraEnv-example.yaml CR in your Git repository: clusters: crTemplates: InfraEnv: "InfraEnv-example.yaml" When you are ready to deploy your cluster by committing and pushing the SiteConfig CR, the build pipeline uses the custom InfraEnv-example CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 19.3.5. Deploying a managed cluster with SiteConfig and ZTP Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the zero touch provisioning (ZTP) cluster deployment. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information. 
Note When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch-file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD". To be ready for provisioning managed clusters, you require the following for each bare-metal host: Network connectivity Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host. Baseboard Management Controller (BMC) details ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repo. You create individual BMCSecret CRs for each host manually. Procedure Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml , the cluster name and namespace is example-sno . Export the cluster namespace by running the following command: USD export CLUSTERNS=example-sno Create the namespace: USD oc create namespace USDCLUSTERNS Create pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information. Note The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace. Create a SiteConfig CR for your cluster in your local clone of the Git repository: Choose the appropriate example for your CR from the out/argocd/example/siteconfig/ folder. The folder includes example files for single node, three-node, and standard clusters: example-sno.yaml example-3node.yaml example-standard.yaml Change the cluster and host details in the example file to match the type of cluster you want. For example: # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.10" sshPublicKey: "ssh-rsa AAAA... " clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot bootMode: "UEFI" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at /var/lib/containers with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": " Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 -hop-interface: eno1 -hop-address: 1111:2222:3333:4444::1 table-id: 254 + Note For more information about BMC addressing, see the "Additional resources" section. 
The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability. You can inspect the default set of extra-manifest MachineConfig CRs in out/argocd/extra-manifest . This default set is automatically applied to the cluster when it is installed. Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example, sno-extra-manifest/ , and add your custom manifest CRs to this directory. If your SiteConfig.yaml refers to this directory in the extraManifestPath field, any CRs in this referenced directory are appended to the default set of extra manifests. Add the SiteConfig CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/siteconfig/kustomization.yaml . Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. Additional resources Preparing the GitOps ZTP site configuration repository Configuring the hub cluster with ArgoCD Signalling ZTP cluster deployment completion with validator inform policies Creating the managed bare-metal host secrets BMC addressing 19.3.5.1. Single-node OpenShift SiteConfig CR installation reference Table 19.3. SiteConfig CR installation options for single-node OpenShift clusters SiteConfig CR field Description metadata.name Set name to assisted-deployment-pull-secret and create the assisted-deployment-pull-secret CR in the same namespace as the SiteConfig CR. spec.clusterImageSetNameRef Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run oc get clusterimagesets . installConfigOverrides Set the installConfigOverrides field to enable or disable optional components prior to cluster installation. Important Use the reference configuration as specified in the example SiteConfig CR. Adding additional components back into the system might require additional reserved CPU capacity. spec.clusters.clusterLabels Configure cluster labels to correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. For example, policygentemplates/common-ranGen.yaml applies to all clusters with common: true set, policygentemplates/group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set. spec.clusters.crTemplates.KlusterletAddonConfig Optional. Set KlusterletAddonConfig to KlusterletAddonConfigOverride.yaml to override the default KlusterletAddonConfig that is created for the cluster. spec.clusters.nodes.hostName For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker . spec.clusters.nodes.bmcAddress BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section.
Note In far edge Telco use cases, only virtual media is supported for use with {ztp}. spec.clusters.nodes.bmcCredentialsName Configure the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host. spec.clusters.nodes.bootMode Set the boot mode for the host to UEFI . The default value is UEFI . Use UEFISecureBoot to enable secure boot on the host. spec.clusters.nodes.rootDeviceHints Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, wwn: <disk_wwn> or deviceName: /dev/disk/by-path/<device_path> . Values using a by-path scheme are preferred. For a detailed list of stable identifiers, see the "About root device hints" section. spec.clusters.nodes.cpuset Configure cpuset to match the value that you set in the cluster PerformanceProfile CR spec.cpu.reserved field for workload partitioning. For example, cpuset: "0-1,40-41" . spec.clusters.nodes.nodeNetwork Configure the network settings for the node. spec.clusters.nodes.nodeNetwork.config.interfaces.ipv6 Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. 19.3.6. Monitoring managed cluster installation progress The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs it with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure When the synchronization is complete, the installation generally proceeds as follows: The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands: Export the cluster name: USD export CLUSTER=<clusterName> Query the AgentClusterInstall CR for the managed cluster: USD oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq Get the installation events for the cluster: USD curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]' 19.3.7. Troubleshooting GitOps ZTP by validating the installation CRs The ArgoCD pipeline uses the SiteConfig and PolicyGenTemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Check that the installation CRs were created by using the following command: USD oc get AgentClusterInstall -n <cluster_name> If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from SiteConfig files to the installation CRs. 
Verify that the ManagedCluster CR was generated using the SiteConfig CR on the hub cluster: USD oc get managedcluster If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster: USD oc describe -n openshift-gitops application clusters Check for the Status.Conditions field to view the error logs for the managed cluster. For example, setting an invalid value for extraManifestPath: in the SiteConfig CR raises the following error: Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/siteconfigs/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not create extra-manifest ranSite1.extra-manifest3 stat extra-manifest3: no such file or directory 2021/11/26 17:21:40 Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-913473579: stat extra-manifest3: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-913473579; exit status 1: exit status 1 Type: ComparisonError Check the Status.Sync field. If there are log errors, the Status.Sync field could indicate an Unknown error: Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown 19.3.8. Troubleshooting {ztp} virtual media booting on Supermicro servers SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Disable TLS in the Provisioning resource by running the following command: USD oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}' Continue the steps to deploy your single-node OpenShift cluster. 19.3.9. Removing a managed cluster site from the ZTP pipeline You can remove a managed site and the associated installation and configuration policy CRs from the ZTP pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove a site and the associated CRs by removing the associated SiteConfig and PolicyGenTemplate files from the kustomization.yaml file. When you run the ZTP pipeline again, the generated CRs are removed. Optional: If you want to permanently remove a site, you should also remove the SiteConfig and site-specific PolicyGenTemplate files from the Git repository. Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the SiteConfig and site-specific PolicyGenTemplate CRs in the Git repository. Additional resources For information about removing a cluster, see Removing a cluster from management . 19.3.10. 
Removing obsolete content from the ZTP pipeline If a change to the PolicyGenTemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove the affected PolicyGenTemplate files from the Git repository, commit and push to the remote repository. Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster. Add the updated PolicyGenTemplate files back to the Git repository, and then commit and push to the remote repository. Note Removing zero touch provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. The policy and CRs managed by that policy remain in place on the managed cluster. Optional: As an alternative, after making changes to PolicyGenTemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command: USD oc delete policy -n <namespace> <policy_name> 19.3.11. Tearing down the ZTP pipeline You can remove the ArgoCD pipeline and all generated ZTP artifacts. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster. Delete the kustomization.yaml file in the deployment directory using the following command: USD oc delete -k out/argocd/deployment Commit and push your changes to the site repository. 19.4. Configuring managed clusters with policies and PolicyGenTemplate resources Applied policy custom resources (CRs) configure the managed clusters that you provision. You can customize how Red Hat Advanced Cluster Management (RHACM) uses PolicyGenTemplate CRs to generate the applied policy CRs. 19.4.1. About the PolicyGenTemplate CRD The PolicyGenTemplate custom resource definition (CRD) tells the PolicyGen policy generator what custom resources (CRs) to include in the cluster configuration, how to combine the CRs into the generated policies, and what items in those CRs need to be updated with overlay content. The following example shows a PolicyGenTemplate CR ( common-du-ranGen.yaml ) extracted from the ztp-site-generate reference container. The common-du-ranGen.yaml file defines two Red Hat Advanced Cluster Management (RHACM) policies. The policies manage a collection of configuration CRs, one for each unique value of policyName in the CR. common-du-ranGen.yaml creates a single placement binding and a placement rule to bind the policies to clusters based on the labels listed in the bindingRules section.
Example PolicyGenTemplate CR - common-du-ranGen.yaml --- apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "common" namespace: "ztp-common" spec: bindingRules: common: "true" 1 sourceFiles: 2 - fileName: SriovSubscription.yaml policyName: "subscriptions-policy" - fileName: SriovSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: SriovSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: SriovOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: PtpSubscription.yaml policyName: "subscriptions-policy" - fileName: PtpSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: PtpSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: PtpOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: ClusterLogNS.yaml policyName: "subscriptions-policy" - fileName: ClusterLogOperGroup.yaml policyName: "subscriptions-policy" - fileName: ClusterLogSubscription.yaml policyName: "subscriptions-policy" - fileName: ClusterLogOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: StorageNS.yaml policyName: "subscriptions-policy" - fileName: StorageOperGroup.yaml policyName: "subscriptions-policy" - fileName: StorageSubscription.yaml policyName: "subscriptions-policy" - fileName: StorageOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: ReduceMonitoringFootprint.yaml policyName: "config-policy" - fileName: OperatorHub.yaml 3 policyName: "config-policy" - fileName: DefaultCatsrc.yaml 4 policyName: "config-policy" 5 metadata: name: redhat-operators spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: "config-policy" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io 1 common: "true" applies the policies to all clusters with this label. 2 Files listed under sourceFiles create the Operator policies for installed clusters. 3 OperatorHub.yaml configures the OperatorHub for the disconnected registry. 4 DefaultCatsrc.yaml configures the catalog source for the disconnected registry. 5 policyName: "config-policy" configures Operator subscriptions. The OperatorHub CR disables the default and this CR replaces redhat-operators with a CatalogSource CR that points to the disconnected registry. A PolicyGenTemplate CR can be constructed with any number of included CRs. Apply the following example CR in the hub cluster to generate a policy containing a single CR: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno" namespace: "ztp-group" spec: bindingRules: group-du-sno: "" mcp: "master" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f0" ptp4lOpts: "-2 -s --summary_interval -4" phc2sysOpts: "-a -r -n 24" Using the source file PtpConfigSlave.yaml as an example, the file defines a PtpConfig CR. The generated policy for the PtpConfigSlave example is named group-du-sno-config-policy . The PtpConfig CR defined in the generated group-du-sno-config-policy is named du-ptp-slave . The spec defined in PtpConfigSlave.yaml is placed under du-ptp-slave along with the other spec items defined under the source file. 
The following example shows the group-du-sno-config-policy CR: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..... 19.4.2. Recommendations when customizing PolicyGenTemplate CRs Consider the following best practices when customizing site configuration PolicyGenTemplate custom resources (CRs): Use as few policies as are necessary. Using fewer policies requires less resources. Each additional policy creates overhead for the hub cluster and the deployed managed cluster. CRs are combined into policies based on the policyName field in the PolicyGenTemplate CR. CRs in the same PolicyGenTemplate which have the same value for policyName are managed under a single policy. In disconnected environments, use a single catalog source for all Operators by configuring the registry as a single index containing all Operators. Each additional CatalogSource CR on the managed clusters increases CPU usage. MachineConfig CRs should be included as extraManifests in the SiteConfig CR so that they are applied during installation. This can reduce the overall time taken until the cluster is ready to deploy applications. PolicyGenTemplates should override the channel field to explicitly identify the desired version. This ensures that changes in the source CR during upgrades does not update the generated subscription. Additional resources For recommendations about scaling clusters with RHACM, see Performance and scalability . Note When managing large numbers of spoke clusters on the hub cluster, minimize the number of policies to reduce resource consumption. Grouping multiple configuration CRs into a single or limited number of policies is one way to reduce the overall number of policies on the hub cluster. When using the common, group, and site hierarchy of policies for managing site configuration, it is especially important to combine site-specific configuration into a single policy. 19.4.3. PolicyGenTemplate CRs for RAN deployments Use PolicyGenTemplate (PGT) custom resources (CRs) to customize the configuration applied to the cluster by using the GitOps zero touch provisioning (ZTP) pipeline. The PGT CR allows you to generate one or more policies to manage the set of configuration CRs on your fleet of clusters. The PGT identifies the set of managed CRs, bundles them into policies, builds the policy wrapping around those CRs, and associates the policies with clusters by using label binding rules. 
The reference configuration, obtained from the GitOps ZTP container, is designed to provide a set of critical features and node tuning settings that ensure the cluster can support the stringent performance and resource utilization constraints typical of RAN (Radio Access Network) Distributed Unit (DU) applications. Changes or omissions from the baseline configuration can affect feature availability, performance, and resource utilization. Use the reference PolicyGenTemplate CRs as the basis to create a hierarchy of configuration files tailored to your specific site requirements. The baseline PolicyGenTemplate CRs that are defined for RAN DU cluster configuration can be extracted from the GitOps ZTP ztp-site-generate container. See "Preparing the GitOps ZTP site configuration repository" for further details. The PolicyGenTemplate CRs can be found in the ./out/argocd/example/policygentemplates folder. The reference architecture has common, group, and site-specific configuration CRs. Each PolicyGenTemplate CR refers to other CRs that can be found in the ./out/source-crs folder. The PolicyGenTemplate CRs relevant to RAN cluster configuration are described below. Variants are provided for the group PolicyGenTemplate CRs to account for differences in single-node, three-node compact, and standard cluster configurations. Similarly, site-specific configuration variants are provided for single-node clusters and multi-node (compact or standard) clusters. Use the group and site-specific configuration variants that are relevant for your deployment. Table 19.4. PolicyGenTemplate CRs for RAN deployments PolicyGenTemplate CR Description example-multinode-site.yaml Contains a set of CRs that get applied to multi-node clusters. These CRs configure SR-IOV features typical for RAN installations. example-sno-site.yaml Contains a set of CRs that get applied to single-node OpenShift clusters. These CRs configure SR-IOV features typical for RAN installations. common-ranGen.yaml Contains a set of common RAN CRs that get applied to all clusters. These CRs subscribe to a set of operators providing cluster features typical for RAN as well as baseline cluster tuning. group-du-3node-ranGen.yaml Contains the RAN policies for three-node clusters only. group-du-sno-ranGen.yaml Contains the RAN policies for single-node clusters only. group-du-standard-ranGen.yaml Contains the RAN policies for standard three control-plane clusters. group-du-3node-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for three-node clusters. group-du-standard-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for standard clusters. group-du-sno-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for single-node OpenShift clusters. Additional resources Preparing the GitOps ZTP site configuration repository 19.4.4. Customizing a managed cluster with PolicyGenTemplate CRs Use the following procedure to customize the policies that get applied to the managed cluster that you provision using the zero touch provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. 
The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a PolicyGenTemplate CR for site-specific configuration CRs. Choose the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, example-sno-site.yaml or example-multinode-site.yaml . Change the bindingRules field in the example file to match the site-specific label included in the SiteConfig CR. In the example SiteConfig file, the site-specific label is sites: example-sno . Note Ensure that the labels defined in your PolicyGenTemplate bindingRules field correspond to the labels that are defined in the related managed clusters SiteConfig CR. Change the content in the example file to match the desired configuration. Optional: Create a PolicyGenTemplate CR for any common configuration CRs that apply to the entire fleet of clusters. Select the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, common-ranGen.yaml . Change the content in the example file to match the desired configuration. Optional: Create a PolicyGenTemplate CR for any group configuration CRs that apply to the certain groups of clusters in the fleet. Ensure that the content of the overlaid spec files matches your desired end state. As a reference, the out/source-crs directory contains the full list of source-crs available to be included and overlaid by your PolicyGenTemplate templates. Note Depending on the specific requirements of your clusters, you might need more than a single group policy per cluster type, especially considering that the example group policies each have a single PerformancePolicy.yaml file that can only be shared across a set of clusters if those clusters consist of identical hardware configurations. Select the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, group-du-sno-ranGen.yaml . Change the content in the example file to match the desired configuration. Optional. Create a validator inform policy PolicyGenTemplate CR to signal when the ZTP installation and configuration of the deployed cluster is complete. For more information, see "Creating a validator inform policy". Define all the policy namespaces in a YAML file similar to the example out/argocd/example/policygentemplates/ns.yaml file. Important Do not include the Namespace CR in the same file with the PolicyGenTemplate CR. Add the PolicyGenTemplate CRs and Namespace CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/policygentemplates/kustomization.yaml . Commit the PolicyGenTemplate CRs, Namespace CR, and associated kustomization.yaml file in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. You can push the changes to the SiteConfig CR and the PolicyGenTemplate CR simultaneously. Additional resources Signalling ZTP cluster deployment completion with validator inform policies 19.4.5. Monitoring managed cluster policy deployment progress The ArgoCD pipeline uses PolicyGenTemplate CRs in Git to generate the RHACM policies and then sync them to the hub cluster. You can monitor the progress of the managed cluster policy synchronization after the assisted service installs OpenShift Container Platform on the managed cluster. Prerequisites You have installed the OpenShift CLI ( oc ). 
You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure The Topology Aware Lifecycle Manager (TALM) applies the configuration policies that are bound to the cluster. After the cluster installation is complete and the cluster becomes Ready , a ClusterGroupUpgrade CR corresponding to this cluster, with a list of ordered policies defined by the ran.openshift.io/ztp-deploy-wave annotations , is automatically created by the TALM. The cluster's policies are applied in the order listed in ClusterGroupUpgrade CR. You can monitor the high-level progress of configuration policy reconciliation by using the following commands: USD export CLUSTER=<clusterName> USD oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq Example output { "lastTransitionTime": "2022-11-09T07:28:09Z", "message": "Remediating non-compliant policies", "reason": "InProgress", "status": "True", "type": "Progressing" } You can monitor the detailed cluster policy compliance status by using the RHACM dashboard or the command line. To check policy compliance by using oc , run the following command: USD oc get policies -n USDCLUSTER Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m To check policy status from the RHACM web console, perform the following actions: Click Governance Find policies . Click on a cluster policy to check it's status. When all of the cluster policies become compliant, ZTP installation and configuration for the cluster is complete. The ztp-done label is added to the cluster. In the reference configuration, the final policy that becomes compliant is the one defined in the *-du-validator-policy policy. This policy, when compliant on a cluster, ensures that all cluster configuration, Operator installation, and Operator configuration is complete. 19.4.6. Validating the generation of configuration policy CRs Policy custom resources (CRs) are generated in the same namespace as the PolicyGenTemplate from which they are created. The same troubleshooting flow applies to all policy CRs generated from a PolicyGenTemplate regardless of whether they are ztp-common , ztp-group , or ztp-site based, as shown using the following commands: USD export NS=<namespace> USD oc get policy -n USDNS The expected set of policy-wrapped CRs should be displayed. If the policies failed synchronization, use the following troubleshooting steps. Procedure To display detailed information about the policies, run the following command: USD oc describe -n openshift-gitops application policies Check for Status: Conditions: to show the error logs. 
For example, setting an invalid sourceFile->fileName: generates the error shown below: Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError Check for Status: Sync: . If there are log errors at Status: Conditions: , the Status: Sync: shows Unknown or Error : Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error When Red Hat Advanced Cluster Management (RHACM) recognizes that policies apply to a ManagedCluster object, the policy CR objects are applied to the cluster namespace. Check to see if the policies were copied to the cluster namespace: USD oc get policy -n USDCLUSTER Example output: NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d Ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d RHACM copies all applicable policies into the cluster namespace. The copied policy names have the format: <policyGenTemplate.Namespace>.<policyGenTemplate.Name>-<policyName> . Check the placement rule for any policies not copied to the cluster namespace. The matchSelector in the PlacementRule for those policies should match labels on the ManagedCluster object: USD oc get placementrule -n USDNS Note the PlacementRule name appropriate for the missing policy, common, group, or site, using the following command: USD oc get placementrule -n USDNS <placementRuleName> -o yaml The status-decisions should include your cluster name. The key-value pair of the matchSelector in the spec must match the labels on your managed cluster. Check the labels on the ManagedCluster object using the following command: USD oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq Check to see which policies are compliant using the following command: USD oc get policy -n USDCLUSTER If the Namespace , OperatorGroup , and Subscription policies are compliant but the Operator configuration policies are not, it is likely that the Operators did not install on the managed cluster. This causes the Operator configuration policies to fail to apply because the CRD is not yet applied to the spoke. 19.4.7. Restarting policy reconciliation You can restart policy reconciliation when unexpected compliance issues occur, for example, when the ClusterGroupUpgrade custom resource (CR) has timed out. 
Procedure A ClusterGroupUpgrade CR is generated in the namespace ztp-install by the Topology Aware Lifecycle Manager after the managed cluster becomes Ready : USD export CLUSTER=<clusterName> USD oc get clustergroupupgrades -n ztp-install USDCLUSTER If there are unexpected issues and the policies fail to become compliant within the configured timeout (the default is 4 hours), the status of the ClusterGroupUpgrade CR shows UpgradeTimedOut : USD oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Ready")]}' A ClusterGroupUpgrade CR in the UpgradeTimedOut state automatically restarts its policy reconciliation every hour. If you have changed your policies, you can start a retry immediately by deleting the existing ClusterGroupUpgrade CR. This triggers the automatic creation of a new ClusterGroupUpgrade CR that begins reconciling the policies immediately: USD oc delete clustergroupupgrades -n ztp-install USDCLUSTER Note that when the ClusterGroupUpgrade CR completes with status UpgradeCompleted and the managed cluster has the label ztp-done applied, you can make additional configuration changes by using PolicyGenTemplate CRs. Deleting the existing ClusterGroupUpgrade CR will not make the TALM generate a new CR. At this point, ZTP has completed its interaction with the cluster and any further interactions should be treated as an update and a new ClusterGroupUpgrade CR created for remediation of the policies. Additional resources For information about using Topology Aware Lifecycle Manager (TALM) to construct your own ClusterGroupUpgrade CR, see About the ClusterGroupUpgrade CR . 19.4.8. Changing applied managed cluster CRs using policies You can remove content from a custom resource (CR) that is deployed in a managed cluster through a policy. By default, all Policy CRs created from a PolicyGenTemplate CR have the complianceType field set to musthave . A musthave policy without the removed content is still compliant because the CR on the managed cluster has all the specified content. With this configuration, when you remove content from a CR, TALM removes the content from the policy but the content is not removed from the CR on the managed cluster. With the complianceType field set to mustonlyhave , the policy ensures that the CR on the cluster is an exact match of what is specified in the policy. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have deployed a managed cluster from a hub cluster running RHACM. You have installed Topology Aware Lifecycle Manager on the hub cluster. Procedure Remove the content that you no longer need from the affected CRs. In this example, the disableDrain: false line was removed from the SriovOperatorConfig CR. Example CR apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" disableDrain: true enableInjector: true enableOperatorWebhook: true Change the complianceType of the affected policies to mustonlyhave in the group-du-sno-ranGen.yaml file. Example YAML # ... - fileName: SriovOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave # ...
Create a ClusterGroupUpgrade CR and specify the clusters that must receive the CR changes: Example ClusterGroupUpgrade CR apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction: Create the ClusterGroupUpgrade CR by running the following command: USD oc create -f cgu-remove.yaml When you are ready to apply the changes, for example, during an appropriate maintenance window, change the value of the spec.enable field to true by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove \ --patch '{"spec":{"enable":true}}' --type=merge Verification Check the status of the policies by running the following command: USD oc get <kind> <changed_cr_name> Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h When the COMPLIANCE STATE of the policy is Compliant , it means that the CR is updated and the unwanted content is removed. Check that the affected CRs are removed from the targeted clusters by running the following command on the managed clusters: USD oc get <kind> <changed_cr_name> If there are no results, the CR is removed from the managed cluster. 19.4.9. Indication of done for ZTP installations Zero touch provisioning (ZTP) simplifies the process of checking the ZTP installation status for a cluster. The ZTP status moves through three phases: cluster installation, cluster configuration, and ZTP done. Cluster installation phase The cluster installation phase is shown by the ManagedClusterJoined and ManagedClusterAvailable conditions in the ManagedCluster CR . If the ManagedCluster CR does not have these conditions, or the condition is set to False , the cluster is still in the installation phase. Additional details about installation are available from the AgentClusterInstall and ClusterDeployment CRs. For more information, see "Troubleshooting GitOps ZTP". Cluster configuration phase The cluster configuration phase is shown by a ztp-running label applied to the ManagedCluster CR for the cluster. ZTP done Cluster installation and configuration is complete in the ZTP done phase. This is shown by the removal of the ztp-running label and addition of the ztp-done label to the ManagedCluster CR. The ztp-done label shows that the configuration has been applied and the baseline DU configuration has completed cluster tuning. The transition to the ZTP done state is conditional on the compliant state of a Red Hat Advanced Cluster Management (RHACM) validator inform policy. This policy captures the existing criteria for a completed installation and validates that it moves to a compliant state only when ZTP provisioning of the managed cluster is complete. The validator inform policy ensures the configuration of the cluster is fully applied and Operators have completed their initialization. The policy validates the following: The target MachineConfigPool contains the expected entries and has finished updating. All nodes are available and not degraded. The SR-IOV Operator has completed initialization as indicated by at least one SriovNetworkNodeState with syncStatus: Succeeded . The PTP Operator daemon set exists. 19.5.
Manually installing a single-node OpenShift cluster with ZTP You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service. Note If you are creating multiple managed clusters, use the SiteConfig method described in Deploying far edge sites with ZTP . Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended cluster configuration for vDU application workloads . 19.5.1. Generating ZTP installation and configuration CRs manually Use the generator entrypoint for the ztp-site-generate container to generate the site installation and configuration custom resource (CRs) for a cluster based on SiteConfig and PolicyGenTemplate CRs. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create an output folder by running the following command: USD mkdir -p ./out Export the argocd directory from the ztp-site-generate container image: USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./out The ./out directory has the reference PolicyGenTemplate and SiteConfig CRs in the out/argocd/example/ folder. Example output out └── argocd └── example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml Create an output folder for the site installation CRs: USD mkdir -p ./site-install Modify the example SiteConfig CR for the cluster type that you want to install. Copy example-sno.yaml to site-1-sno.yaml and modify the CR to match the details of the site and bare-metal host that you want to install, for example: Example single-node OpenShift cluster SiteConfig CR apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "<site_name>" namespace: "<site_name>" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" 1 clusterImageSetNameRef: "openshift-4.12" 2 sshPublicKey: "ssh-rsa AAAA..." 3 clusters: - clusterName: "<site_name>" networkType: "OVNKubernetes" clusterLabels: 4 common: true group-du-sno: "" sites : "<site_name>" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" 5 nodes: - hostName: "example-node.example.com" 6 role: "master" bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ 7 bmcCredentialsName: name: "bmh-secret" 8 bootMACAddress: "AA:BB:CC:DD:EE:11" bootMode: "UEFI" 9 rootDeviceHints: wwn: "0x11111000000asd123" cpuset: "0-1,52-53" 10 nodeNetwork: 11 interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: 12 enabled: true address: - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 -hop-interface: eno1 -hop-address: 1111:2222:3333:4444::1 table-id: 254 1 Create the assisted-deployment-pull-secret CR with the same namespace as the SiteConfig CR. 2 clusterImageSetNameRef defines an image set available on the hub cluster. 
To see the list of supported versions on your hub cluster, run oc get clusterimagesets . 3 Configure the SSH public key used to access the cluster. 4 Cluster labels must correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. For example, policygentemplates/common-ranGen.yaml applies to all clusters with common: true set, policygentemplates/group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set. 5 Optional. The CR specifed under KlusterletAddonConfig is used to override the default KlusterletAddonConfig that is created for the cluster. 6 For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker . 7 BMC address that you use to access the host. Applies to all cluster types. 8 Name of the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host. 9 Configures the boot mode for the host. The default value is UEFI . Use UEFISecureBoot to enable secure boot on the host. 10 cpuset must match the value set in the cluster PerformanceProfile CR spec.cpu.reserved field for workload partitioning. 11 Specifies the network settings for the node. 12 Configures the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. Generate the day-0 installation CRs by processing the modified SiteConfig CR site-1-sno.yaml by running the following command: USD podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator install site-1-sno.yaml /output Example output site-install └── site-1-sno ├── site-1_agentclusterinstall_example-sno.yaml ├── site-1-sno_baremetalhost_example-node1.example.com.yaml ├── site-1-sno_clusterdeployment_example-sno.yaml ├── site-1-sno_configmap_example-sno.yaml ├── site-1-sno_infraenv_example-sno.yaml ├── site-1-sno_klusterletaddonconfig_example-sno.yaml ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml ├── site-1-sno_managedcluster_example-sno.yaml ├── site-1-sno_namespace_example-sno.yaml └── site-1-sno_nmstateconfig_example-node1.example.com.yaml Optional: Generate just the day-0 MachineConfig installation CRs for a particular cluster type by processing the reference SiteConfig CR with the -E option. For example, run the following commands: Create an output folder for the MachineConfig CRs: USD mkdir -p ./site-machineconfig Generate the MachineConfig installation CRs: USD podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator install -E site-1-sno.yaml /output Example output site-machineconfig └── site-1-sno ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml Generate and export the day-2 configuration CRs using the reference PolicyGenTemplate CRs from the step. 
Run the following commands: Create an output folder for the day-2 CRs: USD mkdir -p ./ref Generate and export the day-2 configuration CRs: USD podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator config -N . /output The command generates example group and site-specific PolicyGenTemplate CRs for single-node OpenShift, three-node clusters, and standard clusters in the ./ref folder. Example output ref └── customResource ├── common ├── example-multinode-site ├── example-sno ├── group-du-3node ├── group-du-3node-validator │ └── Multiple-validatorCRs ├── group-du-sno ├── group-du-sno-validator ├── group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs Use the generated CRs as the basis for the CRs that you use to install the cluster. You apply the installation CRs to the hub cluster as described in "Installing a single managed cluster". The configuration CRs can be applied to the cluster after cluster installation is complete. Additional resources Workload partitioning BMC addressing 19.5.2. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the ZTP pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 19.5.3. Configuring Discovery ISO kernel arguments for manual installations using GitOps ZTP The GitOps ZTP workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.12, you can only add kernel arguments. You can not replace or delete kernel arguments. Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. You have manually generated the installation and configuration custom resources (CRs). 
Procedure Edit the spec.kernelArguments specification in the InfraEnv CR to configure kernel arguments: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 clusterRef: name: <cluster_name> namespace: <cluster_name> pullSecretRef: name: pull-secret 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Note The SiteConfig CR generates the InfraEnv resource as part of the day-0 installation CRs. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 19.5.4. Installing a single managed cluster You can manually deploy a single managed cluster using the assisted service and Red Hat Advanced Cluster Management (RHACM). Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created the baseboard management controller (BMC) Secret and the image pull-secret Secret custom resources (CRs). See "Creating the managed bare-metal host secrets" for details. Your target bare-metal host meets the networking and hardware requirements for managed clusters. Procedure Create a ClusterImageSet for each specific cluster version to be deployed, for example clusterImageSet-4.12.yaml . A ClusterImageSet has the following format: apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.12.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64 2 1 The descriptive version that you want to deploy. 2 Specifies the releaseImage to deploy and determines the operating system image version. The discovery ISO is based on the image version as set by releaseImage , or the latest version if the exact version is unavailable. Apply the clusterImageSet CR: USD oc apply -f clusterImageSet-4.12.yaml Create the Namespace CR in the cluster-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2 1 2 The name of the managed cluster to provision. Apply the Namespace CR by running the following command: USD oc apply -f cluster-namespace.yaml Apply the generated day-0 CRs that you extracted from the ztp-site-generate container and customized to meet your requirements: USD oc apply -R ./site-install/site-sno-1 Additional resources Connectivity prerequisites for managed cluster networks 19.5.5. Monitoring the managed cluster installation status Ensure that cluster provisioning was successful by checking the cluster status. Prerequisites All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster. Procedure Check the status of the managed cluster: USD oc get managedcluster True indicates the managed cluster is ready. 
Check the agent status: USD oc get agent -n <cluster_name> Use the describe command to provide an in-depth description of the agent's condition. Statuses to be aware of include BackendError , InputError , ValidationsFailing , InstallationFailed , and AgentIsConnected . These statuses are relevant to the Agent and AgentClusterInstall custom resources. USD oc describe agent -n <cluster_name> Check the cluster provisioning status: USD oc get agentclusterinstall -n <cluster_name> Use the describe command to provide an in-depth description of the cluster provisioning status: USD oc describe agentclusterinstall -n <cluster_name> Check the status of the managed cluster's add-on services: USD oc get managedclusteraddon -n <cluster_name> Retrieve the authentication information of the kubeconfig file for the managed cluster: USD oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig 19.5.6. Troubleshooting the managed cluster Use this procedure to diagnose any installation issues that might occur with the managed cluster. Procedure Check the status of the managed cluster: USD oc get managedcluster Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h If the status in the AVAILABLE column is True , the managed cluster is being managed by the hub. If the status in the AVAILABLE column is Unknown , the managed cluster is not being managed by the hub. Use the following steps to continue checking to get more information. Check the AgentClusterInstall install status: USD oc get clusterdeployment -n <cluster_name> Example output NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h If the status in the INSTALLED column is false , the installation was unsuccessful. If the installation failed, enter the following command to review the status of the AgentClusterInstall resource: USD oc describe agentclusterinstall -n <cluster_name> <cluster_name> Resolve the errors and reset the cluster: Remove the cluster's managed cluster resource: USD oc delete managedcluster <cluster_name> Remove the cluster's namespace: USD oc delete namespace <cluster_name> This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the ManagedCluster CR deletion to complete before proceeding. Recreate the custom resources for the managed cluster. 19.5.7. RHACM generated cluster installation CRs reference Red Hat Advanced Cluster Management (RHACM) supports deploying OpenShift Container Platform on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate using SiteConfig CRs for each site. Note Every managed cluster has its own namespace, and all of the installation CRs except for ManagedCluster and ClusterImageSet are under that namespace. ManagedCluster and ClusterImageSet are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name. The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters using the SiteConfig CRs that you configure. Table 19.5. Cluster installation CRs generated by RHACM CR Description Usage BareMetalHost Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. 
Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol. InfraEnv Contains information for installing OpenShift Container Platform on the target bare-metal host. Used with ClusterDeployment to generate the discovery ISO for the managed cluster. AgentClusterInstall Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster kubeconfig and credentials when the installation is complete. Specifies the managed cluster configuration information and provides status during the installation of the cluster. ClusterDeployment References the AgentClusterInstall CR to use. Used with InfraEnv to generate the discovery ISO for the managed cluster. NMStateConfig Provides network configuration information such as MAC address to IP mapping, DNS server, default route, and other network settings. Sets up a static IP address for the managed cluster's Kube API server. Agent Contains hardware information about the target bare-metal host. Created automatically on the hub when the target machine's discovery image boots. ManagedCluster When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. The hub uses this resource to manage and show the status of managed clusters. KlusterletAddonConfig Contains the list of services provided by the hub to be deployed to the ManagedCluster resource. Tells the hub which addon services to deploy to the ManagedCluster resource. Namespace Logical space for ManagedCluster resources existing on the hub. Unique per site. Propagates resources to the ManagedCluster . Secret Two CRs are created: BMC Secret and Image Pull Secret . BMC Secret authenticates into the target bare-metal host using its username and password. Image Pull Secret contains authentication information for the OpenShift Container Platform image installed on the target bare-metal host. ClusterImageSet Contains OpenShift Container Platform image information such as the repository and image name. Passed into resources to provide OpenShift Container Platform images. 19.6. Recommended single-node OpenShift cluster configuration for vDU application workloads Use the following reference information to understand the single-node OpenShift configurations required to deploy virtual distributed unit (vDU) applications in the cluster. Configurations include cluster optimizations for high performance workloads, enabling workload partitioning, and minimizing the number of reboots required postinstallation. Additional resources To deploy a single cluster by hand, see Manually installing a single-node OpenShift cluster with ZTP . To deploy a fleet of clusters using GitOps zero touch provisioning (ZTP), see Deploying far edge sites with ZTP . 19.6.1. Running low latency applications on OpenShift Container Platform OpenShift Container Platform enables low latency processing for applications running on commercial off-the-shelf (COTS) hardware by using several technologies and specialized hardware devices: Real-time kernel for RHCOS Ensures workloads are handled with a high degree of process determinism. CPU isolation Avoids CPU scheduling delays and ensures CPU capacity is available consistently. NUMA-aware topology management Aligns memory and huge pages with CPU and PCI devices to pin guaranteed container memory and huge pages to the non-uniform memory access (NUMA) node. 
Pod resources for all Quality of Service (QoS) classes stay on the same NUMA node. This decreases latency and improves performance of the node. Huge pages memory management Using huge page sizes improves system performance by reducing the amount of system resources required to access page tables. Precision timing synchronization using PTP Allows synchronization between nodes in the network with sub-microsecond accuracy. 19.6.2. Recommended cluster host requirements for vDU application workloads Running vDU application workloads requires a bare-metal host with sufficient resources to run OpenShift Container Platform services and production workloads. Table 19.6. Minimum resource requirements Profile vCPU Memory Storage Minimum 4 to 8 vCPU 32GB of RAM 120GB Note One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core: (threads per core x cores) x sockets = vCPUs Important The server must have a Baseboard Management Controller (BMC) when booting with virtual media. 19.6.3. Configuring host firmware for low latency and high performance Bare-metal hosts require the firmware to be configured before the host can be provisioned. The firmware configuration is dependent on the specific hardware and the particular requirements of your installation. Procedure Set the UEFI/BIOS Boot Mode to UEFI . In the host boot sequence order, set Hard drive first . Apply the specific firmware configuration for your hardware. The following table describes a representative firmware configuration for an Intel Xeon Skylake or Intel Cascade Lake server, based on the Intel FlexRAN 4G and 5G baseband PHY reference design. Important The exact firmware configuration depends on your specific hardware and network requirements. The following sample configuration is for illustrative purposes only. Table 19.7. Sample firmware configuration for an Intel Xeon Skylake or Cascade Lake server Firmware setting Configuration CPU Power and Performance Policy Performance Uncore Frequency Scaling Disabled Performance P-limit Disabled Enhanced Intel SpeedStep (R) Tech Enabled Intel Configurable TDP Enabled Configurable TDP Level Level 2 Intel(R) Turbo Boost Technology Enabled Energy Efficient Turbo Disabled Hardware P-States Disabled Package C-State C0/C1 state C1E Disabled Processor C6 Disabled Note Enable global SR-IOV and VT-d settings in the firmware for the host. These settings are relevant to bare-metal environments. 19.6.4. Connectivity prerequisites for managed cluster networks Before you can install and provision a managed cluster with the zero touch provisioning (ZTP) GitOps pipeline, the managed cluster host must meet the following networking prerequisites: There must be bi-directional connectivity between the ZTP GitOps container in the hub cluster and the Baseboard Management Controller (BMC) of the target bare-metal host. The managed cluster must be able to resolve and reach the API hostname of the hub hostname and *.apps hostname. Here is an example of the API hostname of the hub and *.apps hostname: api.hub-cluster.internal.domain.com console-openshift-console.apps.hub-cluster.internal.domain.com The hub cluster must be able to resolve and reach the API and *.apps hostname of the managed cluster. 
Here is an example of the API hostname of the managed cluster and *.apps hostname: api.sno-managed-cluster-1.internal.domain.com console-openshift-console.apps.sno-managed-cluster-1.internal.domain.com 19.6.5. Workload partitioning in single-node OpenShift with GitOps ZTP Workload partitioning configures OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved number of host CPUs. To configure workload partitioning with GitOps ZTP, you specify cluster management CPU resources with the cpuset field of the SiteConfig custom resource (CR) and the reserved field of the group PolicyGenTemplate CR. The GitOps ZTP pipeline uses these values to populate the required fields in the workload partitioning MachineConfig CR ( cpuset ) and the PerformanceProfile CR ( reserved ) that configure the single-node OpenShift cluster. Note For maximum performance, ensure that the reserved and isolated CPU sets do not share CPU cores across NUMA zones. The workload partitioning MachineConfig CR pins the OpenShift Container Platform infrastructure pods to a defined cpuset configuration. The PerformanceProfile CR pins the systemd services to the reserved CPUs. Important The value for the reserved field specified in the PerformanceProfile CR must match the cpuset field in the workload partitioning MachineConfig CR. Additional resources For the recommended single-node OpenShift workload partitioning configuration, see Workload partitioning . 19.6.6. Recommended installation-time cluster configurations The ZTP pipeline applies the following custom resources (CRs) during cluster installation. These configuration CRs ensure that the cluster meets the feature and performance requirements necessary for running a vDU application. Note When using the ZTP GitOps plugin and SiteConfig CRs for cluster deployment, the following MachineConfig CRs are included by default. Use the SiteConfig extraManifests filter to alter the CRs that are included by default. For more information, see Advanced managed cluster configuration with SiteConfig CRs . 19.6.6.1. Workload partitioning Single-node OpenShift clusters that run DU workloads require workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU core for application payloads. Note Workload partitioning can only be enabled during cluster installation. You cannot disable workload partitioning postinstallation. However, you can reconfigure workload partitioning by updating the cpu value that you define in the performance profile, and in the related MachineConfig custom resource (CR). The base64-encoded CR that enables workload partitioning contains the CPU set that the management workloads are constrained to. Encode host-specific values for crio.conf and kubelet.conf in base64. Adjust the content to match the CPU set that is specified in the cluster performance profile. It must match the number of cores in the cluster host. 
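Because the MachineConfig CR embeds these two files as base64 data URLs, you can generate the encoded strings from plain-text templates before pasting them into the CR. The following is a minimal sketch, assuming GNU base64 (for the -w0 option) and reusing the example reserved CPU set 0-1,52-53 from this section; substitute the reserved CPUs from your own performance profile:
RESERVED_CPUS="0-1,52-53"
# Encode the CRI-O workload partitioning drop-in (01-workload-partitioning)
cat <<EOF | base64 -w0
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "${RESERVED_CPUS}" }
EOF
# Encode the kubelet workload pinning file (openshift-workload-pinning)
cat <<EOF | base64 -w0
{
  "management": {
    "cpuset": "${RESERVED_CPUS}"
  }
}
EOF
Paste each resulting string after data:text/plain;charset=utf-8;base64, in the corresponding contents.source field of the MachineConfig CR shown below.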
Recommended workload partitioning configuration apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMSw1Mi01MyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root When configured in the cluster host, the contents of /etc/crio/crio.conf.d/01-workload-partitioning should look like this: [crio.runtime.workloads.management] activation_annotation = "target.workload.openshift.io/management" annotation_prefix = "resources.workload.openshift.io" resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" } 1 1 The cpuset value varies based on the installation. If Hyper-Threading is enabled, specify both threads for each core. The cpuset value must match the reserved CPUs that you define in the spec.cpu.reserved field in the performance profile. When configured in the cluster, the contents of /etc/kubernetes/openshift-workload-pinning should look like this: { "management": { "cpuset": "0-1,52-53" 1 } } 1 The cpuset must match the cpuset value in /etc/crio/crio.conf.d/01-workload-partitioning . Verification Check that the applications and cluster system CPU pinning is correct. Run the following commands: Open a remote shell connection to the managed cluster: USD oc debug node/example-sno-1 Check that the OpenShift infrastructure applications CPU pinning is correct: sh-4.4# pgrep ovn | while read i; do taskset -cp USDi; done Example output pid 8481's current affinity list: 0-1,52-53 pid 8726's current affinity list: 0-1,52-53 pid 9088's current affinity list: 0-1,52-53 pid 9945's current affinity list: 0-1,52-53 pid 10387's current affinity list: 0-1,52-53 pid 12123's current affinity list: 0-1,52-53 pid 13313's current affinity list: 0-1,52-53 Check that the system applications CPU pinning is correct: sh-4.4# pgrep systemd | while read i; do taskset -cp USDi; done Example output pid 1's current affinity list: 0-1,52-53 pid 938's current affinity list: 0-1,52-53 pid 962's current affinity list: 0-1,52-53 pid 1197's current affinity list: 0-1,52-53 19.6.6.2. Reduced platform management footprint To reduce the overall management footprint of the platform, a MachineConfig custom resource (CR) is required that places all Kubernetes-specific mount points in a new namespace separate from the host operating system. The following base64-encoded example MachineConfig CR illustrates this configuration. 
Recommended container mount namespace configuration apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} enabled: true name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" 
Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 19.6.6.3. SCTP Stream Control Transmission Protocol (SCTP) is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable this protocol. Recommended SCTP configuration apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 19.6.6.4. Accelerated container startup The following MachineConfig CR configures core OpenShift processes and containers to use all available CPU cores during system startup and shutdown. This accelerates the system recovery during initial boot and reboots. Recommended accelerated container startup configuration apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 04-accelerated-container-startup-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,#!/bin/bash
#
# Temporarily reset the core system processes's CPU affinity to be unrestricted to accelerate startup and shutdown
#
# The defaults below can be overridden via environment variables
#

# The default set of critical processes whose affinity should be temporarily unbound:
CRITICAL_PROCESSES=${CRITICAL_PROCESSES:-"crio kubelet NetworkManager conmon dbus"}

# Default wait time is 600s = 10m:
MAXIMUM_WAIT_TIME=${MAXIMUM_WAIT_TIME:-600}

# Default steady-state threshold = 2%
# Allowed values:
#  4  - absolute pod count (+/-)
#  4% - percent change (+/-)
#  -1 - disable the steady-state check
STEADY_STATE_THRESHOLD=${STEADY_STATE_THRESHOLD:-2%}

# Default steady-state window = 60s
# If the running pod count stays within the given threshold for this time
# period, return CPU utilization to normal before the maximum wait time has
# expired
STEADY_STATE_WINDOW=${STEADY_STATE_WINDOW:-60}

# Default steady-state allows any pod count to be "steady state"
# Increasing this will skip any steady-state checks until the count rises above
# this number to avoid false positives if there are some periods where the
# count doesn't increase but we know we can't be at steady-state yet.
STEADY_STATE_MINIMUM=${STEADY_STATE_MINIMUM:-0}

#######################################################

KUBELET_CPU_STATE=/var/lib/kubelet/cpu_manager_state
FULL_CPU_STATE=/sys/fs/cgroup/cpuset/cpuset.cpus
KUBELET_CONF=/etc/kubernetes/kubelet.conf
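# unrestrictedCpuset (below) computes the widest CPU set that the critical processes
# may temporarily use: the kubelet defaultCpuSet merged with reservedSystemCPUs
# (illustrative example: reserved "0-1,52-53" plus default "2-51,54-103" yields "0-103"
# on a 104-CPU host). If the kubelet CPU manager state does not exist yet, it falls
# back to every CPU visible in the root cpuset cgroup.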
unrestrictedCpuset() {
  local cpus
  if [[ -e $KUBELET_CPU_STATE ]]; then
    cpus=$(jq -r '.defaultCpuSet' <$KUBELET_CPU_STATE)
    if [[ -n "${cpus}" && -e ${KUBELET_CONF} ]]; then
      reserved_cpus=$(jq -r '.reservedSystemCPUs' </etc/kubernetes/kubelet.conf)
      if [[ -n "${reserved_cpus}" ]]; then
        # Use taskset to merge the two cpusets
        cpus=$(taskset -c "${reserved_cpus},${cpus}" grep -i Cpus_allowed_list /proc/self/status | awk '{print $2}')
      fi
    fi
  fi
  if [[ -z $cpus ]]; then
    # fall back to using all cpus if the kubelet state is not configured yet
    [[ -e $FULL_CPU_STATE ]] || return 1
    cpus=$(<$FULL_CPU_STATE)
  fi
  echo $cpus
}

restrictedCpuset() {
  for arg in $(</proc/cmdline); do
    if [[ $arg =~ ^systemd.cpu_affinity= ]]; then
      echo ${arg#*=}
      return 0
    fi
  done
  return 1
}

resetAffinity() {
  local cpuset="$1"
  local failcount=0
  local successcount=0
  logger "Recovery: Setting CPU affinity for critical processes \"$CRITICAL_PROCESSES\" to $cpuset"
  for proc in $CRITICAL_PROCESSES; do
    local pids="$(pgrep $proc)"
    for pid in $pids; do
      local tasksetOutput
      tasksetOutput="$(taskset -apc "$cpuset" $pid 2>&1)"
      if [[ $? -ne 0 ]]; then
        echo "ERROR: $tasksetOutput"
        ((failcount++))
      else
        ((successcount++))
      fi
    done
  done

  logger "Recovery: Re-affined $successcount pids successfully"
  if [[ $failcount -gt 0 ]]; then
    logger "Recovery: Failed to re-affine $failcount processes"
    return 1
  fi
}

setUnrestricted() {
  logger "Recovery: Setting critical system processes to have unrestricted CPU access"
  resetAffinity "$(unrestrictedCpuset)"
}

setRestricted() {
  logger "Recovery: Resetting critical system processes back to normally restricted access"
  resetAffinity "$(restrictedCpuset)"
}

currentAffinity() {
  local pid="$1"
  taskset -pc $pid | awk -F': ' '{print $2}'
}

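# within (below) decides whether the change in pod count is inside the configured
# threshold, which can be absolute ("4") or relative ("4%"). Worked example with the
# 2% default: last=100, current=101 gives delta=1 and pchange=1%, so the count is
# treated as steady; last=100, current=103 gives pchange=3%, which is outside.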
within() {
  local last=$1 current=$2 threshold=$3
  local delta=0 pchange
  delta=$(( current - last ))
  if [[ $current -eq $last ]]; then
    pchange=0
  elif [[ $last -eq 0 ]]; then
    pchange=1000000
  else
    pchange=$(( ( $delta * 100) / last ))
  fi
  echo -n "last:$last current:$current delta:$delta pchange:${pchange}%: "
  local absolute limit
  case $threshold in
    *%)
      absolute=${pchange##-} # absolute value
      limit=${threshold%%%}
      ;;
    *)
      absolute=${delta##-} # absolute value
      limit=$threshold
      ;;
  esac
  if [[ $absolute -le $limit ]]; then
    echo "within (+/-)$threshold"
    return 0
  else
    echo "outside (+/-)$threshold"
    return 1
  fi
}

steadystate() {
  local last=$1 current=$2
  if [[ $last -lt $STEADY_STATE_MINIMUM ]]; then
    echo "last:$last current:$current Waiting to reach $STEADY_STATE_MINIMUM before checking for steady-state"
    return 1
  fi
  within $last $current $STEADY_STATE_THRESHOLD
}

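# waitForReady (below) polls every 10 seconds for up to MAXIMUM_WAIT_TIME seconds.
# Each pass re-applies the unrestricted affinity if systemd's affinity or the allowed
# cpuset has drifted, then counts running containers with crictl; once the count stays
# within STEADY_STATE_THRESHOLD for STEADY_STATE_WINDOW seconds the recovery window
# ends early, and the EXIT trap set in main() restores the restricted affinity.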
waitForReady() {
  logger "Recovery: Waiting ${MAXIMUM_WAIT_TIME}s for the initialization to complete"
  local lastSystemdCpuset="$(currentAffinity 1)"
  local lastDesiredCpuset="$(unrestrictedCpuset)"
  local t=0 s=10
  local lastCcount=0 ccount=0 steadyStateTime=0
  while [[ $t -lt $MAXIMUM_WAIT_TIME ]]; do
    sleep $s
    ((t += s))
    # Re-check the current affinity of systemd, in case some other process has changed it
    local systemdCpuset="$(currentAffinity 1)"
    # Re-check the unrestricted Cpuset, as the allowed set of unreserved cores may change as pods are assigned to cores
    local desiredCpuset="$(unrestrictedCpuset)"
    if [[ $systemdCpuset != $lastSystemdCpuset || $lastDesiredCpuset != $desiredCpuset ]]; then
      resetAffinity "$desiredCpuset"
      lastSystemdCpuset="$(currentAffinity 1)"
      lastDesiredCpuset="$desiredCpuset"
    fi

    # Detect steady-state pod count
    ccount=$(crictl ps | wc -l)
    if steadystate $lastCcount $ccount; then
      ((steadyStateTime += s))
      echo "Steady-state for ${steadyStateTime}s/${STEADY_STATE_WINDOW}s"
      if [[ $steadyStateTime -ge $STEADY_STATE_WINDOW ]]; then
        logger "Recovery: Steady-state (+/- $STEADY_STATE_THRESHOLD) for ${STEADY_STATE_WINDOW}s: Done"
        return 0
      fi
    else
      if [[ $steadyStateTime -gt 0 ]]; then
        echo "Resetting steady-state timer"
        steadyStateTime=0
      fi
    fi
    lastCcount=$ccount
  done
  logger "Recovery: Recovery Complete Timeout"
}

main() {
  if ! unrestrictedCpuset >&/dev/null; then
    logger "Recovery: No unrestricted Cpuset could be detected"
    return 1
  fi

  if ! restrictedCpuset >&/dev/null; then
    logger "Recovery: No restricted Cpuset has been configured.  We are already running unrestricted."
    return 0
  fi

  # Ensure we reset the CPU affinity when we exit this script for any reason
  # This way either after the timer expires or after the process is interrupted
  # via ^C or SIGTERM, we return things back to the way they should be.
  trap setRestricted EXIT

  logger "Recovery: Recovery Mode Starting"
  setUnrestricted
  waitForReady
}

if [[ "${BASH_SOURCE[0]}" = "${0}" ]]; then
  main "${@}"
  exit $?
fi
 mode: 493 path: /usr/local/bin/accelerated-container-startup.sh systemd: units: - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container startup [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: accelerated-container-startup.service - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container shutdown DefaultDependencies=no [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=-1 # Steady-state window = 60s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=60 [Install] WantedBy=shutdown.target reboot.target halt.target enabled: true name: accelerated-container-shutdown.service 19.6.6.5. Automatic kernel crash dumps with kdump The kdump Linux kernel feature creates a kernel crash dump when the kernel crashes. The kdump feature is enabled with the following MachineConfig CRs. 
Recommended MachineConfig CR to remove ice driver from control plane kdump logs ( 05-kdump-config-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh Recommended control plane node kdump configuration ( 06-kdump-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M Recommended MachineConfig CR to remove ice driver from worker node kdump logs ( 05-kdump-config-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: 
data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh Recommended kdump worker node configuration ( 06-kdump-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 19.6.7. Recommended postinstallation cluster configurations When the cluster installation is complete, the ZTP pipeline applies the following custom resources (CRs) that are required to run DU workloads. Note In {ztp} v4.10 and earlier, you configure UEFI secure boot with a MachineConfig CR. This is no longer required in {ztp} v4.11 and later. In v4.11, you configure UEFI secure boot for single-node OpenShift clusters by updating the spec.clusters.nodes.bootMode field in the SiteConfig CR that you use to install the cluster. For more information, see Deploying a managed cluster with SiteConfig and {ztp} . 19.6.7.1. 
Operator namespaces and Operator groups Single-node OpenShift clusters that run DU workloads require the following OperatorGroup and Namespace custom resources (CRs): Local Storage Operator Logging Operator PTP Operator SR-IOV Network Operator The following YAML summarizes these CRs: Recommended Operator Namespace and OperatorGroup configuration apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-local-storage --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-logging --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" name: openshift-ptp --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator 19.6.7.2. Operator subscriptions Single-node OpenShift clusters that run DU workloads require the following Subscription CRs. The subscription provides the location to download the following Operators: Local Storage Operator Logging Operator PTP Operator SR-IOV Network Operator Recommended Operator subscriptions apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: "stable" 1 name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual 2 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: "stable" installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: "stable" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: "stable" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual 1 Specify the channel to get the Operator from. stable is the recommended channel. 2 Specify Manual or Automatic . In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only after they are explicitly approved. 19.6.7.3. 
Cluster logging and log forwarding Single-node OpenShift clusters that run DU workloads require logging and log forwarding for debugging. The following example YAML illustrates the required ClusterLogging and ClusterLogForwarder CRs. Recommended cluster logging and log forwarding configuration apiVersion: logging.openshift.io/v1 kind: ClusterLogging 1 metadata: name: instance namespace: openshift-logging spec: collection: logs: fluentd: {} type: fluentd curation: type: "curator" curator: schedule: "30 3 * * *" managementState: Managed --- apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder 2 metadata: name: instance namespace: openshift-logging spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test 3 pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open 1 Updates the existing ClusterLogging instance or creates the instance if it does not exist. 2 Updates the existing ClusterLogForwarder instance or creates the instance if it does not exist. 3 Specifies the URL of the Kafka server where the logs are forwarded to. 19.6.7.4. Performance profile Single-node OpenShift clusters that run DU workloads require a Node Tuning Operator performance profile to use real-time host capabilities and services. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. The following example PerformanceProfile CR illustrates the required cluster configuration. Recommended performance profile configuration apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile 1 spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" 2 cpu: isolated: 2-51,54-103 3 reserved: 0-1,52-53 4 hugepages: defaultHugepagesSize: 1G pages: - count: 32 5 size: 1G 6 node: 0 7 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: "restricted" realTimeKernel: enabled: true 8 1 Ensure that the value for name matches that specified in the spec.profile.data field of TunedPerformancePatch.yaml and the status.configuration.source.name field of validatorCRs/informDuValidator.yaml . 2 Configures UEFI secure boot for the cluster host. 3 Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match. Important The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause an undefined behaviour in the system. 4 Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved. 5 Set the number of huge pages. 6 Set the huge page size. 7 Set node to the NUMA node where the hugepages are allocated. 8 Set enabled to true to install the real-time Linux kernel. 19.6.7.5. Configuring cluster time synchronization Run a one-time system time synchronization job for control plane or worker nodes. 
Recommended one time time-sync for control plane nodes ( 99-sync-time-once-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service Recommended one time time-sync for worker nodes ( 99-sync-time-once-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 19.6.7.6. PTP Single-node OpenShift clusters use Precision Time Protocol (PTP) for network time synchronization. The following example PtpConfig CR illustrates the required PTP configuration for ordinary clocks. The exact configuration you apply will depend on the node hardware and specific use case. Recommended PTP configuration apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: profile: - name: "ordinary" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC 
network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "ordinary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" 19.6.7.7. Extended Tuned profile Single-node OpenShift clusters that run DU workloads require additional performance tuning configurations necessary for high-performance workloads. The following example Tuned CR extends the Tuned profile: Recommended extended Tuned profile configuration apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-51,54-103 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch 19.6.7.8. SR-IOV Single root I/O virtualization (SR-IOV) is commonly used to enable the fronthaul and the midhaul networks. The following YAML example configures SR-IOV for a single-node OpenShift cluster. Recommended SR-IOV configuration apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/master: "" disableDrain: true enableInjector: true enableOperatorWebhook: true --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-mh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_mh vlan: 150 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 2 isRdma: false nicSelector: pfNames: - ens7f0 3 nodeSelector: node-role.kubernetes.io/master: "" numVfs: 8 4 priority: 10 resourceName: du_mh --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-fh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_fh vlan: 140 5 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-fh namespace: openshift-sriov-network-operator spec: deviceType: netdevice 6 isRdma: true nicSelector: pfNames: - ens5f0 7 nodeSelector: node-role.kubernetes.io/master: "" numVfs: 8 8 priority: 10 resourceName: du_fh 1 Specifies the VLAN for the midhaul network. 2 Select either vfio-pci or netdevice , as needed. 3 Specifies the interface connected to the midhaul network. 4 Specifies the number of VFs for the midhaul network. 5 The VLAN for the fronthaul network. 6 Select either vfio-pci or netdevice , as needed. 7 Specifies the interface connected to the fronthaul network. 8 Specifies the number of VFs for the fronthaul network. 19.6.7.9. Console Operator The console-operator installs and maintains the web console on a cluster. 
When the node is centrally managed the Operator is not needed and makes space for application workloads. The following Console custom resource (CR) example disables the console. Recommended console configuration apiVersion: operator.openshift.io/v1 kind: Console metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "false" include.release.openshift.io/self-managed-high-availability: "false" include.release.openshift.io/single-node-developer: "false" release.openshift.io/create-only: "true" name: cluster spec: logLevel: Normal managementState: Removed operatorLogLevel: Normal 19.6.7.10. Alertmanager Single-node OpenShift clusters that run DU workloads require reduced CPU resources consumed by the OpenShift Container Platform monitoring components. The following ConfigMap custom resource (CR) disables Alertmanager. Recommended cluster monitoring configuration apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false prometheusK8s: retention: 24h 19.6.7.11. Operator Lifecycle Manager Single-node OpenShift clusters that run distributed unit workloads require consistent access to CPU resources. Operator Lifecycle Manager (OLM) collects performance data from Operators at regular intervals, resulting in an increase in CPU utilisation. The following ConfigMap custom resource (CR) disables the collection of Operator performance data by OLM. Recommended cluster OLM configuration ( ReduceOLMFootprint.yaml ) apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager data: pprof-config.yaml: | disabled: True 19.6.7.12. Network diagnostics Single-node OpenShift clusters that run DU workloads require less inter-pod network connectivity checks to reduce the additional load created by these pods. The following custom resource (CR) disables these checks. Recommended network diagnostics configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: disableNetworkDiagnostics: true Additional resources Deploying far edge sites using ZTP 19.7. Validating single-node OpenShift cluster tuning for vDU application workloads Before you can deploy virtual distributed unit (vDU) applications, you need to tune and configure the cluster host firmware and various other cluster configuration settings. Use the following information to validate the cluster configuration to support vDU workloads. Additional resources For more information about single-node OpenShift clusters tuned for vDU application deployments, see Reference configuration for deploying vDUs on single-node OpenShift . 19.7.1. Recommended firmware configuration for vDU cluster hosts Use the following table as the basis to configure the cluster host firmware for vDU applications running on OpenShift Container Platform 4.12. Note The following table is a general recommendation for vDU cluster host firmware configuration. Exact firmware settings will depend on your requirements and specific hardware platform. Automatic setting of firmware is not handled by the zero touch provisioning pipeline. Table 19.8. Recommended cluster host firmware settings Firmware setting Configuration Description HyperTransport (HT) Enabled HyperTransport (HT) bus is a bus technology developed by AMD. HT provides a high-speed link between the components in the host memory and other system peripherals. UEFI Enabled Enable booting from UEFI for the vDU host. 
CPU Power and Performance Policy Performance Set CPU Power and Performance Policy to optimize the system for performance over energy efficiency. Uncore Frequency Scaling Disabled Disable Uncore Frequency Scaling to prevent the voltage and frequency of non-core parts of the CPU from being set independently. Uncore Frequency Maximum Sets the non-core parts of the CPU such as cache and memory controller to their maximum possible frequency of operation. Performance P-limit Disabled Disable Performance P-limit to prevent the Uncore frequency coordination of processors. Enhanced Intel(R) SpeedStep Tech Enabled Enable Enhanced Intel SpeedStep to allow the system to dynamically adjust processor voltage and core frequency that decreases power consumption and heat production in the host. Intel(R) Turbo Boost Technology Enabled Enable Turbo Boost Technology for Intel-based CPUs to automatically allow processor cores to run faster than the rated operating frequency if they are operating below power, current, and temperature specification limits. Intel Configurable TDP Enabled Enables Thermal Design Power (TDP) for the CPU. Configurable TDP Level Level 2 TDP level sets the CPU power consumption required for a particular performance rating. TDP level 2 sets the CPU to the most stable performance level at the cost of power consumption. Energy Efficient Turbo Disabled Disable Energy Efficient Turbo to prevent the processor from using an energy-efficiency based policy. Hardware P-States Enabled or Disabled Enable OS-controlled P-States to allow power saving configurations. Disable P-states (performance states) to optimize the operating system and CPU for performance over power consumption. Package C-State C0/C1 state Use C0 or C1 states to set the processor to a fully active state (C0) or to stop CPU internal clocks running in software (C1). C1E Disabled CPU Enhanced Halt (C1E) is a power saving feature in Intel chips. Disabling C1E prevents the operating system from sending a halt command to the CPU when inactive. Processor C6 Disabled C6 power-saving is a CPU feature that automatically disables idle CPU cores and cache. Disabling C6 improves system performance. Sub-NUMA Clustering Disabled Sub-NUMA clustering divides the processor cores, cache, and memory into multiple NUMA domains. Disabling this option can increase performance for latency-sensitive workloads. Note Enable global SR-IOV and VT-d settings in the firmware for the host. These settings are relevant to bare-metal environments. Note Enable both C-states and OS-controlled P-States to allow per pod power management. 19.7.2. Recommended cluster configurations to run vDU applications Clusters running virtualized distributed unit (vDU) applications require a highly tuned and optimized configuration. The following information describes the various elements that you require to support vDU workloads in OpenShift Container Platform 4.12 clusters. 19.7.2.1. Recommended cluster MachineConfig CRs Check that the MachineConfig custom resources (CRs) that you extract from the ztp-site-generate container are applied in the cluster. The CRs can be found in the extracted out/source-crs/extra-manifest/ folder. The following MachineConfig CRs from the ztp-site-generate container configure the cluster host: Table 19.9. Recommended MachineConfig CRs CR filename Description 02-workload-partitioning.yaml Configures workload partitioning for the cluster. Apply this MachineConfig CR when you install the cluster. 
03-sctp-machine-config-master.yaml , 03-sctp-machine-config-worker.yaml Loads the SCTP kernel module. These MachineConfig CRs are optional and can be omitted if you do not require this kernel module. 01-container-mount-ns-and-kubelet-conf-master.yaml , 01-container-mount-ns-and-kubelet-conf-worker.yaml Configures the container mount namespace and Kubelet configuration. 04-accelerated-container-startup-master.yaml , 04-accelerated-container-startup-worker.yaml Configures accelerated startup for the cluster. 06-kdump-master.yaml , 06-kdump-worker.yaml Configures kdump for the cluster. Additional resources Extracting source CRs from the ztp-site-generate container 19.7.2.2. Recommended cluster Operators The following Operators are required for clusters running virtualized distributed unit (vDU) applications and are a part of the baseline reference configuration: Node Tuning Operator (NTO). NTO packages functionality that was previously delivered with the Performance Addon Operator, which is now a part of NTO. PTP Operator SR-IOV Network Operator Red Hat OpenShift Logging Operator Local Storage Operator 19.7.2.3. Recommended cluster kernel configuration Always use the latest supported real-time kernel version in your cluster. Ensure that you apply the following configurations in the cluster: Ensure that the following additionalKernelArgs are set in the cluster performance profile: spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" Ensure that the performance-patch profile in the Tuned CR configures the correct CPU isolation set that matches the isolated CPU set in the related PerformanceProfile CR, for example: spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name # And the cmdline_crash CPU set must match the 'isolated' set in the associated PerformanceProfile data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-51,54-103 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable 1 Listed CPUs depend on the host hardware configuration, specifically the number of available CPUs in the system and the CPU topology. 19.7.2.4. Checking the realtime kernel version Always use the latest version of the realtime kernel in your OpenShift Container Platform clusters. If you are unsure about the kernel version that is in use in the cluster, you can compare the current realtime kernel version to the release version with the following procedure. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. You have installed podman . Procedure Run the following command to get the cluster version: USD OCP_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}') Get the release image SHA number: USD DTK_IMAGE=USD(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:USDOCP_VERSION-x86_64) Run the release image container and extract the kernel version that is packaged with cluster's current release: USD podman run --rm USDDTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##' Example output 4.18.0-305.49.1.rt7.121.el8_4.x86_64 This is the default realtime kernel version that ships with the release. 
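If you want to script the comparison between the realtime kernel packaged with the release and the kernel actually running on a node, a minimal sketch follows. It reuses the DTK_IMAGE variable from the previous step; <node_name> is a placeholder for one of your cluster nodes, and the oc debug invocation is illustrative, so verify it against your environment:
RELEASE_RT_KERNEL=$(podman run --rm $DTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##')
RUNNING_KERNEL=$(oc debug node/<node_name> -- chroot /host uname -r 2>/dev/null)
if [ "$RELEASE_RT_KERNEL" = "$RUNNING_KERNEL" ]; then
  echo "Node kernel matches the release realtime kernel: $RUNNING_KERNEL"
else
  echo "Mismatch: release ships $RELEASE_RT_KERNEL, node runs $RUNNING_KERNEL"
fi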
Note The realtime kernel is denoted by the string .rt in the kernel version. Verification Check that the kernel version listed for the cluster's current release matches actual realtime kernel that is running in the cluster. Run the following commands to check the running realtime kernel version: Open a remote shell connection to the cluster node: USD oc debug node/<node_name> Check the realtime kernel version: sh-4.4# uname -r Example output 4.18.0-305.49.1.rt7.121.el8_4.x86_64 19.7.3. Checking that the recommended cluster configurations are applied You can check that clusters are running the correct configuration. The following procedure describes how to check the various configurations that you require to deploy a DU application in OpenShift Container Platform 4.12 clusters. Prerequisites You have deployed a cluster and tuned it for vDU workloads. You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Check that the default OperatorHub sources are disabled. Run the following command: USD oc get operatorhub cluster -o yaml Example output spec: disableAllDefaultSources: true Check that all required CatalogSource resources are annotated for workload partitioning ( PreferredDuringScheduling ) by running the following command: USD oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{" -- "}{.metadata.annotations.target\.workload\.openshift\.io/management}{"\n"}{end}' Example output certified-operators -- {"effect": "PreferredDuringScheduling"} community-operators -- {"effect": "PreferredDuringScheduling"} ran-operators 1 redhat-marketplace -- {"effect": "PreferredDuringScheduling"} redhat-operators -- {"effect": "PreferredDuringScheduling"} 1 CatalogSource resources that are not annotated are also returned. In this example, the ran-operators CatalogSource resource is not annotated and does not have the PreferredDuringScheduling annotation. Note In a properly configured vDU cluster, only a single annotated catalog source is listed. Check that all applicable OpenShift Container Platform Operator namespaces are annotated for workload partitioning. This includes all Operators installed with core OpenShift Container Platform and the set of additional Operators included in the reference DU tuning configuration. Run the following command: USD oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{" -- "}{.metadata.annotations.workload\.openshift\.io/allowed}{"\n"}{end}' Example output default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- management Important Additional Operators must not be annotated for workload partitioning. In the output from the command, additional Operators should be listed without any value on the right side of the -- separator. Check that the ClusterLogging configuration is correct. 
Run the following commands: Validate that the appropriate input and output logs are configured: USD oc get -n openshift-logging ClusterLogForwarder instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: "2022-07-19T21:51:41Z" generation: 1 name: instance namespace: openshift-logging resourceVersion: "1030342" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open ... Check that the curation schedule is appropriate for your application: USD oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: "2022-07-07T18:22:56Z" generation: 1 name: instance namespace: openshift-logging resourceVersion: "235796" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed ... Check that the web console is disabled ( managementState: Removed ) by running the following command: USD oc get consoles.operator.openshift.io cluster -o jsonpath="{ .spec.managementState }" Example output Removed Check that chronyd is disabled on the cluster node by running the following commands: USD oc debug node/<node_name> Check the status of chronyd on the node: sh-4.4# chroot /host sh-4.4# systemctl status chronyd Example output ● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5) Check that the PTP interface is successfully synchronized to the primary clock using a remote shell connection to the linuxptp-daemon container and the PTP Management Client ( pmc ) tool: Set the USDPTP_POD_NAME variable with the name of the linuxptp-daemon pod by running the following command: USD PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name) Run the following command to check the sync status of the PTP device: USD oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET' Example output sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 Run the following pmc command to check the PTP clock status: USD oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP' Example output sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 
0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00 1 master_offset should be between -100 and 100 ns. 2 Indicates that the PTP clock is synchronized to a master, and the local clock is not the grandmaster clock. Check that the expected master offset value corresponding to the value in /var/run/ptp4l.0.config is found in the linuxptp-daemon-container log: USD oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container Example output phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533 Check that the SR-IOV configuration is correct by running the following commands: Check that the disableDrain value in the SriovOperatorConfig resource is set to true : USD oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath="{.spec.disableDrain}{'\n'}" Example output true Check that the SriovNetworkNodeState sync status is Succeeded by running the following command: USD oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath="{.items[*].status.syncStatus}{'\n'}" Example output Succeeded Verify that the expected number and configuration of virtual functions ( Vfs ) under each interface configured for SR-IOV is present and correct in the .status.interfaces field. For example: USD oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml Example output apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState ... status: interfaces: ... - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: "8086" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: "8086" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: "8086" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: "8086" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: "8086" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: "8086" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: "8086" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: "8086" vfID: 7 Check that the cluster performance profile is correct. The cpu and hugepages sections will vary depending on your hardware configuration. 
Run the following command: USD oc get PerformanceProfile openshift-node-performance-profile -o yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: "2022-07-19T21:51:31Z" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: "33558" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "True" type: Available - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "True" type: Upgradeable - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "False" type: Progressing - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "False" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile Note CPU settings are dependent on the number of cores available on the server and should align with workload partitioning settings. hugepages configuration is server and application dependent. Check that the PerformanceProfile was successfully applied to the cluster by running the following command: USD oc get performanceprofile openshift-node-performance-profile -o jsonpath="{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\n'}{end}" Example output Available -- True Upgradeable -- True Progressing -- False Degraded -- False Check the Tuned performance patch settings by running the following command: USD oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml Example output apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: "2022-07-18T10:33:52Z" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: "34024" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch 1 The cpu list in cmdline=nohz_full= will vary based on your hardware configuration. Check that cluster networking diagnostics are disabled by running the following command: USD oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}' Example output true Check that the Kubelet housekeeping interval is tuned to slower rate. This is set in the containerMountNS machine config. 
Run the following command: USD oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION Example output Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Check that Grafana and alertManagerMain are disabled and that the Prometheus retention period is set to 24h by running the following command: USD oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath="{ .data.config\.yaml }" Example output grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h Use the following commands to verify that Grafana and alertManagerMain routes are not found in the cluster: USD oc get route -n openshift-monitoring alertmanager-main USD oc get route -n openshift-monitoring grafana Both queries should return Error from server (NotFound) messages. Check that there is a minimum of 4 CPUs allocated as reserved for each of the PerformanceProfile , Tuned performance-patch, workload partitioning, and kernel command line arguments by running the following command: USD oc get performanceprofile -o jsonpath="{ .items[0].spec.cpu.reserved }" Example output 0-3 Note Depending on your workload requirements, you might require additional reserved CPUs to be allocated. 19.8. Advanced managed cluster configuration with SiteConfig resources You can use SiteConfig custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time. 19.8.1. Customizing extra installation manifests in the ZTP GitOps pipeline You can define a set of extra manifests for inclusion in the installation phase of the zero touch provisioning (ZTP) GitOps pipeline. These manifests are linked to the SiteConfig custom resources (CRs) and are applied to the cluster during installation. Including MachineConfig CRs at install time makes the installation process more efficient. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a set of extra manifest CRs that the ZTP pipeline uses to customize the cluster installs. In your custom /siteconfig directory, create an /extra-manifest folder for your extra manifests. The following example illustrates a sample /siteconfig with /extra-manifest folder: siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml └── extra-manifest └── 01-example-machine-config.yaml Add your custom extra manifest CRs to the siteconfig/extra-manifest directory. In your SiteConfig CR, enter the directory name in the extraManifestPath field, for example: clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" extraManifestPath: extra-manifest Save the SiteConfig CRs and /extra-manifest CRs and push them to the site configuration repo. The ZTP pipeline appends the CRs in the /extra-manifest directory to the default set of extra manifests during cluster provisioning. 19.8.2. Filtering custom resources using SiteConfig filters By using filters, you can easily customize SiteConfig custom resources (CRs) to include or exclude other CRs for use in the installation phase of the zero touch provisioning (ZTP) GitOps pipeline. You can specify an inclusionDefault value of include or exclude for the SiteConfig CR, along with a list of the specific extraManifest RAN CRs that you want to include or exclude. 
Setting inclusionDefault to include makes the ZTP pipeline apply all the files in /source-crs/extra-manifest during installation. Setting inclusionDefault to exclude does the opposite. You can exclude individual CRs from the /source-crs/extra-manifest folder that are otherwise included by default. The following example configures a custom single-node OpenShift SiteConfig CR to exclude the /source-crs/extra-manifest/03-sctp-machine-config-worker.yaml CR at installation time. Some additional optional filtering scenarios are also described. Prerequisites You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure To prevent the ZTP pipeline from applying the 03-sctp-machine-config-worker.yaml CR file, apply the following YAML in the SiteConfig CR: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "site1-sno-du" namespace: "site1-sno-du" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.12" sshPublicKey: "<ssh_public_key>" clusters: - clusterName: "site1-sno-du" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml The ZTP pipeline skips the 03-sctp-machine-config-worker.yaml CR during installation. All other CRs in /source-crs/extra-manifest are applied. Save the SiteConfig CR and push the changes to the site configuration repository. The ZTP pipeline monitors and adjusts what CRs it applies based on the SiteConfig filter instructions. Optional: To prevent the ZTP pipeline from applying all the /source-crs/extra-manifest CRs during cluster installation, apply the following YAML in the SiteConfig CR: - clusterName: "site1-sno-du" extraManifests: filter: inclusionDefault: exclude Optional: To exclude all the /source-crs/extra-manifest RAN CRs and instead include a custom CR file during installation, edit the custom SiteConfig CR to set the custom manifests folder and the include file, for example: clusters: - clusterName: "site1-sno-du" extraManifestPath: "<custom_manifest_folder>" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml 1 Replace <custom_manifest_folder> with the name of the folder that contains the custom installation CRs, for example, user-custom-manifest/ . 2 Set inclusionDefault to exclude to prevent the ZTP pipeline from applying the files in /source-crs/extra-manifest during installation. The following example illustrates the custom folder structure: siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml 19.9. Advanced managed cluster configuration with PolicyGenTemplate resources You can use PolicyGenTemplate CRs to deploy custom functionality in your managed clusters. 19.9.1. Deploying additional changes to clusters If you require cluster configuration changes outside of the base GitOps ZTP pipeline configuration, there are three options: Apply the additional configuration after the ZTP pipeline is complete When the GitOps ZTP pipeline deployment is complete, the deployed cluster is ready for application workloads. At this point, you can install additional Operators and apply configurations specific to your requirements. 
Ensure that additional configurations do not negatively affect the performance of the platform or allocated CPU budget. Add content to the ZTP library The base source custom resources (CRs) that you deploy with the GitOps ZTP pipeline can be augmented with custom content as required. Create extra manifests for the cluster installation Extra manifests are applied during installation and make the installation process more efficient. Important Providing additional source CRs or modifying existing source CRs can significantly impact the performance or CPU profile of OpenShift Container Platform. Additional resources Customizing extra installation manifests in the ZTP GitOps pipeline 19.9.2. Using PolicyGenTemplate CRs to override source CRs content PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the ztp-site-generate container. You can think of PolicyGenTemplate CRs as a logical merge or patch to the base CR. Use PolicyGenTemplate CRs to update a single field of the base CR, or overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR. The following example procedure describes how to update fields in the generated PerformanceProfile CR for the reference configuration based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml file. Use the procedure as a basis for modifying other parts of the PolicyGenTemplate based on your requirements. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD. Procedure Review the baseline source CR for existing content. You can review the source CRs listed in the reference PolicyGenTemplate CRs by extracting them from the zero touch provisioning (ZTP) container. Create an /out folder: USD mkdir -p ./out Extract the source CRs: USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 extract /home/ztp --tar | tar x -C ./out Review the baseline PerformanceProfile CR in ./out/source-crs/PerformanceProfile.yaml : apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: additionalKernelArgs: - "idle=poll" - "rcupdate.rcu_normal_after_boot=0" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" realTimeKernel: enabled: true Note Any fields in the source CR which contain USD... are removed from the generated CR if they are not provided in the PolicyGenTemplate CR. Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file. The following example PolicyGenTemplate CR stanza supplies appropriate CPU specifications, sets the hugepages configuration, and adds a new field that sets globallyDisableIrqLoadBalancing to false. 
- fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: "2-19,22-39" reserved: "0-1,20-21" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP argo CD application. Example output The ZTP application generates an RHACM policy that contains the generated PerformanceProfile CR. The contents of that CR are derived by merging the metadata and spec contents from the PerformanceProfile entry in the PolicyGenTemplate onto the source CR. The resulting CR has the following content: --- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true Note In the /source-crs folder that you extract from the ztp-site-generate container, the USD syntax is not used for template substitution as implied by the syntax. Rather, if the policyGen tool sees the USD prefix for a string and you do not specify a value for that field in the related PolicyGenTemplate CR, the field is omitted from the output CR entirely. An exception to this is the USDmcp variable in /source-crs YAML files that is substituted with the specified value for mcp from the PolicyGenTemplate CR. For example, in example/policygentemplates/group-du-standard-ranGen.yaml , the value for mcp is worker : spec: bindingRules: group-du-standard: "" mcp: "worker" The policyGen tool replace instances of USDmcp with worker in the output CRs. 19.9.3. Adding custom content to the GitOps ZTP pipeline Perform the following procedure to add new content to the ZTP pipeline. Procedure Create a subdirectory named source-crs in the directory that contains the kustomization.yaml file for the PolicyGenTemplate custom resource (CR). Add your custom CRs to the source-crs subdirectory, as shown in the following example: example └── policygentemplates ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs 1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml 1 The source-crs subdirectory must be in the same directory as the kustomization.yaml file. Important To use your own resources, ensure that the custom CR names differ from the default source CRs provided in the ZTP container. Update the required PolicyGenTemplate CRs to include references to the content you added in the source-crs/custom-crs and source-crs/elasticsearch directories. 
For example: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-dev" namespace: "ztp-clusters" spec: bindingRules: dev: "true" mcp: "master" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: "group-dev-cluster-log-ns" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: "group-dev-cluster-log-operator-group" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: "group-dev-cluster-log-sub" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: "group-dev-lso-ns" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: "group-dev-lso-operator-group" - fileName: StorageSubscription.yaml remediationAction: inform policyName: "group-dev-lso-sub" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: "group-dev-pao-ns" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: "group-dev-pao-cat-source" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: "group-dev-pao-sub" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: "group-dev-elasticsearch-ns" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: "group-dev-elasticsearch-operator-group" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: "group-dev-apiserver-config" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: "group-dev-disable-nic-lldp" 1 2 Set the fileName field to include the relative path to the file from the /source-crs parent directory. Commit the PolicyGenTemplate change in Git, and then push to the Git repository that is monitored by the GitOps ZTP Argo CD policies application. Update the ClusterGroupUpgrade CR to include the changed PolicyGenTemplate and save it as cgu-test.yaml . The following example shows a generated cgu-test.yaml file. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated ClusterGroupUpgrade CR by running the following command: USD oc apply -f cgu-test.yaml Verification Check that the updates have succeeded by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies 19.9.4. Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs Use Red Hat Advanced Cluster Management (RHACM) installed on a hub cluster to monitor and report on whether your managed clusters are compliant with applied policies. RHACM uses policy templates to apply predefined policy controllers and policies. Policy controllers are Kubernetes custom resource definition (CRD) instances. You can override the default policy evaluation intervals with PolicyGenTemplate custom resources (CRs). 
You configure duration settings that define how long a ConfigurationPolicy CR can be in a state of policy compliance or non-compliance before RHACM re-evaluates the applied cluster policies. The zero touch provisioning (ZTP) policy generator generates ConfigurationPolicy CR policies with pre-defined policy evaluation intervals. The default value for the noncompliant state is 10 seconds. The default value for the compliant state is 10 minutes. To disable the evaluation interval, set the value to never . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure To configure the evaluation interval for all policies in a PolicyGenTemplate CR, add evaluationInterval to the spec field, and then set the appropriate compliant and noncompliant values. For example: spec: evaluationInterval: compliant: 30m noncompliant: 20s To configure the evaluation interval for the spec.sourceFiles object in a PolicyGenTemplate CR, add evaluationInterval to the sourceFiles field, for example: spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: "sriov-sub-policy" evaluationInterval: compliant: never noncompliant: 10s Commit the PolicyGenTemplate CRs files in the Git repository and push your changes. Verification Check that the managed spoke cluster policies are monitored at the expected intervals. Log in as a user with cluster-admin privileges on the managed cluster. Get the pods that are running in the open-cluster-management-agent-addon namespace. Run the following command: USD oc get pods -n open-cluster-management-agent-addon Example output NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d Check the applied policies are being evaluated at the expected interval in the logs for the config-policy-controller pod: USD oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb Example output 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-config-policy-config"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-common-compute-1-catalog-policy-config"} 19.9.5. Signalling ZTP cluster deployment completion with validator inform policies Create a validator inform policy that signals when the zero touch provisioning (ZTP) installation and configuration of the deployed cluster is complete. This policy can be used for deployments of single-node OpenShift clusters, three-node clusters, and standard clusters. Procedure Create a standalone PolicyGenTemplate custom resource (CR) that contains the source file validatorCRs/informDuValidator.yaml . You only need one standalone PolicyGenTemplate CR for each cluster type. 
For example, this CR applies a validator inform policy for single-node OpenShift clusters: Example single-node cluster validator inform policy CR (group-du-sno-validator-ranGen.yaml) apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno-validator" 1 namespace: "ztp-group" 2 spec: bindingRules: group-du-sno: "" 3 bindingExcludedRules: ztp-done: "" 4 mcp: "master" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: "du-policy" 7 1 The name of PolicyGenTemplates object. This name is also used as part of the names for the placementBinding , placementRule , and policy that are created in the requested namespace . 2 This value should match the namespace used in the group PolicyGenTemplates . 3 The group-du-* label defined in bindingRules must exist in the SiteConfig files. 4 The label defined in bindingExcludedRules must be`ztp-done:`. The ztp-done label is used in coordination with the Topology Aware Lifecycle Manager. 5 mcp defines the MachineConfigPool object that is used in the source file validatorCRs/informDuValidator.yaml . It should be master for single node and three-node cluster deployments and worker for standard cluster deployments. 6 Optional. The default value is inform . 7 This value is used as part of the name for the generated RHACM policy. The generated validator policy for the single node example is group-du-sno-validator-du-policy . Commit the PolicyGenTemplate CR file in your Git repository and push the changes. Additional resources Upgrading GitOps ZTP Preparing the GitOps ZTP site configuration repository 19.9.6. Configuring PTP events with PolicyGenTemplate CRs You can use the GitOps ZTP pipeline to configure PTP events that use HTTP or AMQP transport. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 19.9.6.1. Configuring PTP events that use HTTP transport You can configure PTP events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml , group-du-sno-ranGen.yaml , or group-du-standard-ranGen.yaml files according to your requirements: In .sourceFiles , add the PtpOperatorConfig CR file that configures the transport host: - fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043 Note In OpenShift Container Platform 4.12 or later, you do not need to set the transportHost field in the PtpOperatorConfig resource when you use HTTP transport with PTP events. Configure the linuxptp and phc2sys for the PTP clock type and interface. 
For example, add the following stanza into .sourceFiles : - fileName: PtpConfigSlave.yaml 1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1" 2 ptp4lOpts: "-2 -s --summary_interval -4" 3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs 1 Can be one of PtpConfigMaster.yaml , PtpConfigSlave.yaml , or PtpConfigSlaveCvl.yaml depending on your requirements. PtpConfigSlaveCvl.yaml configures linuxptp services for an Intel E810 Columbiaville NIC. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml , use PtpConfigSlave.yaml . 2 Device specific interface name. 3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. 4 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP. Additional resources Using PolicyGenTemplate CRs to override source CRs content 19.9.6.2. Configuring PTP events that use AMQP transport You can configure PTP events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. 
Procedure Add the following YAML into .spec.sourceFiles in the common-ranGen.yaml file to configure the AMQP Operator: #AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml , group-du-sno-ranGen.yaml , or group-du-standard-ranGen.yaml files according to your requirements: In .sourceFiles , add the PtpOperatorConfig CR file that configures the AMQ transport host to the config-policy : - fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: "amqp://amq-router.amq-router.svc.cluster.local" Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles : - fileName: PtpConfigSlave.yaml 1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1" 2 ptp4lOpts: "-2 -s --summary_interval -4" 3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs 1 Can be one PtpConfigMaster.yaml , PtpConfigSlave.yaml , or PtpConfigSlaveCvl.yaml depending on your requirements. PtpConfigSlaveCvl.yaml configures linuxptp services for an Intel E810 Columbiaville NIC. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml , use PtpConfigSlave.yaml . 2 Device specific interface name. 3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. 4 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Apply the following PolicyGenTemplate changes to your specific site YAML files, for example, example-sno-site.yaml : In .sourceFiles , add the Interconnect CR file that configures the AMQ router to the config-policy : - fileName: AmqInstance.yaml policyName: "config-policy" Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP. Additional resources Installing the AMQ messaging bus 19.9.7. Configuring bare-metal events with PolicyGenTemplate CRs You can use the GitOps ZTP pipeline to configure bare-metal events that use HTTP or AMQP transport. Note HTTP transport is the default transport for PTP and bare-metal events. 
Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 19.9.7.1. Configuring bare-metal events that use HTTP transport You can configure bare-metal events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Configure the Bare Metal Event Relay Operator by adding the following YAML to spec.sourceFiles in the common-ranGen.yaml file: # Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy" Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file: - fileName: HardwareEvent.yaml 1 policyName: "config-policy" spec: nodeSelector: {} transportHost: "http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043" logLevel: "info" 1 Each baseboard management controller (BMC) requires a single HardwareEvent CR only. Note In OpenShift Container Platform 4.12 or later, you do not need to set the transportHost field in the HardwareEvent custom resource (CR) when you use HTTP transport with bare-metal events. Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy bare-metal events to new sites with GitOps ZTP. Create the Redfish Secret by running the following command: USD oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>" Additional resources Installing the Bare Metal Event Relay using the CLI Creating the bare-metal event and Secret CRs 19.9.7.2. Configuring bare-metal events that use AMQP transport You can configure bare-metal events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. 
Procedure To configure the AMQ Interconnect Operator and the Bare Metal Event Relay Operator, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file: # AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" # Bare Metal Event Rely operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy" Add the Interconnect CR to .spec.sourceFiles in the site configuration file, for example, the example-sno-site.yaml file: - fileName: AmqInstance.yaml policyName: "config-policy" Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file: - fileName: HardwareEvent.yaml policyName: "config-policy" spec: nodeSelector: {} transportHost: "amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local" 1 logLevel: "info" 1 The transportHost URL is composed of the existing AMQ Interconnect CR name and namespace . For example, in transportHost: "amqp://amq-router.amq-router.svc.cluster.local" , the AMQ Interconnect name and namespace are both set to amq-router . Note Each baseboard management controller (BMC) requires a single HardwareEvent resource only. Commit the PolicyGenTemplate change in Git, and then push the changes to your site configuration repository to deploy bare-metal events monitoring to new sites using GitOps ZTP. Create the Redfish Secret by running the following command: USD oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>" 19.9.8. Configuring the Image Registry Operator for local caching of images OpenShift Container Platform manages image caching using a local registry. In edge computing use cases, clusters are often subject to bandwidth restrictions when communicating with centralized image registries, which might result in long image download times. Long download times are unavoidable during initial deployment. Over time, there is a risk that CRI-O will erase the /var/lib/containers/storage directory in the case of an unexpected shutdown. To address long image download times, you can create a local image registry on remote managed clusters using GitOps ZTP. This is useful in Edge computing scenarios where clusters are deployed at the far edge of the network. Before you can set up the local image registry with GitOps ZTP, you need to configure disk partitioning in the SiteConfig CR that you use to install the remote managed cluster. After installation, you configure the local image registry using a PolicyGenTemplate CR. Then, the ZTP pipeline creates Persistent Volume (PV) and Persistent Volume Claim (PVC) CRs and patches the imageregistry configuration. Note The local image registry can only be used for user application images and cannot be used for the OpenShift Container Platform or Operator Lifecycle Manager operator images. Additional resources OpenShift Container Platform registry overview 19.9.8.1. 
Configuring disk partitioning with SiteConfig Configure disk partitioning for a managed cluster using a SiteConfig CR and GitOps Zero Touch Provisioning (ZTP). The disk partition details in the SiteConfig CR must match the underlying disk. Important You must complete this procedure at installation time. Prerequisites Install Butane. Procedure Create the storage.bu file by using the following example YAML file: variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation fails. 3 Specify the size of the partition. If the value is too small, the deployments fails. Convert the storage.bu file to an Ignition file by running the following command: USD butane storage.bu Example output {"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}} Use a tool such as JSON Pretty Print to convert the output into JSON format. Copy the output into the .spec.clusters.nodes.ignitionConfigOverride field in the SiteConfig CR: [...] spec: clusters: - nodes: - ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } [...] Note If the .spec.clusters.nodes.ignitionConfigOverride field does not exist, create it. 
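The Butane conversion and copy steps can also be scripted. The following is a minimal sketch and not part of the reference procedure; it assumes that butane and jq are available on the workstation and that the Butane file is named storage.bu as in the example above:
# Convert the Butane config to Ignition JSON and pretty-print it for review.
butane storage.bu | jq . > storage.ign.json
# Re-compact the JSON to a single line if you prefer to paste it that way.
jq -c . storage.ign.json
# Sanity check: confirm the file is valid JSON before copying it into the
# .spec.clusters.nodes.ignitionConfigOverride field of the SiteConfig CR.
jq empty storage.ign.json && echo "storage.ign.json is valid JSON"
Because the ignitionConfigOverride value in the SiteConfig CR is written as a YAML block scalar ( | ), the JSON can be pasted without additional quoting or escaping.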
Verification During or after installation, verify on the hub cluster that the BareMetalHost object shows the annotation by running the following command: USD oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"] Example output "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}" After installation, check the single-node OpenShift disk status: Enter into a debug session on the single-node OpenShift node by running the following command. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-sno-node Set /host as the root directory within the debug shell by running the following command. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host List information about all available block devices by running the following command: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers Display information about the file system disk space usage by running the following command: # df -h Example output Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000 19.9.8.2. Configuring the image registry using PolicyGenTemplate CRs Use PolicyGenTemplate (PGT) CRs to apply the CRs required to configure the image registry and patch the imageregistry configuration. Prerequisites You have configured a disk partition in the managed cluster. You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data for use with GitOps Zero Touch Provisioning (ZTP). Procedure Configure the storage class, persistent volume claim, persistent volume, and image registry configuration in the appropriate PolicyGenTemplate CR. 
For example, to configure an individual site, add the following YAML to the file example-sno-site.yaml : sourceFiles: # storage class - fileName: StorageClass.yaml policyName: "sc-for-image-registry" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: "100" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: "pvc-for-image-registry" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: "pv-for-image-registry" metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" - fileName: ImageRegistryConfig.yaml policyName: "config-for-image-registry" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: storage: pvc: claim: "image-registry-pvc" 1 Set the appropriate value for ztp-deploy-wave depending on whether you are configuring image registries at the site, common, or group level. ztp-deploy-wave: "100" is suitable for development or testing because it allows you to group the referenced source files together. 2 In ImageRegistryPV.yaml , ensure that the spec.local.path field is set to /var/imageregistry to match the value set for the mount_point field in the SiteConfig CR. Important Do not set complianceType: mustonlyhave for the - fileName: ImageRegistryConfig.yaml configuration. This can cause the registry pod deployment to fail. Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. Verification Use the following steps to troubleshoot errors with the local image registry on the managed clusters: Verify successful login to the registry while logged in to the managed cluster. Run the following commands: Export the managed cluster name: USD cluster=<managed_cluster_name> Get the managed cluster kubeconfig details: USD oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster Download and export the cluster kubeconfig : USD oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster Verify access to the image registry from the managed cluster. See "Accessing the registry". Check that the Config CRD in the imageregistry.operator.openshift.io group instance is not reporting errors. Run the following command while logged in to the managed cluster: USD oc get image.config.openshift.io cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2021-10-08T19:02:39Z" generation: 5 name: cluster resourceVersion: "688678648" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice Check that the PersistentVolumeClaim on the managed cluster is populated with data. Run the following command while logged in to the managed cluster: USD oc get pv image-registry-sc Check that the registry* pod is running and is located under the openshift-image-registry namespace. 
USD oc get pods -n openshift-image-registry | grep registry* Example output cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d Check that the disk partition on the managed cluster is correct: Open a debug shell to the managed cluster: USD oc debug node/sno-1.example.com Run lsblk to check the host disk partitions: sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom 1 /var/imageregistry indicates that the disk is correctly partitioned. Additional resources Accessing the registry 19.9.9. Using hub templates in PolicyGenTemplate CRs Topology Aware Lifecycle Manager supports partial Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps ZTP. Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similiar configurations but with different values. Important Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means that you must create the objects referenced in the hub template in the same namespace where the policy is created. The following supported hub template functions are available for use in GitOps ZTP with TALM: fromConfigmap returns the value of the provided data key in the named ConfigMap resource. Note There is a 1 MiB size limit for ConfigMap CRs. The effective size for ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap : argocd.argoproj.io/sync-options: Replace=true base64enc returns the base64-encoded value of the input string base64dec returns the decoded value of the base64-encoded input string indent returns the input string with added indent spaces autoindent returns the input string with added indent spaces based on the spacing used in the parent template toInt casts and returns the integer value of the input value toBool converts the input string into a boolean value, and returns the boolean Various Open source community functions are also available for use with GitOps ZTP. Additional resources RHACM support for hub cluster templates in configuration policies 19.9.9.1. Example hub templates The following code examples are valid hub templates. Each of these templates return values from the ConfigMap CR with the name test-config in the default namespace. Returns the value with the key common-key : {{hub fromConfigMap "default" "test-config" "common-key" hub}} Returns a string by using the concatenated value of the .ManagedClusterName field and the string -name : {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) hub}} Casts and returns a boolean value from the concatenated value of the .ManagedClusterName field and the string -name : {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) | toBool hub}} Casts and returns an integer value from the concatenated value of the .ManagedClusterName field and the string -name : {{hub (printf "%s-name" .ManagedClusterName) | fromConfigMap "default" "test-config" | toInt hub}} 19.9.9.2. 
Specifying host NICs in site PolicyGenTemplate CRs with hub cluster templates You can manage host NICs in a single ConfigMap CR and use hub cluster templates to populate the custom NIC values in the generated policies that get applied to the cluster hosts. Using hub cluster templates in site PolicyGenTemplate (PGT) CRs means that you do not need to create multiple single site PGT CRs for each site. The following example shows you how to use a single ConfigMap CR to manage cluster host NICs and apply them to the cluster as policies by using a single PolicyGenTemplate site CR. Note When you use the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with name and namespace fields. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application. Procedure Create a ConfigMap resource that describes the NICs for a group of hosts. For example: apiVersion: v1 kind: ConfigMap metadata: name: sriovdata namespace: ztp-site annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: example-sno-du_fh-numVfs: "8" example-sno-du_fh-pf: ens1f0 example-sno-du_fh-priority: "10" example-sno-du_fh-vlan: "140" example-sno-du_mh-numVfs: "8" example-sno-du_mh-pf: ens3f0 example-sno-du_mh-priority: "10" example-sno-du_mh-vlan: "150" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size. Note The ConfigMap must be in the same namespace with the policy that has the hub template substitution. Commit the ConfigMap CR in Git, and then push to the Git repository being monitored by the Argo CD application. Create a site PGT CR that uses templates to pull the required data from the ConfigMap object.
For example: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "site" namespace: "ztp-site" spec: remediationAction: inform bindingRules: group-du-sno: "" mcp: "master" sourceFiles: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-fh" spec: resourceName: du_fh vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-vlan" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-fh" spec: deviceType: netdevice isRdma: true nicSelector: pfNames: - '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-pf" .ManagedClusterName) | autoindent hub}}' numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-numVfs" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-priority" .ManagedClusterName) | toInt hub}}' resourceName: du_fh - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" spec: resourceName: du_mh vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-vlan" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-mh" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-pf" .ManagedClusterName) hub}}' numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-numVfs" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-priority" .ManagedClusterName) | toInt hub}}' resourceName: du_mh Commit the site PolicyGenTemplate CR in Git and push to the Git repository that is monitored by the ArgoCD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs". 19.9.9.3. Specifying VLAN IDs in group PolicyGenTemplate CRs with hub cluster templates You can manage VLAN IDs for managed clusters in a single ConfigMap CR and use hub cluster templates to populate the VLAN IDs in the generated policies that get applied to the clusters. The following example shows you how to manage VLAN IDs in a single ConfigMap CR and apply them in individual cluster policies by using a single PolicyGenTemplate group CR. Note When using the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with name and namespace fields. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a ConfigMap CR that describes the VLAN IDs for a group of cluster hosts. For example: apiVersion: v1 kind: ConfigMap metadata: name: site-data namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: site-1-vlan: "101" site-2-vlan: "234" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size.
Note The ConfigMap must be in the same namespace with the policy that has the hub template substitution. Commit the ConfigMap CR in Git, and then push to the Git repository being monitored by the Argo CD application. Create a group PGT CR that uses a hub template to pull the required VLAN IDs from the ConfigMap object. For example, add the following YAML snippet to the group PGT CR: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data" (printf "%s-vlan" .ManagedClusterName) | toInt hub}}' Commit the group PolicyGenTemplate CR in Git, and then push to the Git repository being monitored by the Argo CD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs". 19.9.9.4. Syncing new ConfigMap changes to existing PolicyGenTemplate CRs Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a PolicyGenTemplate CR that pulls information from a ConfigMap CR using hub cluster templates. Procedure Update the contents of your ConfigMap CR, and apply the changes in the hub cluster. To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following: Option 1: Delete the existing policy. ArgoCD uses the PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command: USD oc delete policy <policy_name> -n <policy_namespace> Option 2: Apply a special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value every time when you update the ConfigMap . For example: USD oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1" Note You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing . Optional: If it exists, delete the ClusterGroupUpdate CR that contains the policy. For example: USD oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace> Create a new ClusterGroupUpdate CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml : apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated policy: USD oc apply -f cgr-example.yaml 19.10. Updating managed clusters with the Topology Aware Lifecycle Manager You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of multiple clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters. 19.10.1. About the Topology Aware Lifecycle Manager configuration The Topology Aware Lifecycle Manager (TALM) manages the deployment of Red Hat Advanced Cluster Management (RHACM) policies for one or more OpenShift Container Platform clusters. Using TALM in a large network of clusters allows the phased rollout of policies to the clusters in limited batches. 
This helps to minimize possible service disruptions when updating. With TALM, you can control the following actions: The timing of the update The number of RHACM-managed clusters The subset of managed clusters to apply the policies to The update order of the clusters The set of policies remediated to the cluster The order of policies remediated to the cluster The assignment of a canary cluster For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) offers the following features: Create a backup of a deployment before an upgrade Pre-caching images for clusters with limited bandwidth TALM supports the orchestration of the OpenShift Container Platform y-stream and z-stream updates, and day-two operations on y-streams and z-streams. 19.10.2. About managed policies used with Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) uses RHACM policies for cluster updates. TALM can be used to manage the rollout of any policy CR where the remediationAction field is set to inform . Supported use cases include the following: Manual user creation of policy CRs Automatically generated policies from the PolicyGenTemplate custom resource definition (CRD) For policies that update an Operator subscription with manual approval, TALM provides additional functionality that approves the installation of the updated Operator. For more information about managed policies, see Policy Overview in the RHACM documentation. For more information about the PolicyGenTemplate CRD, see the "About the PolicyGenTemplate CRD" section in "Configuring managed clusters with policies and PolicyGenTemplate resources". 19.10.3. Installing the Topology Aware Lifecycle Manager by using the web console You can use the OpenShift Container Platform web console to install the Topology Aware Lifecycle Manager. Prerequisites Install the latest version of the RHACM Operator. Set up a hub cluster with disconnected registry. Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators → OperatorHub . Search for the Topology Aware Lifecycle Manager from the list of available Operators, and then click Install . Keep the default selection of Installation mode ["All namespaces on the cluster (default)"] and Installed Namespace ("openshift-operators") to ensure that the Operator is installed properly. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators → Installed Operators page. Check that the Operator is installed in the All Namespaces namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads → Pods page and check the logs in any containers in the cluster-group-upgrades-controller-manager pod that are reporting issues. 19.10.4. Installing the Topology Aware Lifecycle Manager by using the CLI You can use the OpenShift CLI ( oc ) to install the Topology Aware Lifecycle Manager (TALM). Prerequisites Install the OpenShift CLI ( oc ). Install the latest version of the RHACM Operator. Set up a hub cluster with disconnected registry. Log in as a user with cluster-admin privileges.
Procedure Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, talm-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: "stable" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f talm-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-operators Example output NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.12.x Topology Aware Lifecycle Manager 4.12.x Succeeded Verify that the TALM is up and running: USD oc get deploy -n openshift-operators Example output NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s 19.10.5. About the ClusterGroupUpgrade CR The Topology Aware Lifecycle Manager (TALM) builds the remediation plan from the ClusterGroupUpgrade CR for a group of clusters. You can define the following specifications in a ClusterGroupUpgrade CR: Clusters in the group Blocking ClusterGroupUpgrade CRs Applicable list of managed policies Number of concurrent updates Applicable canary updates Actions to perform before and after the update Update timing You can control the start time of an update using the enable field in the ClusterGroupUpgrade CR. For example, if you have a scheduled maintenance window of four hours, you can prepare a ClusterGroupUpgrade CR with the enable field set to false . You can set the timeout by configuring the spec.remediationStrategy.timeout setting as follows: spec: remediationStrategy: maxConcurrency: 1 timeout: 240 You can use the batchTimeoutAction to determine what happens if an update fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or abort to stop policy remediation for all clusters. Once the timeout elapses, TALM removes all enforce policies to ensure that no further updates are made to clusters. To apply the changes, you set the enable field to true . For more information see the "Applying update policies to managed clusters" section. As TALM works through remediation of the policies to the specified clusters, the ClusterGroupUpgrade CR can report true or false statuses for a number of conditions. Note After TALM completes a cluster update, the cluster does not update again under the control of the same ClusterGroupUpgrade CR. You must create a new ClusterGroupUpgrade CR in the following cases: When you need to update the cluster again When the cluster changes to non-compliant with the inform policy after being updated 19.10.5.1. Selecting clusters TALM builds a remediation plan and selects clusters based on the following fields: The clusterLabelSelector field specifies the labels of the clusters that you want to update. This consists of a list of the standard label selectors from k8s.io/apimachinery/pkg/apis/meta/v1 . Each selector in the list uses either label value pairs or label expressions. Matches from each selector are added to the final list of clusters along with the matches from the clusterSelector field and the cluster field. The clusters field specifies a list of clusters to update. The canaries field specifies the clusters for canary updates.
The maxConcurrency field specifies the number of clusters to update in a batch. You can use the clusters , clusterLabelSelector , and clusterSelector fields together to create a combined list of clusters. The remediation plan starts with the clusters listed in the canaries field. Each canary cluster forms a single-cluster batch. Sample ClusterGroupUpgrade CR with the enable field set to false apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: 1 - spoke1 enable: false 2 managedPolicies: 3 - talm-policy preCaching: false remediationStrategy: 4 canaries: 5 - spoke1 maxConcurrency: 2 6 timeout: 240 clusterLabelSelectors: 7 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 8 status: 9 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 10 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 11 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: 1 Defines the list of clusters to update. 2 The enable field is set to false . 3 Lists the user-defined set of policies to remediate. 4 Defines the specifics of the cluster updates. 5 Defines the clusters for canary updates. 6 Defines the maximum number of concurrent updates in a batch. The number of remediation batches is the number of canary clusters, plus the number of clusters, except the canary clusters, divided by the maxConcurrency value. The clusters that are already compliant with all the managed policies are excluded from the remediation plan. 7 Displays the parameters for selecting clusters. 8 Controls what happens if a batch times out. Possible values are abort or continue . If unspecified, the default is continue . 9 Displays information about the status of the updates. 10 The ClustersSelected condition shows that all selected clusters are valid. 11 The Validated condition shows that all selected clusters have been validated. Note Any failure during the update of a canary cluster stops the update process. When the remediation plan is successfully created, you can set the enable field to true and TALM starts to update the non-compliant clusters with the specified managed policies. Note You can only make changes to the spec fields if the enable field of the ClusterGroupUpgrade CR is set to false . 19.10.5.2. Validating TALM checks that all specified managed policies are available and correct, and uses the Validated condition to report the status and reasons as follows: true Validation is completed. false Policies are missing or invalid, or an invalid platform image has been specified. 19.10.5.3. Pre-caching Clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed.
On single-node OpenShift clusters, you can use pre-caching to avoid this. The container image pre-caching starts when you create a ClusterGroupUpgrade CR with the preCaching field set to true . TALM uses the PrecacheSpecValid condition to report status information as follows: true The pre-caching spec is valid and consistent. false The pre-caching spec is incomplete. TALM uses the PrecachingSucceeded condition to report status information as follows: true TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters. false Pre-caching is still in progress for one or more clusters or has failed for all clusters. For more information, see the "Using the container image pre-cache feature" section. 19.10.5.4. Creating a backup For single-node OpenShift, TALM can create a backup of a deployment before an update. If the update fails, you can recover the version and restore a cluster to a working state without requiring a reprovision of applications. To use the backup feature, you first create a ClusterGroupUpgrade CR with the backup field set to true . To ensure that the contents of the backup are up to date, the backup is not taken until you set the enable field in the ClusterGroupUpgrade CR to true . TALM uses the BackupSucceeded condition to report the status and reasons as follows: true Backup is completed for all clusters or the backup run has completed but failed for one or more clusters. If backup fails for any cluster, the update fails for that cluster but proceeds for all other clusters. false Backup is still in progress for one or more clusters or has failed for all clusters. For more information, see the "Creating a backup of cluster resources before upgrade" section. 19.10.5.5. Updating clusters TALM enforces the policies following the remediation plan. Enforcing the policies for subsequent batches starts immediately after all the clusters of the current batch are compliant with all the managed policies. If the batch times out, TALM moves on to the next batch. The timeout value of a batch is the spec.timeout field divided by the number of batches in the remediation plan. TALM uses the Progressing condition to report the status and reasons as follows: true TALM is remediating non-compliant policies. false The update is not in progress. Possible reasons for this are: All clusters are compliant with all the managed policies. The update has timed out as policy remediation took too long. Blocking CRs are missing from the system or have not yet completed. The ClusterGroupUpgrade CR is not enabled. Backup is still in progress. Note The managed policies apply in the order that they are listed in the managedPolicies field in the ClusterGroupUpgrade CR. One managed policy is applied to the specified clusters at a time. When a cluster complies with the current policy, the next managed policy is applied to it.
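While an update is running, you can watch which batch TALM is currently remediating by querying the nested status fields shown in the following sample. The CR name and namespace in this sketch are the illustrative values used in the sample:
oc get cgu talm-cgu -n talm-namespace -o jsonpath='{.status.status.currentBatch}{"\n"}{.status.status.currentBatchRemediationProgress}{"\n"}'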
Sample ClusterGroupUpgrade CR in the Progressing state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z' 1 The Progressing fields show that TALM is in the process of remediating policies. 19.10.5.6. Update status TALM uses the Succeeded condition to report the status and reasons as follows: true All clusters are compliant with the specified managed policies. false Policy remediation failed as there were no clusters available for remediation, or because policy remediation took too long for one of the following reasons: The current batch contains canary updates and the cluster in the batch does not comply with all the managed policies within the batch timeout. Clusters did not comply with the managed policies within the timeout value specified in the remediationStrategy field. 
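To quickly review which of these conditions are currently set on a ClusterGroupUpgrade CR, a jsonpath query similar to the following can be used; the CR name and namespace are placeholders that you substitute with your own values:
oc get cgu <cgu_name> -n <cgu_namespace> -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{" ("}{.reason}{")"}{"\n"}{end}'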
Sample ClusterGroupUpgrade CR in the Succeeded state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: "True" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: "True" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: "False" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: "True" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z' 2 In the Progressing fields, the status is false as the update has completed; clusters are compliant with all the managed policies. 3 The Succeeded fields show that the validations completed successfully. 1 The status field includes a list of clusters and their respective statuses. The status of a cluster can be complete or timedout . Sample ClusterGroupUpgrade CR in the timedout state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z' 1 If a cluster's state is timedout , the currentPolicy field shows the name of the policy and the policy status. 2 The status for succeeded is false and the message indicates that policy remediation took too long. 19.10.5.7. Blocking ClusterGroupUpgrade CRs You can create multiple ClusterGroupUpgrade CRs and control their order of application. 
For example, if you create ClusterGroupUpgrade CR C that blocks the start of ClusterGroupUpgrade CR A, then ClusterGroupUpgrade CR A cannot start until the status of ClusterGroupUpgrade CR C becomes UpgradeComplete . One ClusterGroupUpgrade CR can have multiple blocking CRs. In this case, all the blocking CRs must complete before the upgrade for the current CR can start. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the content of the ClusterGroupUpgrade CRs in the cgu-a.yaml , cgu-b.yaml , and cgu-c.yaml files. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 1 Defines the blocking CRs. The cgu-a update cannot start until cgu-c is complete. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 The cgu-b update cannot start until cgu-a is complete. 
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {} 1 The cgu-c update does not have any blocking CRs. TALM starts the cgu-c update when the enable field is set to true . Create the ClusterGroupUpgrade CRs by running the following command for each relevant CR: USD oc apply -f <name>.yaml Start the update process by running the following command for each relevant CR: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> \ --type merge -p '{"spec":{"enable":true}}' The following examples show ClusterGroupUpgrade CRs where the enable field is set to true : Example for cgu-a with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {} 1 Shows the list of blocking CRs. 
Example for cgu-b with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 Shows the list of blocking CRs. Example for cgu-c with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0 1 The cgu-c update does not have any blocking CRs. 19.10.6. Update policies on managed clusters The Topology Aware Lifecycle Manager (TALM) remediates a set of inform policies for the clusters specified in the ClusterGroupUpgrade CR. TALM remediates inform policies by making enforce copies of the managed RHACM policies. Each copied policy has its own corresponding RHACM placement rule and RHACM placement binding. One by one, TALM adds each cluster from the current batch to the placement rule that corresponds with the applicable managed policy. If a cluster is already compliant with a policy, TALM skips applying that policy on the compliant cluster. TALM then moves on to applying the policy to the non-compliant cluster. 
After TALM completes the updates in a batch, all clusters are removed from the placement rules associated with the copied policies. Then, the update of the next batch starts. If a spoke cluster does not report any compliant state to RHACM, the managed policies on the hub cluster can be missing status information that TALM needs. TALM handles these cases in the following ways: If a policy's status.compliant field is missing, TALM ignores the policy and adds a log entry. Then, TALM continues looking at the policy's status.status field. If a policy's status.status is missing, TALM produces an error. If a cluster's compliance status is missing in the policy's status.status field, TALM considers that cluster to be non-compliant with that policy. The ClusterGroupUpgrade CR's batchTimeoutAction determines what happens if an upgrade fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or specify abort to stop the policy remediation for all clusters. Once the timeout elapses, TALM removes all enforce policies to ensure that no further updates are made to clusters. Example upgrade policy apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.12.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.12 desiredUpdate: version: 4.12.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.12.4 remediationAction: inform severity: low remediationAction: inform For more information about RHACM policies, see Policy overview . Additional resources For more information about the PolicyGenTemplate CRD, see About the PolicyGenTemplate CRD . 19.10.6.1. Configuring Operator subscriptions for managed clusters that you install with TALM Topology Aware Lifecycle Manager (TALM) can only approve the install plan for an Operator if the Subscription custom resource (CR) of the Operator contains the status.state.AtLatestKnown field. Procedure Add the status.state.AtLatestKnown field to the Subscription CR of the Operator: Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: "stable" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1 1 The status.state: AtLatestKnown field is used for the latest Operator version available from the Operator catalog. Note When a new version of the Operator is available in the registry, the associated policy becomes non-compliant. Apply the changed Subscription policy to your managed clusters with a ClusterGroupUpgrade CR. 19.10.6.2. Applying update policies to managed clusters You can update your managed clusters by applying your policies. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the contents of the ClusterGroupUpgrade CR in the cgu-1.yaml file.
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5 1 The name of the policies to apply. 2 The list of clusters to update. 3 The maxConcurrency field signifies the number of clusters updated at the same time. 4 The update timeout in minutes. 5 Controls what happens if a batch times out. Possible values are abort or continue . If unspecified, the default is continue . Create the ClusterGroupUpgrade CR by running the following command: USD oc create -f cgu-1.yaml Check if the ClusterGroupUpgrade CR was created in the hub cluster by running the following command: USD oc get cgu --all-namespaces Example output NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled Check the status of the update by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ { "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Not enabled", 1 "reason": "NotEnabled", "status": "False", "type": "Progressing" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": {} } 1 The spec.enable field in the ClusterGroupUpgrade CR is set to false . 
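If you only want to see the remediation plan that TALM computed from the maxConcurrency value, rather than the full status shown above, you can narrow the query. This is a minimal sketch that reuses the cgu-1 name and default namespace from the example:
oc get cgu -n default cgu-1 -o jsonpath='{.status.remediationPlan}'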
Check the status of the policies by running the following command: USD oc get policies -A Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m 1 The spec.remediationAction field of policies currently applied on the clusters is set to enforce . The managed policies in inform mode from the ClusterGroupUpgrade CR remain in inform mode during the update. Change the value of the spec.enable field to true by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 \ --patch '{"spec":{"enable":true}}' --type=merge Verification Check the status of the update again by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ 1 { "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "All selected clusters are valid", "reason": "ClusterSelectionCompleted", "status": "True", "type": "ClustersSelected", "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "Completed validation", "reason": "ValidationCompleted", "status": "True", "type": "Validated", "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Remediating non-compliant policies", "reason": "InProgress", "status": "True", "type": "Progressing" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": { "currentBatch": 1, "currentBatchStartedAt": 
"2022-02-25T15:54:16Z", "remediationPlanForBatch": { "spoke1": 0, "spoke2": 1 }, "startedAt": "2022-02-25T15:54:16Z" } } 1 Reflects the update progress of the current batch. Run this command again to receive updated information about the progress. If the policies include Operator subscriptions, you can check the installation progress directly on the single-node cluster. Export the KUBECONFIG file of the single-node cluster you want to check the installation progress for by running the following command: USD export KUBECONFIG=<cluster_kubeconfig_absolute_path> Check all the subscriptions present on the single-node cluster and look for the one in the policy you are trying to install through the ClusterGroupUpgrade CR by running the following command: USD oc get subs -A | grep -i <subscription_name> Example output for cluster-logging policy NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable If one of the managed policies includes a ClusterVersion CR, check the status of platform updates in the current batch by running the following command against the spoke cluster: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.12.5 True True 43s Working towards 4.4.12.7: 71 of 735 done (9% complete) Check the Operator subscription by running the following command: USD oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath="{.status}" Check the install plans present on the single-node cluster that is associated with the desired subscription by running the following command: USD oc get installplan -n <subscription_namespace> Example output for cluster-logging Operator NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1 1 The install plans have their Approval field set to Manual and their Approved field changes from false to true after TALM approves the install plan. Note When TALM is remediating a policy containing a subscription, it automatically approves any install plans attached to that subscription. Where multiple install plans are needed to get the operator to the latest known version, TALM might approve multiple install plans, upgrading through one or more intermediate versions to get to the final version. Check if the cluster service version for the Operator of the policy that the ClusterGroupUpgrade is installing reached the Succeeded phase by running the following command: USD oc get csv -n <operator_namespace> Example output for OpenShift Logging Operator NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded 19.10.7. Creating a backup of cluster resources before upgrade For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) can create a backup of a deployment before an upgrade. If the upgrade fails, you can recover the version and restore a cluster to a working state without requiring a reprovision of applications. To use the backup feature you first create a ClusterGroupUpgrade CR with the backup field set to true . To ensure that the contents of the backup are up to date, the backup is not taken until you set the enable field in the ClusterGroupUpgrade CR to true . TALM uses the BackupSucceeded condition to report the status and reasons as follows: true Backup is completed for all clusters or the backup run has completed but failed for one or more clusters. If backup fails for any cluster, the update does not proceed for that cluster. 
false Backup is still in progress for one or more clusters or has failed for all clusters. The backup process running in the spoke clusters can have the following statuses: PreparingToStart The first reconciliation pass is in progress. The TALM deletes any spoke backup namespace and hub view resources that have been created in a failed upgrade attempt. Starting The backup prerequisites and backup job are being created. Active The backup is in progress. Succeeded The backup succeeded. BackupTimeout Artifact backup is partially done. UnrecoverableError The backup has ended with a non-zero exit code. Note If the backup of a cluster fails and enters the BackupTimeout or UnrecoverableError state, the cluster update does not proceed for that cluster. Updates to other clusters are not affected and continue. 19.10.7.1. Creating a ClusterGroupUpgrade CR with backup You can create a backup of a deployment before an upgrade on single-node OpenShift clusters. If the upgrade fails you can use the upgrade-recovery.sh script generated by Topology Aware Lifecycle Manager (TALM) to return the system to its preupgrade state. The backup consists of the following items: Cluster backup A snapshot of etcd and static pod manifests. Content backup Backups of folders, for example, /etc , /usr/local , /var/lib/kubelet . Changed files backup Any files managed by machine-config that have been changed. Deployment A pinned ostree deployment. Images (Optional) Any container images that are in use. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Install Red Hat Advanced Cluster Management (RHACM). Note It is highly recommended that you create a recovery partition. The following is an example SiteConfig custom resource (CR) for a recovery partition of 50 GB: nodes: - hostName: "node-1.example.com" role: "master" rootDeviceHints: hctl: "0:2:0:0" deviceName: /dev/sda ........ ........ #Disk /dev/sda: 893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/sda partitions: - mount_point: /var/recovery size: 51200 start: 800000 Procedure Save the contents of the ClusterGroupUpgrade CR with the backup and enable fields set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: true managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 To start the update, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check the status of the upgrade in the hub cluster by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "backup": { "clusters": [ "cnfdb2", "cnfdb1" ], "status": { "cnfdb1": "Succeeded", "cnfdb2": "Failed" 1 } }, "computedMaxConcurrency": 1, "conditions": [ { "lastTransitionTime": "2022-04-05T10:37:19Z", "message": "Backup failed for 1 cluster", 2 "reason": "PartiallyDone", 3 "status": "True", 4 "type": "Succeeded" } ], "precaching": { "spec": {} }, "status": {} 1 Backup has failed for one cluster. 2 The message confirms that the backup failed for one cluster. 3 The backup was partially successful. 4 The backup process has finished. 19.10.7.2. 
Recovering a cluster after a failed upgrade If an upgrade of a cluster fails, you can manually log in to the cluster and use the backup to return the cluster to its preupgrade state. There are two stages: Rollback If the attempted upgrade included a change to the platform OS deployment, you must roll back to the version before running the recovery script. Important A rollback is only applicable to upgrades from TALM and single-node OpenShift. This process does not apply to rollbacks from any other upgrade type. Recovery The recovery shuts down containers and uses files from the backup partition to relaunch containers and restore clusters. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Install Red Hat Advanced Cluster Management (RHACM). Log in as a user with cluster-admin privileges. Run an upgrade that is configured for backup. Procedure Delete the previously created ClusterGroupUpgrade custom resource (CR) by running the following command: USD oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno Log in to the cluster that you want to recover. Check the status of the platform OS deployment by running the following command: USD ostree admin status Example outputs [root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9 1 The current deployment is pinned. A platform OS deployment rollback is not necessary. [root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca 1 This platform OS deployment is marked for rollback. 2 The deployment is pinned and can be rolled back. To trigger a rollback of the platform OS deployment, run the following command: USD rpm-ostree rollback -r The first phase of the recovery shuts down containers and restores files from the backup partition to the targeted directories. To begin the recovery, run the following command: USD /var/recovery/upgrade-recovery.sh When prompted, reboot the cluster by running the following command: USD systemctl reboot After the reboot, restart the recovery by running the following command: USD /var/recovery/upgrade-recovery.sh --resume Note If the recovery utility fails, you can retry with the --restart option: USD /var/recovery/upgrade-recovery.sh --restart Verification To check the status of the recovery run the following command: USD oc get clusterversion,nodes,clusteroperator Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.4.12.23 True False 86d Cluster version is 4.4.12.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.4.12.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.4.12.23 True False False 86d .............. 1 The cluster version is available and has the correct version. 2 The node status is Ready . 
3 The ClusterOperator object's availability is True . 19.10.8. Using the container image pre-cache feature Single-node OpenShift clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed. Note The time of the update is not set by TALM. You can apply the ClusterGroupUpgrade CR at the beginning of the update by manual application or by external automation. The container image pre-caching starts when the preCaching field is set to true in the ClusterGroupUpgrade CR. TALM uses the PrecacheSpecValid condition to report status information as follows: true The pre-caching spec is valid and consistent. false The pre-caching spec is incomplete. TALM uses the PrecachingSucceeded condition to report status information as follows: true TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters. false Pre-caching is still in progress for one or more clusters or has failed for all clusters. After a successful pre-caching process, you can start remediating policies. The remediation actions start when the enable field is set to true . If there is a pre-caching failure on a cluster, the upgrade fails for that cluster. The upgrade process continues for all other clusters that have a successful pre-cache. The pre-caching process can be in the following statuses: NotStarted This is the initial state all clusters are automatically assigned to on the first reconciliation pass of the ClusterGroupUpgrade CR. In this state, TALM deletes any pre-caching namespace and hub view resources of spoke clusters that remain from incomplete updates. TALM then creates a new ManagedClusterView resource for the spoke pre-caching namespace to verify its deletion in the PrecachePreparing state. PreparingToStart Cleaning up any remaining resources from incomplete updates is in progress. Starting Pre-caching job prerequisites and the job are created. Active The job is in "Active" state. Succeeded The pre-cache job succeeded. PrecacheTimeout The artifact pre-caching is partially done. UnrecoverableError The job ends with a non-zero exit code. 19.10.8.1. Creating a ClusterGroupUpgrade CR with pre-caching For single-node OpenShift, the pre-cache feature allows the required container images to be present on the spoke cluster before the update starts. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Procedure Save the contents of the ClusterGroupUpgrade CR with the preCaching field set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 1 The preCaching field is set to true , which enables TALM to pull the container images before starting the update. 
When you want to start pre-caching, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check if the ClusterGroupUpgrade CR exists in the hub cluster by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1 1 The CR is created. Check the status of the pre-caching task by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "conditions": [ { "lastTransitionTime": "2022-01-27T19:07:24Z", "message": "Precaching is required and not done", "reason": "InProgress", "status": "False", "type": "PrecachingSucceeded" }, { "lastTransitionTime": "2022-01-27T19:07:34Z", "message": "Pre-caching spec is valid and consistent", "reason": "PrecacheSpecIsWellFormed", "status": "True", "type": "PrecacheSpecValid" } ], "precaching": { "clusters": [ "cnfdb1" 1 "cnfdb2" ], "spec": { "platformImage": "image.example.io"}, "status": { "cnfdb1": "Active" "cnfdb2": "Succeeded"} } } 1 Displays the list of identified clusters. Check the status of the pre-caching job by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache Example output NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s Check the status of the ClusterGroupUpgrade CR by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output "conditions": [ { "lastTransitionTime": "2022-01-27T19:30:41Z", "message": "The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies", "reason": "UpgradeCompleted", "status": "True", "type": "Ready" }, { "lastTransitionTime": "2022-01-27T19:28:57Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingSucceeded" 1 } 1 The pre-cache tasks are done. 19.10.9. Troubleshooting the Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) is an OpenShift Container Platform Operator that remediates RHACM policies. When issues occur, use the oc adm must-gather command to gather details and logs and to take steps in debugging the issues. For more information about related topics, see the following documentation: Red Hat Advanced Cluster Management for Kubernetes 2.4 Support Matrix Red Hat Advanced Cluster Management Troubleshooting The "Troubleshooting Operator issues" section 19.10.9.1. General troubleshooting You can determine the cause of the problem by reviewing the following questions: Is the configuration that you are applying supported? Are the RHACM and the OpenShift Container Platform versions compatible? Are the TALM and RHACM versions compatible? Which of the following components is causing the problem? Section 19.10.9.3, "Managed policies" Section 19.10.9.4, "Clusters" Section 19.10.9.5, "Remediation Strategy" Section 19.10.9.6, "Topology Aware Lifecycle Manager" To ensure that the ClusterGroupUpgrade configuration is functional, you can do the following: Create the ClusterGroupUpgrade CR with the spec.enable field set to false . Wait for the status to be updated and go through the troubleshooting questions. If everything looks as expected, set the spec.enable field to true in the ClusterGroupUpgrade CR. 
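For example, assuming the ClusterGroupUpgrade CR is named lab-upgrade and lives in the default namespace (the names used in the troubleshooting examples that follow), a minimal sketch of enabling it once your checks pass is:

# CR name and namespace are examples; substitute your own
oc --namespace=default patch clustergroupupgrade.ran.openshift.io/lab-upgrade \
  --patch '{"spec":{"enable":true}}' --type=merge

This is the same merge-patch idiom that the platform and Operator update procedures later in this document use to toggle the enable and preCaching fields.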
Warning After you set the spec.enable field to true in the ClusterUpgradeGroup CR, the update procedure starts and you cannot edit the CR's spec fields anymore. 19.10.9.2. Cannot modify the ClusterUpgradeGroup CR Issue You cannot edit the ClusterUpgradeGroup CR after enabling the update. Resolution Restart the procedure by performing the following steps: Remove the old ClusterGroupUpgrade CR by running the following command: USD oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name> Check and fix the existing issues with the managed clusters and policies. Ensure that all the clusters are managed clusters and available. Ensure that all the policies exist and have the spec.remediationAction field set to inform . Create a new ClusterGroupUpgrade CR with the correct configurations. USD oc apply -f <ClusterGroupUpgradeCR_YAML> 19.10.9.3. Managed policies Checking managed policies on the system Issue You want to check if you have the correct managed policies on the system. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}' Example output ["group-du-sno-validator-du-validator-policy", "policy2-common-nto-sub-policy", "policy3-common-ptp-sub-policy"] Checking remediationAction mode Issue You want to check if the remediationAction field is set to inform in the spec of the managed policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h Checking policy compliance state Issue You want to check the compliance state of policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h 19.10.9.4. Clusters Checking if managed clusters are present Issue You want to check if the clusters in the ClusterGroupUpgrade CR are managed clusters. 
Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h Alternatively, check the TALM manager logs: Get the name of the TALM manager by running the following command: USD oc get pod -n openshift-operators Example output NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m Check the TALM manager logs by running the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 The error message shows that the cluster is not a managed cluster. Checking if managed clusters are available Issue You want to check if the managed clusters specified in the ClusterGroupUpgrade CR are available. Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2 1 2 The value of the AVAILABLE field is True for the managed clusters. Checking clusterLabelSelector Issue You want to check if the clusterLabelSelector field specified in the ClusterGroupUpgrade CR matches at least one of the managed clusters. Resolution Run the following command: USD oc get managedcluster --selector=upgrade=true 1 1 The label for the clusters you want to update is upgrade:true . Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Checking if canary clusters are present Issue You want to check if the canary clusters are present in the list of clusters. Example ClusterGroupUpgrade CR spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true Resolution Run the following commands: USD oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}' Example output ["spoke1", "spoke3"] Check if the canary clusters are present in the list of clusters that match clusterLabelSelector labels by running the following command: USD oc get managedcluster --selector=upgrade=true Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Note A cluster can be present in spec.clusters and also be matched by the spec.clusterLabelSelector label. Checking the pre-caching status on spoke clusters Check the status of pre-caching by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache 19.10.9.5. 
Remediation Strategy Checking if remediationStrategy is present in the ClusterGroupUpgrade CR Issue You want to check if the remediationStrategy is present in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}' Example output {"maxConcurrency":2, "timeout":240} Checking if maxConcurrency is specified in the ClusterGroupUpgrade CR Issue You want to check if the maxConcurrency is specified in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}' Example output 2 19.10.9.6. Topology Aware Lifecycle Manager Checking condition message and status in the ClusterGroupUpgrade CR Issue You want to check the value of the status.conditions field in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.conditions}' Example output {"lastTransitionTime":"2022-02-17T22:25:28Z", "message":"Missing managed policies:[policyList]", "reason":"NotAllManagedPoliciesExist", "status":"False", "type":"Validated"} Checking corresponding copied policies Issue You want to check if every policy from status.managedPoliciesForUpgrade has a corresponding policy in status.copiedPolicies . Resolution Run the following command: USD oc get cgu lab-upgrade -oyaml Example output status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default Checking if status.remediationPlan was computed Issue You want to check if status.remediationPlan is computed. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}' Example output [["spoke2", "spoke3"]] Errors in the TALM manager container Issue You want to check the logs of the manager container of TALM. Resolution Run the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 Displays the error. Clusters are not compliant with some policies after a ClusterGroupUpgrade CR has completed Issue The policy compliance status that TALM uses to decide if remediation is needed has not yet fully updated for all clusters. This may be because: The CGU was run too soon after a policy was created or updated. The remediation of a policy affects the compliance of subsequent policies in the ClusterGroupUpgrade CR. Resolution Create and apply a new ClusterGroupUpgrade CR with the same specification. Additional resources For information about troubleshooting, see OpenShift Container Platform Troubleshooting Operator Issues . For more information about using Topology Aware Lifecycle Manager in the ZTP workflow, see Updating managed policies with Topology Aware Lifecycle Manager . For more information about the PolicyGenTemplate CRD, see About the PolicyGenTemplate CRD 19.11.
Updating managed clusters in a disconnected environment with the Topology Aware Lifecycle Manager You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of OpenShift Container Platform managed clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters. Additional resources For more information about the Topology Aware Lifecycle Manager, see About the Topology Aware Lifecycle Manager . 19.11.1. Updating clusters in a disconnected environment You can upgrade managed clusters and Operators for managed clusters that you have deployed using GitOps ZTP and Topology Aware Lifecycle Manager (TALM). 19.11.1.1. Setting up the environment TALM can perform both platform and Operator updates. You must mirror both the platform image and Operator images that you want to update to in your mirror registry before you can use TALM to update your disconnected clusters. Complete the following steps to mirror the images: For platform updates, you must perform the following steps: Mirror the desired OpenShift Container Platform image repository. Ensure that the desired platform image is mirrored by following the "Mirroring the OpenShift Container Platform image repository" procedure linked in the Additional resources. Save the contents of the imageContentSources section in the imageContentSources.yaml file: Example output imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Save the image signature of the desired platform image that was mirrored. You must add the image signature to the PolicyGenTemplate CR for platform updates. To get the image signature, perform the following steps: Specify the desired OpenShift Container Platform tag by running the following command: USD OCP_RELEASE_NUMBER=<release_version> Specify the architecture of the server by running the following command: USD ARCHITECTURE=<server_architecture> Get the release image digest from Quay by running the following command USD DIGEST="USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')" Set the digest algorithm by running the following command: USD DIGEST_ALGO="USD{DIGEST%%:*}" Set the digest signature by running the following command: USD DIGEST_ENCODED="USD{DIGEST#*:}" Get the image signature from the mirror.openshift.com website by running the following command: USD SIGNATURE_BASE64=USD(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1" | base64 -w0 && echo) Save the image signature to the checksum-<OCP_RELEASE_NUMBER>.yaml file by running the following commands: USD cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF Prepare the update graph. You have two options to prepare the update graph: Use the OpenShift Update Service. For more information about how to set up the graph on the hub cluster, see Deploy the operator for OpenShift Update Service and Build the graph data init container . Make a local copy of the upstream graph. Host the update graph on an http or https server in the disconnected environment that has access to the managed cluster. 
To download the update graph, use the following command: USD curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.12 -o ~/upgrade-graph_stable-4.12 For Operator updates, you must perform the following task: Mirror the Operator catalogs. Ensure that the desired operator images are mirrored by following the procedure in the "Mirroring Operator catalogs for use with disconnected clusters" section. Additional resources For more information about how to update ZTP, see Upgrading GitOps ZTP . For more information about how to mirror an OpenShift Container Platform image repository, see Mirroring the OpenShift Container Platform image repository . For more information about how to mirror Operator catalogs for disconnected clusters, see Mirroring Operator catalogs for use with disconnected clusters . For more information about how to prepare the disconnected environment and mirroring the desired image repository, see Preparing the disconnected environment . For more information about update channels and releases, see Understanding update channels and releases . 19.11.1.2. Performing a platform update You can perform a platform update with the TALM. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update ZTP to the latest version. Provision one or more managed clusters with ZTP. Mirror the desired image repository. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Create a PolicyGenTemplate CR for the platform update: Save the following contents of the PolicyGenTemplate CR in the du-upgrade.yaml file. Example of PolicyGenTemplate for platform update apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: "platform-upgrade-prep" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: "platform-upgrade-prep" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: "platform-upgrade" metadata: name: version spec: channel: "stable-4.12" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.12 desiredUpdate: version: 4.12.4 status: history: - version: 4.12.4 state: "Completed" 1 The ConfigMap CR contains the signature of the desired release image to update to. 2 Shows the image signature of the desired OpenShift Container Platform release. Get the signature from the checksum-USD{OCP_RELEASE_NUMBER}.yaml file you saved when following the procedures in the "Setting up the environment" section. 3 Shows the mirror repository that contains the desired OpenShift Container Platform image. Get the mirrors from the imageContentSources.yaml file that you saved when following the procedures in the "Setting up the environment" section. 4 Shows the ClusterVersion CR to trigger the update. The channel , upstream , and desiredVersion fields are all required for image pre-caching. The PolicyGenTemplate CR generates two policies: The du-upgrade-platform-upgrade-prep policy does the preparation work for the platform update. 
It creates the ConfigMap CR for the desired release image signature, creates the image content source of the mirrored release image repository, and updates the cluster version with the desired update channel and the update graph reachable by the managed cluster in the disconnected environment. The du-upgrade-platform-upgrade policy is used to perform platform upgrade. Add the du-upgrade.yaml file contents to the kustomization.yaml file located in the ZTP Git repository for the PolicyGenTemplate CRs and push the changes to the Git repository. ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster. Check the created policies by running the following command: USD oc get policies -A | grep platform-upgrade Create the ClusterGroupUpdate CR for the platform update with the spec.enable field set to false . Save the content of the platform update ClusterGroupUpdate CR with the du-upgrade-platform-upgrade-prep and the du-upgrade-platform-upgrade policies and the target clusters to the cgu-platform-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false Apply the ClusterGroupUpdate CR to the hub cluster by running the following command: USD oc apply -f cgu-platform-upgrade.yml Optional: Pre-cache the images for the platform update. Enable pre-caching in the ClusterGroupUpdate CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster: USD oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}' Start the platform update: Enable the cgu-platform-upgrade policy and disable pre-caching by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Additional resources For more information about mirroring the images in a disconnected environment, see Preparing the disconnected environment . 19.11.1.3. Performing an Operator update You can perform an Operator update with the TALM. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update ZTP to the latest version. Provision one or more managed clusters with ZTP. Mirror the desired index image, bundle images, and all Operator images referenced in the bundle images. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Update the PolicyGenTemplate CR for the Operator update. 
Update the du-upgrade PolicyGenTemplate CR with the following additional contents in the du-upgrade.yaml file: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.12 1 updateStrategy: 2 registryPoll: interval: 1h 1 The index image URL contains the desired Operator images. If the index images are always pushed to the same image name and tag, this change is not needed. 2 Set how frequently the Operator Lifecycle Manager (OLM) polls the index image for new Operator versions with the registryPoll.interval field. This change is not needed if a new index image tag is always pushed for y-stream and z-stream Operator updates. The registryPoll.interval field can be set to a shorter interval to expedite the update, however shorter intervals increase computational load. To counteract this, you can restore registryPoll.interval to the default value once the update is complete. This update generates one policy, du-upgrade-operator-catsrc-policy , to update the redhat-operators-disconnected catalog source with the new index images that contain the desired Operators images. Note If you want to use the image pre-caching for Operators and there are Operators from a different catalog source other than redhat-operators-disconnected , you must perform the following tasks: Prepare a separate catalog source policy with the new index image or registry poll interval update for the different catalog source. Prepare a separate subscription policy for the desired Operators that are from the different catalog source. For example, the desired SRIOV-FEC Operator is available in the certified-operators catalog source. To update the catalog source and the Operator subscription, add the following contents to generate two policies, du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: ... - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "fec-catsrc-policy" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: "subscriptions-fec-policy" spec: channel: "stable" source: certified-operators Remove the specified subscriptions channels in the common PolicyGenTemplate CR, if they exist. The default subscriptions channels from the ZTP image are used for the update. Note The default channel for the Operators applied through ZTP 4.12 is stable , except for the performance-addon-operator . As of OpenShift Container Platform 4.11, the performance-addon-operator functionality was moved to the node-tuning-operator . For the 4.10 release, the default channel for PAO is v4.10 . You can also specify the default channels in the common PolicyGenTemplate CR. Push the PolicyGenTemplate CRs updates to the ZTP Git repository. ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster. 
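The exact commands depend on your Git workflow; the following is a minimal sketch that assumes the PolicyGenTemplate CRs are kept in a policygentemplates/ directory and that ArgoCD tracks the main branch of your ZTP repository:

# adjust the path and branch to match your repository layout
git add policygentemplates/du-upgrade.yaml
git commit -m "Add catalog source and subscription policies for the Operator update"
git push origin main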
Check the created policies by running the following command: USD oc get policies -A | grep -E "catsrc-policy|subscription" Apply the required catalog source updates before starting the Operator update. Save the content of the ClusterGroupUpgrade CR named operator-upgrade-prep with the catalog source policies and the target managed clusters to the cgu-operator-upgrade-prep.yml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1 Apply the policy to the hub cluster by running the following command: USD oc apply -f cgu-operator-upgrade-prep.yml Monitor the update process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies -A | grep -E "catsrc-policy" Create the ClusterGroupUpgrade CR for the Operator update with the spec.enable field set to false . Save the content of the Operator update ClusterGroupUpgrade CR with the du-upgrade-operator-catsrc-policy policy and the subscription policies created from the common PolicyGenTemplate and the target clusters to the cgu-operator-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false 1 The policy is needed by the image pre-caching feature to retrieve the operator images from the catalog source. 2 The policy contains Operator subscriptions. If you have followed the structure and content of the reference PolicyGenTemplates , all Operator subscriptions are grouped into the common-subscriptions-policy policy. Note One ClusterGroupUpgrade CR can only pre-cache the images of the desired Operators defined in the subscription policy from one catalog source included in the ClusterGroupUpgrade CR. If the desired Operators are from different catalog sources, such as in the example of the SRIOV-FEC Operator, another ClusterGroupUpgrade CR must be created with du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy policies for the SRIOV-FEC Operator images pre-caching and update. Apply the ClusterGroupUpgrade CR to the hub cluster by running the following command: USD oc apply -f cgu-operator-upgrade.yml Optional: Pre-cache the images for the Operator update. Before starting image pre-caching, verify the subscription policy is NonCompliant at this point by running the following command: USD oc get policy common-subscriptions-policy -n <policy_namespace> Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d Enable pre-caching in the ClusterGroupUpgrade CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the process and wait for the pre-caching to complete. 
Check the status of pre-caching by running the following command on the managed cluster: USD oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}' Check if the pre-caching is completed before starting the update by running the following command: USD oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq Example output [ { "lastTransitionTime": "2022-03-08T20:49:08.000Z", "message": "The ClusterGroupUpgrade CR is not enabled", "reason": "UpgradeNotStarted", "status": "False", "type": "Ready" }, { "lastTransitionTime": "2022-03-08T20:55:30.000Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingDone" } ] Start the Operator update. Enable the cgu-operator-upgrade ClusterGroupUpgrade CR and disable pre-caching to start the Operator update by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Additional resources For more information about updating GitOps ZTP, see Upgrading GitOps ZTP . Troubleshooting missed Operator updates due to out-of-date policy compliance states . 19.11.1.3.1. Troubleshooting missed Operator updates due to out-of-date policy compliance states In some scenarios, Topology Aware Lifecycle Manager (TALM) might miss Operator updates due to an out-of-date policy compliance state. After a catalog source update, it takes time for the Operator Lifecycle Manager (OLM) to update the subscription status. The status of the subscription policy might continue to show as compliant while TALM decides whether remediation is needed. As a result, the Operator specified in the subscription policy does not get upgraded. To avoid this scenario, add another catalog source configuration to the PolicyGenTemplate and specify this configuration in the subscription for any Operators that require an update. Procedure Add a catalog source configuration in the PolicyGenTemplate resource: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v{product-version} updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olredhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY 1 Update the name for the new configuration. 2 Update the display name for the new configuration. 3 Update the index image URL. This fileName.spec.image field overrides any configuration in the DefaultCatsrc.yaml file. Update the Subscription resource to point to the new configuration for Operators that require an update: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namspace # ... spec: source: redhat-operators-disconnected-v2 1 # ... 
1 Enter the name of the additional catalog source configuration that you defined in the PolicyGenTemplate resource. 19.11.1.4. Performing a platform and an Operator update together You can perform a platform and an Operator update at the same time. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update ZTP to the latest version. Provision one or more managed clusters with ZTP. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Create the PolicyGenTemplate CR for the updates by following the steps described in the "Performing a platform update" and "Performing an Operator update" sections. Apply the prep work for the platform and the Operator update. Save the content of the ClusterGroupUpgrade CR with the policies for platform update preparation work, catalog source updates, and target clusters to the cgu-platform-operator-upgrade-prep.yml file, for example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true Apply the cgu-platform-operator-upgrade-prep.yml file to the hub cluster by running the following command: USD oc apply -f cgu-platform-operator-upgrade-prep.yml Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Create the ClusterGroupUpdate CR for the platform and the Operator update with the spec.enable field set to false . Save the contents of the platform and Operator update ClusterGroupUpdate CR with the policies and the target clusters to the cgu-platform-operator-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false 1 This is the platform update policy. 2 This is the policy containing the catalog source information for the Operators to be updated. It is needed for the pre-caching feature to determine which Operator images to download to the managed cluster. 3 This is the policy to update the Operators. Apply the cgu-platform-operator-upgrade.yml file to the hub cluster by running the following command: USD oc apply -f cgu-platform-operator-upgrade.yml Optional: Pre-cache the images for the platform and the Operator update. Enable pre-caching in the ClusterGroupUpgrade CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the managed cluster: USD oc get jobs,pods -n openshift-talm-pre-cache Check if the pre-caching is completed before starting the update by running the following command: USD oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}' Start the platform and Operator update. 
Enable the cgu-du-upgrade ClusterGroupUpgrade CR to start the platform and the Operator update by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Note The CRs for the platform and Operator updates can be created from the beginning by configuring the setting to spec.enable: true . In this case, the update starts immediately after pre-caching completes and there is no need to manually enable the CR. Both pre-caching and the update create extra resources, such as policies, placement bindings, placement rules, managed cluster actions, and managed cluster view, to help complete the procedures. Setting the afterCompletion.deleteObjects field to true deletes all these resources after the updates complete. 19.11.1.5. Removing Performance Addon Operator subscriptions from deployed clusters In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11 or later, these functions are part of the Node Tuning Operator. Do not install the Performance Addon Operator on clusters running OpenShift Container Platform 4.11 or later. If you upgrade to OpenShift Container Platform 4.11 or later, the Node Tuning Operator automatically removes the Performance Addon Operator. Note You need to remove any policies that create Performance Addon Operator subscriptions to prevent a re-installation of the Operator. The reference DU profile includes the Performance Addon Operator in the PolicyGenTemplate CR common-ranGen.yaml . To remove the subscription from deployed managed clusters, you must update common-ranGen.yaml . Note If you install Performance Addon Operator 4.10.3-5 or later on OpenShift Container Platform 4.11 or later, the Performance Addon Operator detects the cluster version and automatically hibernates to avoid interfering with the Node Tuning Operator functions. However, to ensure best performance, remove the Performance Addon Operator from your OpenShift Container Platform 4.11 clusters. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for ArgoCD. Update to OpenShift Container Platform 4.11 or later. Log in as a user with cluster-admin privileges. Procedure Change the complianceType to mustnothave for the Performance Addon Operator namespace, Operator group, and subscription in the common-ranGen.yaml file. - fileName: PaoSubscriptionNS.yaml policyName: "subscriptions-policy" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: "subscriptions-policy" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: "subscriptions-policy" complianceType: mustnothave Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The status of the common-subscriptions-policy policy changes to Non-Compliant . Apply the change to your target clusters by using the Topology Aware Lifecycle Manager. For more information about rolling out configuration changes, see the "Additional resources" section. Monitor the process. 
When the status of the common-subscriptions-policy policy for a target cluster is Compliant , the Performance Addon Operator has been removed from the cluster. Get the status of the common-subscriptions-policy by running the following command: USD oc get policy -n ztp-common common-subscriptions-policy Delete the Performance Addon Operator namespace, Operator group, and subscription CRs from .spec.sourceFiles in the common-ranGen.yaml file. Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The policy remains compliant. Additional resources For more information about the TALM pre-caching workflow, see Using the container image pre-cache feature . 19.11.2. About the auto-created ClusterGroupUpgrade CR for ZTP TALM has a controller called ManagedClusterForCGU that monitors the Ready state of the ManagedCluster CRs on the hub cluster and creates the ClusterGroupUpgrade CRs for ZTP (zero touch provisioning). For any managed cluster in the Ready state without a "ztp-done" label applied, the ManagedClusterForCGU controller automatically creates a ClusterGroupUpgrade CR in the ztp-install namespace with its associated RHACM policies that are created during the ZTP process. TALM then remediates the set of configuration policies that are listed in the auto-created ClusterGroupUpgrade CR to push the configuration CRs to the managed cluster. Note If the managed cluster has no bound policies when the cluster becomes Ready , no ClusterGroupUpgrade CR is created. Example of an auto-created ClusterGroupUpgrade CR for ZTP apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: "46666836" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: "" 1 deleteClusterLabels: ztp-running: "" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: "" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240 1 Applied to the managed cluster when TALM completes the cluster configuration. 2 Applied to the managed cluster when TALM starts deploying the configuration policies. 19.12. Updating GitOps ZTP You can update the GitOps zero touch provisioning (ZTP) infrastructure independently from the hub cluster, Red Hat Advanced Cluster Management (RHACM), and the managed OpenShift Container Platform clusters. Note You can update the Red Hat OpenShift GitOps Operator when new versions become available. When updating the GitOps ZTP plugin, review the updated files in the reference configuration and ensure that the changes meet your requirements. 19.12.1. Overview of the GitOps ZTP update process You can update GitOps zero touch provisioning (ZTP) for a fully operational hub cluster running an earlier version of the GitOps ZTP infrastructure. The update process avoids impact on managed clusters. Note Any changes to policy settings, including adding recommended content, result in updated policies that must be rolled out to the managed clusters and reconciled.
At a high level, the strategy for updating the GitOps ZTP infrastructure is as follows: Label all existing clusters with the ztp-done label. Stop the ArgoCD applications. Install the new GitOps ZTP tools. Update required content and optional changes in the Git repository. Update and restart the application configuration. 19.12.2. Preparing for the upgrade Use the following procedure to prepare your site for the GitOps zero touch provisioning (ZTP) upgrade. Procedure Get the latest version of the GitOps ZTP container that has the custom resources (CRs) used to configure Red Hat OpenShift GitOps for use with GitOps ZTP. Extract the argocd/deployment directory by using the following commands: USD mkdir -p ./update USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./update The /update directory contains the following subdirectories: update/extra-manifest : contains the source CR files that the SiteConfig CR uses to generate the extra manifest configMap . update/source-crs : contains the source CR files that the PolicyGenTemplate CR uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. update/argocd/deployment : contains patches and YAML files to apply on the hub cluster for use in the step of this procedure. update/argocd/example : contains example SiteConfig and PolicyGenTemplate files that represent the recommended configuration. Update the clusters-app.yaml and policies-app.yaml files to reflect the name of your applications and the URL, branch, and path for your Git repository. If the upgrade includes changes that results in obsolete policies, the obsolete policies should be removed prior to performing the upgrade. Diff the changes between the configuration and deployment source CRs in the /update folder and Git repo where you manage your fleet site CRs. Apply and push the required changes to your site repository. Important When you update GitOps ZTP to the latest version, you must apply the changes from the update/argocd/deployment directory to your site repository. Do not use older versions of the argocd/deployment/ files. 19.12.3. Labeling the existing clusters To ensure that existing clusters remain untouched by the tool updates, label all existing managed clusters with the ztp-done label. Note This procedure only applies when updating clusters that were not provisioned with Topology Aware Lifecycle Manager (TALM). Clusters that you provision with TALM are automatically labeled with ztp-done . Procedure Find a label selector that lists the managed clusters that were deployed with zero touch provisioning (ZTP), such as local-cluster!=true : USD oc get managedcluster -l 'local-cluster!=true' Ensure that the resulting list contains all the managed clusters that were deployed with ZTP, and then use that selector to add the ztp-done label: USD oc label managedcluster -l 'local-cluster!=true' ztp-done= 19.12.4. Stopping the existing GitOps ZTP applications Removing the existing applications ensures that any changes to existing content in the Git repository are not rolled out until the new version of the tools is available. Use the application files from the deployment directory. If you used custom names for the applications, update the names in these files first. 
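To confirm which application names are currently in use before deleting anything, you can list the Argo CD applications on the hub cluster. A quick check, assuming the default openshift-gitops namespace used by the ZTP deployment files:

# the namespace is the OpenShift GitOps default; adjust it if your ArgoCD instance runs elsewhere
oc get applications.argoproj.io -n openshift-gitops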
Procedure Perform a non-cascaded delete on the clusters application to leave all generated resources in place: USD oc delete -f update/argocd/deployment/clusters-app.yaml Perform a cascaded delete on the policies application to remove all policies: USD oc patch -f policies-app.yaml -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge USD oc delete -f update/argocd/deployment/policies-app.yaml 19.12.5. Required changes to the Git repository When upgrading the ztp-site-generate container from an earlier release of GitOps ZTP to v4.10 or later, there are additional requirements for the contents of the Git repository. Existing content in the repository must be updated to reflect these changes. Make required changes to PolicyGenTemplate files: All PolicyGenTemplate files must be created in a Namespace prefixed with ztp . This ensures that the GitOps zero touch provisioning (ZTP) application is able to manage the policy CRs generated by GitOps ZTP without conflicting with the way Red Hat Advanced Cluster Management (RHACM) manages the policies internally. Add the kustomization.yaml file to the repository: All SiteConfig and PolicyGenTemplate CRs must be included in a kustomization.yaml file under their respective directory trees. For example: ├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml Note The files listed in the generator sections must contain either SiteConfig or PolicyGenTemplate CRs only. If your existing YAML files contain other CRs, for example, Namespace , these other CRs must be pulled out into separate files and listed in the resources section. The PolicyGenTemplate kustomization file must contain all PolicyGenTemplate YAML files in the generator section and Namespace CRs in the resources section. For example: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml The SiteConfig kustomization file must contain all SiteConfig YAML files in the generator section and any other CRs in the resources: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml Remove the pre-sync.yaml and post-sync.yaml files. In OpenShift Container Platform 4.10 and later, the pre-sync.yaml and post-sync.yaml files are no longer required. The update/deployment/kustomization.yaml CR manages the policies deployment on the hub cluster. Note There is a set of pre-sync.yaml and post-sync.yaml files under both the SiteConfig and PolicyGenTemplate trees. Review and incorporate recommended changes Each release may include additional recommended changes to the configuration applied to deployed clusters. Typically these changes result in lower CPU use by the OpenShift platform, additional features, or improved tuning of the platform. Review the reference SiteConfig and PolicyGenTemplate CRs applicable to the types of cluster in your network. These examples can be found in the argocd/example directory extracted from the GitOps ZTP container. 19.12.6. 
Installing the new GitOps ZTP applications Using the extracted argocd/deployment directory, and after ensuring that the applications point to your site Git repository, apply the full contents of the deployment directory. Applying the full contents of the directory ensures that all necessary resources for the applications are correctly configured. Procedure To patch the ArgoCD instance in the hub cluster by using the patch file that you previously extracted into the update/argocd/deployment/ directory, enter the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file update/argocd/deployment/argocd-openshift-gitops-patch.json To apply the contents of the argocd/deployment directory, enter the following command: USD oc apply -k update/argocd/deployment 19.12.7. Rolling out the GitOps ZTP configuration changes If any configuration changes were included in the upgrade due to implementing recommended changes, the upgrade process results in a set of policy CRs on the hub cluster in the Non-Compliant state. With the ZTP GitOps v4.10 and later ztp-site-generate container, these policies are set to inform mode and are not pushed to the managed clusters without an additional step by the user. This ensures that potentially disruptive changes to the clusters can be managed in terms of when the changes are made, for example, during a maintenance window, and how many clusters are updated concurrently. To roll out the changes, create one or more ClusterGroupUpgrade CRs as detailed in the TALM documentation. The CR must contain the list of Non-Compliant policies that you want to push out to the managed clusters as well as a list or selector of which clusters should be included in the update. Additional resources For information about the Topology Aware Lifecycle Manager (TALM), see About the Topology Aware Lifecycle Manager configuration . For information about creating ClusterGroupUpgrade CRs, see About the auto-created ClusterGroupUpgrade CR for ZTP . 19.13. Expanding single-node OpenShift clusters with GitOps ZTP You can expand single-node OpenShift clusters with GitOps ZTP. When you add worker nodes to single-node OpenShift clusters, the original single-node OpenShift cluster retains the control plane node role. Adding worker nodes does not require any downtime for the existing single-node OpenShift cluster. Note Although there is no specified limit on the number of worker nodes that you can add to a single-node OpenShift cluster, you must re-evaluate the reserved CPU allocation on the control plane node for the additional worker nodes. If you require workload partitioning on the worker node, you must deploy and remediate the managed cluster policies on the hub cluster before installing the node. This way, the workload partitioning MachineConfig objects are rendered and associated with the worker machine config pool before the GitOps ZTP workflow applies the MachineConfig ignition file to the worker node. It is recommended that you first remediate the policies, and then install the worker node. If you create the workload partitioning manifests after installing the worker node, you must drain the node manually and delete all the pods managed by daemon sets. When the managing daemon sets create the new pods, the new pods undergo the workload partitioning process. Important Adding worker nodes to single-node OpenShift clusters with GitOps ZTP is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources For more information about single-node OpenShift clusters tuned for vDU application deployments, see Reference configuration for deploying vDUs on single-node OpenShift . For more information about worker nodes, see Adding worker nodes to single-node OpenShift clusters . 19.13.1. Applying profiles to the worker node You can configure the additional worker node with a DU profile. You can apply a RAN distributed unit (DU) profile to the worker node cluster using the ZTP GitOps common, group, and site-specific PolicyGenTemplate resources. The GitOps ZTP pipeline that is linked to the ArgoCD policies application includes the following CRs that you can find in the out/argocd/example/policygentemplates folder when you extract the ztp-site-generate container: common-ranGen.yaml group-du-sno-ranGen.yaml example-sno-site.yaml ns.yaml kustomization.yaml Configuring the DU profile on the worker node is considered an upgrade. To initiate the upgrade flow, you must update the existing policies or create additional ones. Then, you must create a ClusterGroupUpgrade CR to reconcile the policies in the group of clusters. 19.13.2. (Optional) Ensuring PTP and SR-IOV daemon selector compatibility If the DU profile was deployed using the GitOps ZTP plugin version 4.11 or earlier, the PTP and SR-IOV Operators might be configured to place the daemons only on nodes labelled as master . This configuration prevents the PTP and SR-IOV daemons from operating on the worker node. If the PTP and SR-IOV daemon node selectors are incorrectly configured on your system, you must change the daemons before proceeding with the worker DU profile configuration. Procedure Check the daemon node selector settings of the PTP Operator on one of the spoke clusters: USD oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq Example output for PTP Operator {"daemonNodeSelector":{"node-role.kubernetes.io/master":""}} 1 1 If the node selector is set to master , the spoke was deployed with the version of the ZTP plugin that requires changes. Check the daemon node selector settings of the SR-IOV Operator on one of the spoke clusters: USD oc get sriovoperatorconfig/default -n \ openshift-sriov-network-operator -ojsonpath='{.spec}' | jq Example output for SR-IOV Operator {"configDaemonNodeSelector":{"node-role.kubernetes.io/worker":""},"disableDrain":false,"enableInjector":true,"enableOperatorWebhook":true} 1 1 If the node selector is set to master , the spoke was deployed with the version of the ZTP plugin that requires changes. 
In the group policy, add the following complianceType and spec entries: spec: - fileName: PtpOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: "" - fileName: SriovOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: "" Important Changing the daemonNodeSelector field causes temporary PTP synchronization loss and SR-IOV connectivity loss. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. 19.13.3. PTP and SR-IOV node selector compatibility The PTP configuration resources and SR-IOV network node policies use node-role.kubernetes.io/master: "" as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. However, the node selector must be changed to select both node types, for example with the "node-role.kubernetes.io/worker" label. 19.13.4. Using PolicyGenTemplate CRs to apply worker node policies to worker nodes You can create policies for worker nodes. Procedure Create the following policy template: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-sno-workers" namespace: "example-sno" spec: bindingRules: sites: "example-sno" 1 mcp: "worker" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: "config-policy" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: "4-47" reserved: "0-3" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: "config-policy" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker 1 The policies are applied to all clusters with this label. 2 The MCP field must be set to worker . 3 This generic MachineConfig CR is used to configure workload partitioning on the worker node. 4 The cpu.isolated and cpu.reserved fields must be configured for each particular hardware platform. 5 The cmdline_crash CPU set must match the cpu.isolated set in the PerformanceProfile section. 
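Note The two base64-encoded source values in the MachineConfigGeneric.yaml entry above decode to the CRI-O workload partitioning drop-in (/etc/crio/crio.conf.d/01-workload-partitioning) and the kubelet workload pinning file (/etc/kubernetes/openshift-workload-pinning), both pinned to CPUs 0-3 in this example. The following is a minimal sketch for regenerating and re-encoding those payloads when your reserved CPU set differs; the local file names are scratch files chosen for this illustration:

# Recreate the CRI-O workload partitioning drop-in with your reserved CPU set
cat > 01-workload-partitioning <<'EOF'
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0-3" }
EOF

# Recreate the kubelet workload pinning file with the same CPU set
cat > openshift-workload-pinning <<'EOF'
{
  "management": {
    "cpuset": "0-3"
  }
}
EOF

# Base64-encode the files and paste the output into the corresponding source: data URLs
base64 -w0 01-workload-partitioning; echo
base64 -w0 openshift-workload-pinning; echo

The cpuset value should match the cpu.reserved set in the PerformanceProfile entry of the same PolicyGenTemplate.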
A generic MachineConfig CR is used to configure workload partitioning on the worker node. You can generate the content of crio and kubelet configuration files. Add the created policy template to the Git repository monitored by the ArgoCD policies application. Add the policy in the kustomization.yaml file. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. To remediate the new policies on your spoke cluster, create a TALM custom resource: USD cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF 19.13.5. Adding worker nodes to single-node OpenShift clusters with GitOps ZTP You can add one or more worker nodes to existing single-node OpenShift clusters to increase available CPU resources in the cluster. Prerequisites Install and configure RHACM 2.6 or later in an OpenShift Container Platform 4.11 or later bare-metal hub cluster Install Topology Aware Lifecycle Manager in the hub cluster Install Red Hat OpenShift GitOps in the hub cluster Use the GitOps ZTP ztp-site-generate container image version 4.12 or later Deploy a managed single-node OpenShift cluster with GitOps ZTP Configure the Central Infrastructure Management as described in the RHACM documentation Configure the DNS serving the cluster to resolve the internal API endpoint api-int.<cluster_name>.<base_domain> Procedure If you deployed your cluster by using the example-sno.yaml SiteConfig manifest, add your new worker node to the spec.clusters['example-sno'].nodes list: nodes: - hostName: "example-node2.example.com" role: "worker" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node2-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" bootMode: "UEFI" nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up macAddress: "AA:BB:CC:DD:EE:11" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Create a BMC authentication secret for the new host, as referenced by the bmcCredentialsName field in the spec.nodes section of your SiteConfig file: apiVersion: v1 data: password: "password" username: "username" kind: Secret metadata: name: "example-node2-bmh-secret" namespace: example-sno type: Opaque Commit the changes in Git, and then push to the Git repository that is being monitored by the GitOps ZTP ArgoCD application. When the ArgoCD cluster application synchronizes, two new manifests generated by the ZTP plugin appear on the hub cluster: BareMetalHost NMStateConfig Important The cpuset field should not be configured for the worker node. Workload partitioning for worker nodes is added through management policies after the node installation is complete. Verification You can monitor the installation process in several ways.
Check if the preprovisioning images are created by running the following command: USD oc get ppimg -n example-sno Example output NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated Check the state of the bare-metal hosts: USD oc get bmh -n example-sno Example output NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1 1 The provisioning state indicates that node booting from the installation media is in progress. Continuously monitor the installation process: Watch the agent install process by running the following command: USD oc get agent -n example-sno --watch Example output NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done When the worker node installation is finished, the worker node certificates are approved automatically. At this point, the worker appears in the ManagedClusterInfo status. Run the following command to see the status: USD oc get managedclusterinfo/example-sno -n example-sno -o \ jsonpath='{range .status.nodeList[*]}{.name}{"\t"}{.conditions}{"\t"}{.labels}{"\n"}{end}' Example output example-sno [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/master":"","node-role.kubernetes.io/worker":""} example-node2 [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/worker":""} 19.14. Pre-caching images for single-node OpenShift deployments In environments with limited bandwidth where you use the GitOps zero touch provisioning (ZTP) solution to deploy a large number of clusters, you want to avoid downloading all the images that are required for bootstrapping and installing OpenShift Container Platform. The limited bandwidth at remote single-node OpenShift sites can cause long deployment times. The factory-precaching-cli tool allows you to pre-stage servers before shipping them to the remote site for ZTP provisioning. The factory-precaching-cli tool does the following: Downloads the RHCOS rootfs image that is required by the minimal ISO to boot. Creates a partition from the installation disk labelled as data . Formats the disk in xfs. Creates a GUID Partition Table (GPT) data partition at the end of the disk, where the size of the partition is configurable by the tool. Copies the container images required to install OpenShift Container Platform. Copies the container images required by ZTP to install OpenShift Container Platform. Optional: Copies Day-2 Operators to the partition. Important The factory-precaching-cli tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 19.14.1. Getting the factory-precaching-cli tool The factory-precaching-cli tool Go binary is publicly available in the Telco RAN tools container image . The factory-precaching-cli tool Go binary in the container image is executed on the server running an RHCOS live image using podman . If you are working in a disconnected environment or have a private registry, you need to copy the image there so you can download the image to the server. Procedure Pull the factory-precaching-cli tool image by running the following command: # podman pull quay.io/openshift-kni/telco-ran-tools:latest Verification To check that the tool is available, query the current version of the factory-precaching-cli tool Go binary: # podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v Example output factory-precaching-cli version 20221018.120852+main.feecf17 19.14.2. Booting from a live operating system image You can use the factory-precaching-cli tool with a RHCOS live ISO image to boot servers where only one disk is available and an external disk drive cannot be attached to the server. Warning RHCOS requires the disk not to be in use when the disk is about to be written with an RHCOS image. Depending on the server hardware, you can mount the RHCOS live ISO on the blank server using one of the following methods: Using the Dell RACADM tool on a Dell server. Using the HPONCFG tool on an HP server. Using the Redfish BMC API. Note It is recommended to automate the mounting procedure. To automate the procedure, you need to pull the required images and host them on a local HTTP server. Prerequisites You powered up the host. You have network connectivity to the host. Procedure This example procedure uses the Redfish BMC API to mount the RHCOS live ISO. Mount the RHCOS live ISO: Check virtual media status: USD curl --globoff -H "Content-Type: application/json" -H \ "Accept: application/json" -k -X GET --user USD{username_password} \ https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool Mount the ISO file as virtual media: USD curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku USD{username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Image": "http://[USDHTTPd_IP]/RHCOS-live.iso"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia Set the boot order to boot from the virtual media once: USD curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku USD{username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Boot":{ "BootSourceOverrideEnabled": "Once", "BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self Reboot and ensure that the server is booting from virtual media. Additional resources For more information about the butane utility, see About Butane . For more information about creating a custom live RHCOS ISO, see Creating a custom live RHCOS ISO for remote server access . For more information about using the Dell RACADM tool, see Integrated Dell Remote Access Controller 9 RACADM CLI Guide . For more information about using the HP HPONCFG tool, see Using HPONCFG . For more information about using the Redfish BMC API, see Booting from an HTTP-hosted ISO image using the Redfish API . 19.14.3.
Partitioning the disk To run the full pre-caching process, you have to boot from a live ISO and use the factory-precaching-cli tool from a container image to partition and pre-cache all the artifacts required. A live ISO or RHCOS live ISO is required because the disk must not be in use when the operating system (RHCOS) is written to the device during the provisioning. Single-disk servers can also be enabled with this procedure. Prerequisites You have a disk that is not partitioned. You have access to the quay.io/openshift-kni/telco-ran-tools:latest image. You have enough storage to install OpenShift Container Platform and pre-cache the required images. Procedure Verify that the disk is cleared: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk Erase any file system, RAID or partition table signatures from the device: # wipefs -a /dev/nvme0n1 Example output /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa Important The tool fails if the disk is not empty because it uses partition number 1 of the device for pre-caching the artifacts. 19.14.3.1. Creating the partition Once the device is ready, you create a single partition and a GPT partition table. The partition is automatically labeled as data and created at the end of the device. Otherwise, the partition will be overridden by the coreos-installer . Important The coreos-installer requires the partition to be created at the end of the device and to be labeled as data . Both requirements are necessary to save the partition when writing the RHCOS image to the disk. Prerequisites The container must run as privileged because it formats host devices. You have to mount the /dev folder so that the process can be executed inside the container. Procedure In the following example, the size of the partition is 250 GiB to allow pre-caching the DU profile for Day 2 Operators. Run the container as privileged and partition the disk: # podman run -v /dev:/dev --privileged \ --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli partition \ 1 -d /dev/nvme0n1 \ 2 -s 250 3 1 Specifies the partitioning function of the factory-precaching-cli tool. 2 Defines the device to partition. 3 Defines the size of the partition in GB. Check the storage information: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part Verification You must verify that the following requirements are met: The device has a GPT partition table The partition uses the last sectors of the device. The partition is correctly labeled as data . Query the disk status to verify that the disk is partitioned as expected: # gdisk -l /dev/nvme0n1 Example output GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data 19.14.3.2. Mounting the partition After verifying that the disk is partitioned correctly, you can mount the device into /mnt . Important It is recommended to mount the device into /mnt because that mounting point is used during ZTP preparation. Verify that the partition is formatted as xfs : # lsblk -f /dev/nvme0n1 Example output NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071 Mount the partition: # mount /dev/nvme0n1p1 /mnt/ Verification Check that the partition is mounted: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1 1 The mount point is /var/mnt because the /mnt folder in RHCOS is a link to /var/mnt . 19.14.4. Downloading the images The factory-precaching-cli tool allows you to download the following images to your partitioned server: OpenShift Container Platform images Operator images that are included in the distributed unit (DU) profile for 5G RAN sites Operator images from disconnected registries Note The list of available Operator images can vary in different OpenShift Container Platform releases. 19.14.4.1. Downloading with parallel workers The factory-precaching-cli tool uses parallel workers to download multiple images simultaneously. You can configure the number of workers with the --parallel or -p option. The default number is set to 80% of the available CPUs to the server. Note Your login shell may be restricted to a subset of CPUs, which reduces the CPUs available to the container. To remove this restriction, you can precede your commands with taskset 0xffffffff , for example: # taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help 19.14.4.2. Preparing to download the OpenShift Container Platform images To download OpenShift Container Platform container images, you need to know the multicluster engine version. When you use the --du-profile flag, you also need to specify the Red Hat Advanced Cluster Management (RHACM) version running in the hub cluster that is going to provision the single-node OpenShift. Prerequisites You have RHACM and the multicluster engine Operator installed. You partitioned the storage device. You have enough space for the images on the partitioned device. You connected the bare-metal server to the Internet. You have a valid pull secret. 
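Note As a quick, hedged check of the storage prerequisite before starting the download (assuming the data partition is still mounted at /mnt, which RHCOS resolves to /var/mnt as in the earlier partitioning steps):

# Confirm the data partition is mounted and has enough free space for the release images
df -h /mnt
lsblk -f /dev/nvme0n1p1

The device name /dev/nvme0n1p1 is the partition created in the earlier example; substitute your own device if it differs.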
Procedure Check the RHACM version and the multicluster engine version by running the following commands in the hub cluster: USD oc get csv -A | grep -i advanced-cluster-management Example output open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded USD oc get csv -A | grep -i multicluster-engine Example output multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded To access the container registry, copy a valid pull secret on the server to be installed: Create the .docker folder: USD mkdir /root/.docker Copy the valid pull in the config.json file to the previously created .docker/ folder: USD cp config.json /root/.docker/config.json 1 1 /root/.docker/config.json is the default path where podman checks for the login credentials for the registry. Note If you use a different registry to pull the required artifacts, you need to copy the proper pull secret. If the local registry uses TLS, you need to include the certificates from the registry as well. 19.14.4.3. Downloading the OpenShift Container Platform images The factory-precaching-cli tool allows you to pre-cache all the container images required to provision a specific OpenShift Container Platform release. Procedure Pre-cache the release by running the following command: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- \ factory-precaching-cli download \ 1 -r 4.12.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. Example output Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf ... Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f ... 
Summary: Release: 4.12.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83 Verification Check that all the images are compressed in the target folder of server: USD ls -l /mnt 1 1 It is recommended that you pre-cache the images in the /mnt folder. Example output -rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz 19.14.4.4. Downloading the Operator images You can also pre-cache Day-2 Operators used in the 5G Radio Access Network (RAN) Distributed Unit (DU) cluster configuration. The Day-2 Operators depend on the installed OpenShift Container Platform version. Important You need to include the RHACM hub and multicluster engine Operator versions by using the --acm-version and --mce-version flags so the factory-precaching-cli tool can pre-cache the appropriate containers images for RHACM and the multicluster engine Operator. 
Procedure Pre-cache the Operator images: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ 1 -r 4.12.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s 7 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. Example output Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 ... Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 ... Summary: Release: 4.12.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83 19.14.4.5. Pre-caching custom images in disconnected environments The --generate-imageset argument stops the factory-precaching-cli tool after the ImageSetConfiguration custom resource (CR) is generated. This allows you to customize the ImageSetConfiguration CR before downloading any images. After you customized the CR, you can use the --skip-imageset argument to download the images that you specified in the ImageSetConfiguration CR. You can customize the ImageSetConfiguration CR in the following ways: Add Operators and additional images Remove Operators and additional images Change Operator and catalog sources to local or disconnected registries Procedure Pre-cache the images: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ 1 -r 4.12.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s \ 7 --generate-imageset 8 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. 8 The --generate-imageset argument generates the ImageSetConfiguration CR only, which allows you to customize the CR. 
Example output Generated /mnt/imageset.yaml Example ImageSetConfiguration CR apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.0 1 maxVersion: 4.12.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.12' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.12 packages: - name: sriov-fec 11 channels: - name: 'stable' 1 The platform versions match the versions passed to the tool. 2 3 The versions of RHACM and the multicluster engine Operator match the versions passed to the tool. 4 5 6 7 8 9 10 11 The CR contains all the specified DU Operators. Customize the catalog resource in the CR: apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.12 packages: - name: sriov-fec channels: - name: 'stable' When you download images by using a local or disconnected registry, you have to first add certificates for the registries that you want to pull the content from. To avoid any errors, copy the registry certificate into your server: # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/. Then, update the certificates trust store: # update-ca-trust Mount the host /etc/pki folder into the factory-cli image: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download \ 1 -r 4.12.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s \ 7 --skip-imageset 8 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. 8 The --skip-imageset argument allows you to download the images that you specified in your customized ImageSetConfiguration CR. Download the images without generating a new imageSetConfiguration CR: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.12.0 \ --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt \ --img quay.io/custom/repository \ --du-profile -s \ --skip-imageset Additional resources To access the online Red Hat registries, see OpenShift installation customization tools . 
For more information about using the multicluster engine, see About cluster lifecycle with the multicluster engine operator . 19.14.5. Pre-caching images in ZTP The SiteConfig manifest defines how an OpenShift cluster is to be installed and configured. In the ZTP provisioning workflow, the factory-precaching-cli tool requires the following additional fields in the SiteConfig manifest: clusters.ignitionConfigOverride nodes.installerArgs nodes.ignitionConfigOverride Example SiteConfig with additional fields apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-5g-lab" namespace: "example-5g-lab" spec: baseDomain: "example.domain.redhat.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "img4.9.10-x86-64-appsub" sshPublicKey: "ssh-rsa ..." clusters: - clusterName: "sno-worker-0" clusterLabels: group-du-sno: "" common-411: true sites : "example-5g-lab" vendor: "OpenShift" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: "OVNKubernetes" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ "ignition": { "version": "3.1.0" }, "systemd": { "units": [ { "name": "var-mnt.mount", "enabled": true, "contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-images.service\nBindsTo=precache-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-images.service" }, { "name": "precache-images.service", "enabled": true, "contents": "[Unit]\nDescription=Extracts the precached images in discovery stage\nAfter=var-mnt.mount\nBefore=agent.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ai.sh\n#TimeoutStopSec=30\n\n[Install]\nWantedBy=multi-user.target default.target\nWantedBy=agent.service" } ] }, "storage": { "files": [ { "overwrite": true, "path": "/usr/local/bin/extract-ai.sh", "mode": 755, "user": { "name": "root" }, "contents": { "source": 
"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200" } }, { "overwrite": true, "path": "/usr/local/bin/agent-fix-bz1964591", "mode": 755, "user": { "name": "root" }, "contents": { "source": "data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true" } } ] } }' nodes: - hostName: "snonode.sno-worker-0.example.domain.redhat.com" role: "master" bmcAddress: "idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "worker0-bmh-secret" bootMACAddress: "e4:43:4b:bd:90:46" bootMode: "UEFI" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '["--save-partlabel", "data"]' ignitionConfigOverride: | { "ignition": { "version": "3.1.0" }, "systemd": { 
"units": [ { "name": "var-mnt.mount", "enabled": true, "contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-ocp-images.service\nBindsTo=precache-ocp-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-ocp-images.service" }, { "name": "precache-ocp-images.service", "enabled": true, "contents": "[Unit]\nDescription=Extracts the precached OCP images into containers storage\nAfter=var-mnt.mount\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ocp.sh\nTimeoutStopSec=60\n\n[Install]\nWantedBy=multi-user.target" } ] }, "storage": { "files": [ { "overwrite": true, "path": "/usr/local/bin/extract-ocp.sh", "mode": 755, "user": { "name": "root" }, "contents": { "source": "data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: "AA:BB:CC:11:22:33" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: "ens1f0" macAddress: "AA:BB:CC:11:22:33" 19.14.5.1. Understanding the clusters.ignitionConfigOverride field The clusters.ignitionConfigOverride field adds a configuration in Ignition format during the ZTP discovery stage. The configuration includes systemd services in the ISO mounted in virtual media. This way, the scripts are part of the discovery RHCOS live ISO and they can be used to load the Assisted Installer (AI) images. systemd services The systemd services are var-mnt.mount and precache-images.services . The precache-images.service depends on the disk partition to be mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ai.sh . 
extract-ai.sh The extract-ai.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally. agent-fix-bz1964591 The agent-fix-bz1964591 script is a workaround for an AI issue. To prevent AI from removing the images, which can force the agent.service to pull the images again from the registry, the agent-fix-bz1964591 script checks if the requested container images exist. 19.14.5.2. Understanding the nodes.installerArgs field The nodes.installerArgs field allows you to configure how the coreos-installer utility writes the RHCOS live ISO to disk. You need to instruct coreos-installer to save the disk partition labeled as data because the artifacts saved in the data partition are needed during the OpenShift Container Platform installation stage. The extra parameters are passed directly to the coreos-installer utility that writes the live RHCOS to disk. On the reboot, the operating system starts from the disk. You can pass several options to the coreos-installer utility: OPTIONS: ... -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL ... --save-partlabel <lx>... Save partitions with this label glob --save-partindex <id>... Save partitions with this number or range ... --insecure-ignition Allow Ignition URL without HTTPS or hash 19.14.5.3. Understanding the nodes.ignitionConfigOverride field Similarly to clusters.ignitionConfigOverride , the nodes.ignitionConfigOverride field allows the addition of configurations in Ignition format to the coreos-installer utility, but at the OpenShift Container Platform installation stage. When the RHCOS is written to disk, the extra configuration included in the ZTP discovery ISO is no longer available. During the discovery stage, the extra configuration is stored in the memory of the live OS. Note At this stage, the number of container images extracted and loaded is bigger than in the discovery stage. Depending on the OpenShift Container Platform release and whether you install the Day-2 Operators, the installation time can vary. At the installation stage, the var-mnt.mount and precache-ocp.service systemd services are used. precache-ocp.service The precache-ocp.service depends on the disk partition to be mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ocp.sh . Important To extract all the images before the OpenShift Container Platform installation, you must execute precache-ocp.service before executing the machine-config-daemon-pull.service and nodeip-configuration.service services. extract-ocp.sh The extract-ocp.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally. When you upload the SiteConfig and the optional PolicyGenTemplates custom resources (CRs) to the Git repo, which Argo CD is monitoring, you can start the ZTP workflow by syncing the CRs with the hub cluster. 19.14.6. Troubleshooting 19.14.6.1. Rendered catalog is invalid When you download images by using a local or disconnected registry, you might see the The rendered catalog is invalid error. This means that you are missing certificates for the new registry that you want to pull content from.
Note The factory-precaching-cli tool image is built on a UBI RHEL image. Certificate paths and locations are the same on RHCOS. Example error Generating list of pre-cached artifacts... error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror --ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying host error=failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run "oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME" for more information. error: error rendering new refs: render reference "eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11": error resolving name : failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority Procedure Copy the registry certificate into your server: # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/. Update the certificates trust store: # update-ca-trust Mount the host /etc/pki folder into the factory-cli image: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download -r 4.11.5 --acm-version 2.5.4 \ --mce-version 2.0.4 -f /mnt \--img quay.io/custom/repository --du-profile -s --skip-imageset
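Note Before rerunning the download, it can help to confirm that the registry CA was actually picked up by the host trust store that you mount into the container. A minimal sketch, using the certificate file and registry host from this example:

# Inspect the copied CA certificate
openssl x509 -in /etc/pki/ca-trust/source/anchors/eko4-ca.crt -noout -subject -issuer

# A successful TLS handshake (any HTTP status code) indicates the CA is trusted; a certificate error returns 000
curl -sI -o /dev/null -w '%{http_code}\n' https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/

If the curl check still reports a certificate error, verify that update-ca-trust ran after the certificate was copied and that the file is a PEM-encoded CA certificate.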
[ "export ISO_IMAGE_NAME=<iso_image_name> 1", "export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1", "export OCP_VERSION=<ocp_version> 1", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}", "wget http://USD(hostname)/USD{ISO_IMAGE_NAME}", "Saving to: rhcos-4.12.1-x86_64-live.x86_64.iso rhcos-4.12.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s", "oc edit AgentServiceConfig", "- cpuArchitecture: x86_64 openshiftVersion: \"4.12\" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso", "apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: <certificate> 2 registries.conf: | 3 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] location = <mirror_registry_url> 4 insecure = false mirror-by-digest-only = true", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> url: <iso_url> 3", "oc edit AgentServiceConfig agent", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com", "oc debug node/<node_name>", "sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry>", "Login Succeeded!", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json", "oc patch argocd openshift-gitops -n openshift-gitops --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/repo/initContainers/0/image\", \"value\": \"<local_registry>/<ztp_site_generate_image_ref>\"}]'", "oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json", "oc apply -k out/argocd/deployment", "podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./out", "example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml", "grep -r \"ztp-deploy-wave\" out/source-crs", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv 
metadata: annotations: argocd.argoproj.io/sync-wave: \"1\" name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" spec: clusterRef: name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: \"{{ .Site.SshPublicKey }}\" proxy: \"{{ .Cluster.ProxySettings }}\" pullSecretRef: name: \"{{ .Site.PullSecretRef.Name }}\" ignitionConfigOverride: \"{{ .Cluster.IgnitionConfigOverride }}\" nmStateConfigLabelSelector: matchLabels: nmstate-label: \"{{ .Cluster.ClusterName }}\" additionalNTPSources: \"{{ .Cluster.AdditionalNTPSources }}\"", "~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml", "clusters: crTemplates: InfraEnv: \"InfraEnv-example.yaml\"", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "export CLUSTERNS=example-sno", "oc create namespace USDCLUSTERNS", "export CLUSTER=<clusterName>", "oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq", "curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'", "oc get AgentClusterInstall -n <cluster_name>", "oc get managedcluster", "oc describe -n openshift-gitops application clusters", "Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/siteconfigs/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not create extra-manifest ranSite1.extra-manifest3 stat extra-manifest3: no such file or directory 2021/11/26 17:21:40 Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-913473579: stat extra-manifest3: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-913473579; exit status 1: exit status 1 Type: ComparisonError", "Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"disableVirtualMediaTLS\": true}}'", "oc delete policy -n <namespace> <policy_name>", "oc delete -k out/argocd/deployment", "--- apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"common\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" 1 sourceFiles: 2 - fileName: SriovSubscription.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: SriovOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: PtpOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogNS.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogSubscription.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: StorageNS.yaml policyName: 
\"subscriptions-policy\" - fileName: StorageOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: StorageSubscription.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ReduceMonitoringFootprint.yaml policyName: \"config-policy\" - fileName: OperatorHub.yaml 3 policyName: \"config-policy\" - fileName: DefaultCatsrc.yaml 4 policyName: \"config-policy\" 5 metadata: name: redhat-operators spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: \"config-policy\" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno\" namespace: \"ztp-group\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -n 24\"", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq", "{ \"lastTransitionTime\": \"2022-11-09T07:28:09Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" }", "oc get policies -n USDCLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m", "export NS=<namespace>", "oc get policy -n USDNS", "oc describe -n openshift-gitops application policies", "Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 
17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError", "Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error", "oc get policy -n USDCLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d Ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d", "oc get placementrule -n USDNS", "oc get placementrule -n USDNS <placementRuleName> -o yaml", "oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq", "oc get policy -n USDCLUSTER", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install USDCLUSTER", "oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Ready\")]}'", "oc delete clustergroupupgrades -n ztp-install USDCLUSTER", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true", "- fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction:", "oc create -f cgu-remove.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get <kind> <changed_cr_name>", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h", "oc get <kind> <changed_cr_name>", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./out", "out └── argocd └── example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml", "mkdir -p ./site-install", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"<site_name>\" namespace: \"<site_name>\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" 1 clusterImageSetNameRef: \"openshift-4.12\" 2 sshPublicKey: \"ssh-rsa AAAA...\" 3 clusters: - clusterName: \"<site_name>\" networkType: \"OVNKubernetes\" clusterLabels: 4 common: true group-du-sno: \"\" sites : \"<site_name>\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" 5 nodes: - hostName: 
\"example-node.example.com\" 6 role: \"master\" bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ 7 bmcCredentialsName: name: \"bmh-secret\" 8 bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" 9 rootDeviceHints: wwn: \"0x11111000000asd123\" cpuset: \"0-1,52-53\" 10 nodeNetwork: 11 interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: 12 enabled: true address: - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator install site-1-sno.yaml /output", "site-install └── site-1-sno ├── site-1_agentclusterinstall_example-sno.yaml ├── site-1-sno_baremetalhost_example-node1.example.com.yaml ├── site-1-sno_clusterdeployment_example-sno.yaml ├── site-1-sno_configmap_example-sno.yaml ├── site-1-sno_infraenv_example-sno.yaml ├── site-1-sno_klusterletaddonconfig_example-sno.yaml ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml ├── site-1-sno_managedcluster_example-sno.yaml ├── site-1-sno_namespace_example-sno.yaml └── site-1-sno_nmstateconfig_example-node1.example.com.yaml", "mkdir -p ./site-machineconfig", "podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator install -E site-1-sno.yaml /output", "site-machineconfig └── site-1-sno ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml", "mkdir -p ./ref", "podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator config -N . 
/output", "ref └── customResource ├── common ├── example-multinode-site ├── example-sno ├── group-du-3node ├── group-du-3node-validator │ └── Multiple-validatorCRs ├── group-du-sno ├── group-du-sno-validator ├── group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 clusterRef: name: <cluster_name> namespace: <cluster_name> pullSecretRef: name: pull-secret", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.12.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64 2", "oc apply -f clusterImageSet-4.12.yaml", "apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2", "oc apply -f cluster-namespace.yaml", "oc apply -R ./site-install/site-sno-1", "oc get managedcluster", "oc get agent -n <cluster_name>", "oc describe agent -n <cluster_name>", "oc get agentclusterinstall -n <cluster_name>", "oc describe agentclusterinstall -n <cluster_name>", "oc get managedclusteraddon -n <cluster_name>", "oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig", "oc get managedcluster", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h", "oc get clusterdeployment -n <cluster_name>", "NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h", "oc describe agentclusterinstall -n <cluster_name> <cluster_name>", "oc delete managedcluster <cluster_name>", "oc delete namespace <cluster_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMSw1Mi01MyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root", "[crio.runtime.workloads.management] activation_annotation = \"target.workload.openshift.io/management\" annotation_prefix = \"resources.workload.openshift.io\" resources = { \"cpushares\" = 0, \"cpuset\" = \"0-1,52-53\" } 1", "{ \"management\": { \"cpuset\": \"0-1,52-53\" 1 } }", "oc debug node/example-sno-1", "sh-4.4# pgrep ovn | while read i; do taskset -cp USDi; done", "pid 8481's current affinity list: 0-1,52-53 pid 8726's 
current affinity list: 0-1,52-53 pid 9088's current affinity list: 0-1,52-53 pid 9945's current affinity list: 0-1,52-53 pid 10387's current affinity list: 0-1,52-53 pid 12123's current affinity list: 0-1,52-53 pid 13313's current affinity list: 0-1,52-53", "sh-4.4# pgrep systemd | while read i; do taskset -cp USDi; done", "pid 1's current affinity list: 0-1,52-53 pid 938's current affinity list: 0-1,52-53 pid 962's current affinity list: 0-1,52-53 pid 1197's current affinity list: 0-1,52-53", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} enabled: true name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service 
After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 04-accelerated-container-startup-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,#!/bin/bash
#
# Temporarily reset the core system processes' CPU affinity to be unrestricted to accelerate startup and shutdown
#
# The defaults below can be overridden via environment variables
#

# The default set of critical processes whose affinity should be temporarily unbound:
CRITICAL_PROCESSES=${CRITICAL_PROCESSES:-"crio kubelet NetworkManager conmon dbus"}

# Default wait time is 600s = 10m:
MAXIMUM_WAIT_TIME=${MAXIMUM_WAIT_TIME:-600}

# Default steady-state threshold = 2%
# Allowed values:
#  4  - absolute pod count (+/-)
#  4% - percent change (+/-)
#  -1 - disable the steady-state check
STEADY_STATE_THRESHOLD=${STEADY_STATE_THRESHOLD:-2%}

# Default steady-state window = 60s
# If the running pod count stays within the given threshold for this time
# period, return CPU utilization to normal before the maximum wait time
# expires
STEADY_STATE_WINDOW=${STEADY_STATE_WINDOW:-60}

# Default steady-state allows any pod count to be "steady state"
# Increasing this will skip any steady-state checks until the count rises above
# this number to avoid false positives if there are some periods where the
# count doesn't increase but we know we can't be at steady-state yet.
STEADY_STATE_MINIMUM=${STEADY_STATE_MINIMUM:-0}

#######################################################

KUBELET_CPU_STATE=/var/lib/kubelet/cpu_manager_state
FULL_CPU_STATE=/sys/fs/cgroup/cpuset/cpuset.cpus
KUBELET_CONF=/etc/kubernetes/kubelet.conf
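# unrestrictedCpuset: compute the widest usable cpuset, i.e. the kubelet's
# default (shared) cpuset merged with the reserved system CPUs when both are
# known, falling back to the full cgroup cpuset if the kubelet state has not
# been written yet.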
unrestrictedCpuset() {
  local cpus
  if [[ -e $KUBELET_CPU_STATE ]]; then
    cpus=$(jq -r '.defaultCpuSet' <$KUBELET_CPU_STATE)
    if [[ -n "${cpus}" && -e ${KUBELET_CONF} ]]; then
      reserved_cpus=$(jq -r '.reservedSystemCPUs' </etc/kubernetes/kubelet.conf)
      if [[ -n "${reserved_cpus}" ]]; then
        # Use taskset to merge the two cpusets
        cpus=$(taskset -c "${reserved_cpus},${cpus}" grep -i Cpus_allowed_list /proc/self/status | awk '{print $2}')
      fi
    fi
  fi
  if [[ -z $cpus ]]; then
    # fall back to using all cpus if the kubelet state is not configured yet
    [[ -e $FULL_CPU_STATE ]] || return 1
    cpus=$(<$FULL_CPU_STATE)
  fi
  echo $cpus
}

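# restrictedCpuset: return the normally-restricted cpuset, taken from the
# systemd.cpu_affinity= argument on the kernel command line (if any).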
restrictedCpuset() {
  for arg in $(</proc/cmdline); do
    if [[ $arg =~ ^systemd.cpu_affinity= ]]; then
      echo ${arg#*=}
      return 0
    fi
  done
  return 1
}

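# resetAffinity: re-pin every PID of the critical processes to the given
# cpuset with taskset, logging how many PIDs were updated or failed.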
resetAffinity() {
  local cpuset="$1"
  local failcount=0
  local successcount=0
  logger "Recovery: Setting CPU affinity for critical processes \"$CRITICAL_PROCESSES\" to $cpuset"
  for proc in $CRITICAL_PROCESSES; do
    local pids="$(pgrep $proc)"
    for pid in $pids; do
      local tasksetOutput
      tasksetOutput="$(taskset -apc "$cpuset" $pid 2>&1)"
      if [[ $? -ne 0 ]]; then
        echo "ERROR: $tasksetOutput"
        ((failcount++))
      else
        ((successcount++))
      fi
    done
  done

  logger "Recovery: Re-affined $successcount pids successfully"
  if [[ $failcount -gt 0 ]]; then
    logger "Recovery: Failed to re-affine $failcount processes"
    return 1
  fi
}

setUnrestricted() {
  logger "Recovery: Setting critical system processes to have unrestricted CPU access"
  resetAffinity "$(unrestrictedCpuset)"
}

setRestricted() {
  logger "Recovery: Resetting critical system processes back to normally restricted access"
  resetAffinity "$(restrictedCpuset)"
}

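# currentAffinity: print the cpuset that the given PID is currently pinned to.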
currentAffinity() {
  local pid="$1"
  taskset -pc $pid | awk -F': ' '{print $2}'
}

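# within: check whether the change from "last" to "current" stays inside the
# threshold, treated as a percentage when it ends in '%' and as an absolute
# count otherwise.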
within() {
  local last=$1 current=$2 threshold=$3
  local delta=0 pchange
  delta=$(( current - last ))
  if [[ $current -eq $last ]]; then
    pchange=0
  elif [[ $last -eq 0 ]]; then
    pchange=1000000
  else
    pchange=$(( ( $delta * 100) / last ))
  fi
  echo -n "last:$last current:$current delta:$delta pchange:${pchange}%: "
  local absolute limit
  case $threshold in
    *%)
      absolute=${pchange##-} # absolute value
      limit=${threshold%%%}
      ;;
    *)
      absolute=${delta##-} # absolute value
      limit=$threshold
      ;;
  esac
  if [[ $absolute -le $limit ]]; then
    echo "within (+/-)$threshold"
    return 0
  else
    echo "outside (+/-)$threshold"
    return 1
  fi
}

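# steadystate: skip the comparison until the pod count reaches the configured
# minimum, then defer to within() using the steady-state threshold.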
steadystate() {
  local last=$1 current=$2
  if [[ $last -lt $STEADY_STATE_MINIMUM ]]; then
    echo "last:$last current:$current Waiting to reach $STEADY_STATE_MINIMUM before checking for steady-state"
    return 1
  fi
  within $last $current $STEADY_STATE_THRESHOLD
}

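# waitForReady: poll every 10s, re-applying the unrestricted cpuset whenever it
# (or systemd's affinity) changes, and return once the container count has been
# steady for STEADY_STATE_WINDOW seconds or MAXIMUM_WAIT_TIME is reached.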
waitForReady() {
  logger "Recovery: Waiting ${MAXIMUM_WAIT_TIME}s for the initialization to complete"
  local lastSystemdCpuset="$(currentAffinity 1)"
  local lastDesiredCpuset="$(unrestrictedCpuset)"
  local t=0 s=10
  local lastCcount=0 ccount=0 steadyStateTime=0
  while [[ $t -lt $MAXIMUM_WAIT_TIME ]]; do
    sleep $s
    ((t += s))
    # Re-check the current affinity of systemd, in case some other process has changed it
    local systemdCpuset="$(currentAffinity 1)"
    # Re-check the unrestricted Cpuset, as the allowed set of unreserved cores may change as pods are assigned to cores
    local desiredCpuset="$(unrestrictedCpuset)"
    if [[ $systemdCpuset != $lastSystemdCpuset || $lastDesiredCpuset != $desiredCpuset ]]; then
      resetAffinity "$desiredCpuset"
      lastSystemdCpuset="$(currentAffinity 1)"
      lastDesiredCpuset="$desiredCpuset"
    fi

    # Detect steady-state pod count
    ccount=$(crictl ps | wc -l)
    if steadystate $lastCcount $ccount; then
      ((steadyStateTime += s))
      echo "Steady-state for ${steadyStateTime}s/${STEADY_STATE_WINDOW}s"
      if [[ $steadyStateTime -ge $STEADY_STATE_WINDOW ]]; then
        logger "Recovery: Steady-state (+/- $STEADY_STATE_THRESHOLD) for ${STEADY_STATE_WINDOW}s: Done"
        return 0
      fi
    else
      if [[ $steadyStateTime -gt 0 ]]; then
        echo "Resetting steady-state timer"
        steadyStateTime=0
      fi
    fi
    lastCcount=$ccount
  done
  logger "Recovery: Recovery Complete Timeout"
}

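# main: bail out if no unrestricted or restricted cpuset can be determined,
# otherwise widen affinity for the critical processes and rely on the EXIT trap
# to restore the restricted set once startup settles or the timeout expires.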
main() {
  if ! unrestrictedCpuset >&/dev/null; then
    logger "Recovery: No unrestricted Cpuset could be detected"
    return 1
  fi

  if ! restrictedCpuset >&/dev/null; then
    logger "Recovery: No restricted Cpuset has been configured.  We are already running unrestricted."
    return 0
  fi

  # Ensure we reset the CPU affinity when we exit this script for any reason
  # This way either after the timer expires or after the process is interrupted
  # via ^C or SIGTERM, we return things back to the way they should be.
  trap setRestricted EXIT

  logger "Recovery: Recovery Mode Starting"
  setUnrestricted
  waitForReady
}

if [[ "${BASH_SOURCE[0]}" = "${0}" ]]; then
  main "${@}"
  exit $?
fi
 mode: 493 path: /usr/local/bin/accelerated-container-startup.sh systemd: units: - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container startup [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: accelerated-container-startup.service - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container shutdown DefaultDependencies=no [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=-1 # Steady-state window = 60s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=60 [Install] WantedBy=shutdown.target reboot.target halt.target enabled: true name: accelerated-container-shutdown.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: 
data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "apiVersion: v1 kind: Namespace metadata: annotations: 
workload.openshift.io/allowed: management name: openshift-local-storage --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-logging --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" name: openshift-ptp --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" 1 name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual 2 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"stable\" installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging 1 metadata: name: instance namespace: openshift-logging spec: collection: logs: fluentd: {} type: fluentd curation: type: \"curator\" curator: schedule: \"30 3 * * *\" managementState: Managed --- apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder 2 metadata: name: instance namespace: openshift-logging spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test 3 pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile 1 spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" 2 cpu: isolated: 2-51,54-103 3 reserved: 0-1,52-53 4 hugepages: defaultHugepagesSize: 1G pages: - count: 32 5 size: 1G 6 node: 0 7 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: \"restricted\" 
realTimeKernel: enabled: true 8", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: tuned.openshift.io/v1 kind: Tuned 
metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-51,54-103 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/master: \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-mh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_mh vlan: 150 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 2 isRdma: false nicSelector: pfNames: - ens7f0 3 nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 4 priority: 10 resourceName: du_mh --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-fh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_fh vlan: 140 5 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-fh namespace: openshift-sriov-network-operator spec: deviceType: netdevice 6 isRdma: true nicSelector: pfNames: - ens5f0 7 nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 8 priority: 10 resourceName: du_fh", "apiVersion: operator.openshift.io/v1 kind: Console metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"false\" include.release.openshift.io/self-managed-high-availability: \"false\" include.release.openshift.io/single-node-developer: \"false\" release.openshift.io/create-only: \"true\" name: cluster spec: logLevel: Normal managementState: Removed operatorLogLevel: Normal", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false prometheusK8s: retention: 24h", "apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager data: pprof-config.yaml: | disabled: True", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: disableNetworkDiagnostics: true", "spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\"", "spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name # And the cmdline_crash CPU set must match the 'isolated' set in the associated PerformanceProfile data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-51,54-103 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable", "OCP_VERSION=USD(oc get clusterversion 
version -o jsonpath='{.status.desired.version}{\"\\n\"}')", "DTK_IMAGE=USD(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:USDOCP_VERSION-x86_64)", "podman run --rm USDDTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##'", "4.18.0-305.49.1.rt7.121.el8_4.x86_64", "oc debug node/<node_name>", "sh-4.4# uname -r", "4.18.0-305.49.1.rt7.121.el8_4.x86_64", "oc get operatorhub cluster -o yaml", "spec: disableAllDefaultSources: true", "oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.target\\.workload\\.openshift\\.io/management}{\"\\n\"}{end}'", "certified-operators -- {\"effect\": \"PreferredDuringScheduling\"} community-operators -- {\"effect\": \"PreferredDuringScheduling\"} ran-operators 1 redhat-marketplace -- {\"effect\": \"PreferredDuringScheduling\"} redhat-operators -- {\"effect\": \"PreferredDuringScheduling\"}", "oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.workload\\.openshift\\.io/allowed}{\"\\n\"}{end}'", "default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- management", "oc get -n openshift-logging ClusterLogForwarder instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: \"2022-07-19T21:51:41Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"1030342\" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open", "oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: \"2022-07-07T18:22:56Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"235796\" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed", "oc get consoles.operator.openshift.io cluster -o jsonpath=\"{ .spec.managementState }\"", "Removed", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# systemctl status chronyd", "● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5)", "PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name)", "oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'", "sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2", "oc -n openshift-ptp rsh -c 
linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP'", "sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00", "oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container", "phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533", "oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath=\"{.spec.disableDrain}{'\\n'}\"", "true", "oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath=\"{.items[*].status.syncStatus}{'\\n'}\"", "Succeeded", "oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml", "apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState status: interfaces: - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: \"8086\" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: \"8086\" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: \"8086\" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: \"8086\" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: \"8086\" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: \"8086\" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: \"8086\" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: \"8086\" vfID: 7", "oc get PerformanceProfile openshift-node-performance-profile -o yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: \"2022-07-19T21:51:31Z\" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: \"33558\" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Available - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Upgradeable - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Progressing - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile", "oc get performanceprofile openshift-node-performance-profile -o jsonpath=\"{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\\n'}{end}\"", "Available -- True 
Upgradeable -- True Progressing -- False Degraded -- False", "oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: \"2022-07-18T10:33:52Z\" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: \"34024\" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch", "oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}'", "true", "oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION", "Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\"", "oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath=\"{ .data.config\\.yaml }\"", "grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h", "oc get route -n openshift-monitoring alertmanager-main", "oc get route -n openshift-monitoring grafana", "oc get performanceprofile -o jsonpath=\"{ .items[0].spec.cpu.reserved }\"", "0-3", "siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml └── extra-manifest └── 01-example-machine-config.yaml", "clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" extraManifestPath: extra-manifest", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site1-sno-du\" namespace: \"site1-sno-du\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.12\" sshPublicKey: \"<ssh_public_key>\" clusters: - clusterName: \"site1-sno-du\" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml", "- clusterName: \"site1-sno-du\" extraManifests: filter: inclusionDefault: exclude", "clusters: - clusterName: \"site1-sno-du\" extraManifestPath: \"<custom_manifest_folder>\" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml", "siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 extract /home/ztp --tar | tar x -C ./out", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-node-performance-profile spec: 
cpu: # These must be tailored for the specific hardware platform isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true", "spec: bindingRules: group-du-standard: \"\" mcp: \"worker\"", "example └── policygentemplates ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs 1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-dev\" namespace: \"ztp-clusters\" spec: bindingRules: dev: \"true\" mcp: \"master\" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: \"group-dev-cluster-log-ns\" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: \"group-dev-cluster-log-operator-group\" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: \"group-dev-cluster-log-sub\" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: \"group-dev-lso-ns\" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: \"group-dev-lso-operator-group\" - fileName: StorageSubscription.yaml remediationAction: inform policyName: \"group-dev-lso-sub\" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: \"group-dev-pao-ns\" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: \"group-dev-pao-cat-source\" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: \"group-dev-pao-sub\" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: \"group-dev-elasticsearch-ns\" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: \"group-dev-elasticsearch-operator-group\" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: \"group-dev-apiserver-config\" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: \"group-dev-disable-nic-lldp\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgu-test.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed 
policies", "spec: evaluationInterval: compliant: 30m noncompliant: 20s", "spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: \"sriov-sub-policy\" evaluationInterval: compliant: never noncompliant: 10s", "oc get pods -n open-cluster-management-agent-addon", "NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d", "oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb", "2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-config-policy-config\"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-common-compute-1-catalog-policy-config\"}", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno-validator\" 1 namespace: \"ztp-group\" 2 spec: bindingRules: group-du-sno: \"\" 3 bindingExcludedRules: ztp-done: \"\" 4 mcp: \"master\" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: \"du-policy\" 7", "- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043", "- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs", "#AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: \"amqp://amq-router.amq-router.svc.cluster.local\"", "- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs", "- fileName: AmqInstance.yaml policyName: \"config-policy\"", "Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: HardwareEvent.yaml 1 policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043\" logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> 
--from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\" Bare Metal Event Rely operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: AmqInstance.yaml policyName: \"config-policy\"", "- fileName: HardwareEvent.yaml policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local\" 1 logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota", "butane storage.bu", "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}", "[...] 
spec: clusters: - nodes: - ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } [...]", "oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]", "\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"", "oc debug node/my-sno-node", "chroot /host", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers", "df -h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000", "sourceFiles: # storage class - fileName: StorageClass.yaml policyName: \"sc-for-image-registry\" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: \"100\" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: \"pvc-for-image-registry\" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: 
\"pv-for-image-registry\" metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" - fileName: ImageRegistryConfig.yaml policyName: \"config-for-image-registry\" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: storage: pvc: claim: \"image-registry-pvc\"", "cluster=<managed_cluster_name>", "oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster", "oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster", "oc get image.config.openshift.io cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2021-10-08T19:02:39Z\" generation: 5 name: cluster resourceVersion: \"688678648\" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice", "oc get pv image-registry-sc", "oc get pods -n openshift-image-registry | grep registry*", "cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d", "oc debug node/sno-1.example.com", "sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom", "argocd.argoproj.io/sync-options: Replace=true", "{{hub fromConfigMap \"default\" \"test-config\" \"common-key\" hub}}", "{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) hub}}", "{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) | toBool hub}}", "{{hub (printf \"%s-name\" .ManagedClusterName) | fromConfigMap \"default\" \"test-config\" | toInt hub}}", "apiVersion: v1 kind: ConfigMap metadata: name: sriovdata namespace: ztp-site annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: example-sno-du_fh-numVfs: \"8\" example-sno-du_fh-pf: ens1f0 example-sno-du_fh-priority: \"10\" example-sno-du_fh-vlan: \"140\" example-sno-du_mh-numVfs: \"8\" example-sno-du_mh-pf: ens3f0 example-sno-du_mh-priority: \"10\" example-sno-du_mh-vlan: \"150\"", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"site\" namespace: \"ztp-site\" spec: remediationAction: inform bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-fh\" spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-vlan\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-fh\" spec: deviceType: netdevice isRdma: true nicSelector: pfNames: - '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-pf\" .ManagedClusterName) | autoindent hub}}' numVfs: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-numVfs\" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-priority\" .ManagedClusterName) | toInt hub}}' resourceName: du_fh - 
fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-vlan\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-mh\" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-pf\" .ManagedClusterName) hub}}' numVfs: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-numVfs\" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-priority\" .ManagedClusterName) | toInt hub}}' resourceName: du_mh", "apiVersion: v1 kind: ConfigMap metadata: name: site-data namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: site-1-vlan: \"101\" site-2-vlan: \"234\"", "- fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data\" (printf \"%s-vlan\" .ManagedClusterName) | toInt hub}}'", "oc delete policy <policy_name> -n <policy_namespace>", "oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update=\"1\"", "oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgr-example.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: \"stable\" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f talm-subscription.yaml", "oc get csv -n openshift-operators", "NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.12.x Topology Aware Lifecycle Manager 4.12.x Succeeded", "oc get deploy -n openshift-operators", "NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s", "spec remediationStrategy: maxConcurrency: 1 timeout: 240", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: 1 - spoke1 enable: false 2 managedPolicies: 3 - talm-policy preCaching: false remediationStrategy: 4 canaries: 5 - spoke1 maxConcurrency: 2 6 timeout: 240 clusterLabelSelectors: 7 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 8 status: 9 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 10 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 11 - lastTransitionTime: 
'2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status:", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: \"False\" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: \"True\" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: 
NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: 
cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}", "oc apply -f <name>.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> --type merge -p '{\"spec\":{\"enable\":true}}'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - 
cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.4.12.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.12 desiredUpdate: version: 4.4.12.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.4.12.4 remediationAction: inform severity: low remediationAction: inform", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5", "oc create -f cgu-1.yaml", "oc get cgu --all-namespaces", "NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled", "oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq", "{ \"computedMaxConcurrency\": 2, \"conditions\": [ { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Not enabled\", 1 \"reason\": \"NotEnabled\", \"status\": \"False\", \"type\": \"Progressing\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", 
\"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": {} }", "oc get policies -A", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq", "{ \"computedMaxConcurrency\": 2, \"conditions\": [ 1 { \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"All selected clusters are valid\", \"reason\": \"ClusterSelectionCompleted\", \"status\": \"True\", \"type\": \"ClustersSelected\", \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"Completed validation\", \"reason\": \"ValidationCompleted\", \"status\": \"True\", \"type\": \"Validated\", \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": 
\"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": { \"currentBatch\": 1, \"currentBatchStartedAt\": \"2022-02-25T15:54:16Z\", \"remediationPlanForBatch\": { \"spoke1\": 0, \"spoke2\": 1 }, \"startedAt\": \"2022-02-25T15:54:16Z\" } }", "export KUBECONFIG=<cluster_kubeconfig_absolute_path>", "oc get subs -A | grep -i <subscription_name>", "NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.12.5 True True 43s Working towards 4.4.12.7: 71 of 735 done (9% complete)", "oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath=\"{.status}\"", "oc get installplan -n <subscription_namespace>", "NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1", "oc get csv -n <operator_namespace>", "NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded", "nodes: - hostName: \"node-1.example.com\" role: \"master\" rootDeviceHints: hctl: \"0:2:0:0\" deviceName: /dev/sda ..... ..... 
#Disk /dev/sda: 893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/sda partitions: - mount_point: /var/recovery size: 51200 start: 800000", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: true managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f clustergroupupgrades-group-du.yaml", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "{ \"backup\": { \"clusters\": [ \"cnfdb2\", \"cnfdb1\" ], \"status\": { \"cnfdb1\": \"Succeeded\", \"cnfdb2\": \"Failed\" 1 } }, \"computedMaxConcurrency\": 1, \"conditions\": [ { \"lastTransitionTime\": \"2022-04-05T10:37:19Z\", \"message\": \"Backup failed for 1 cluster\", 2 \"reason\": \"PartiallyDone\", 3 \"status\": \"True\", 4 \"type\": \"Succeeded\" } ], \"precaching\": { \"spec\": {} }, \"status\": {}", "oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno", "ostree admin status", "ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9", "ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca", "rpm-ostree rollback -r", "/var/recovery/upgrade-recovery.sh", "systemctl reboot", "/var/recovery/upgrade-recovery.sh --resume", "/var/recovery/upgrade-recovery.sh --restart", "oc get clusterversion,nodes,clusteroperator", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.4.12.23 True False 86d Cluster version is 4.4.12.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.4.12.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.4.12.23 True False False 86d ...........", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f clustergroupupgrades-group-du.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "{ \"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is required and not done\", \"reason\": \"InProgress\", \"status\": \"False\", \"type\": \"PrecachingSucceeded\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:34Z\", \"message\": \"Pre-caching spec is valid and consistent\", \"reason\": \"PrecacheSpecIsWellFormed\", \"status\": \"True\", \"type\": \"PrecacheSpecValid\" } ], \"precaching\": { \"clusters\": [ \"cnfdb1\" 1 \"cnfdb2\" ], \"spec\": { \"platformImage\": \"image.example.io\"}, \"status\": { 
\"cnfdb1\": \"Active\" \"cnfdb2\": \"Succeeded\"} } }", "oc get jobs,pods -n openshift-talo-pre-cache", "NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "\"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:30:41Z\", \"message\": \"The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies\", \"reason\": \"UpgradeCompleted\", \"status\": \"True\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:28:57Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingSucceeded\" 1 }", "oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>", "oc apply -f <ClusterGroupUpgradeCR_YAML>", "oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'", "[\"group-du-sno-validator-du-validator-policy\", \"policy2-common-nto-sub-policy\", \"policy3-common-ptp-sub-policy\"]", "oc get policies --all-namespaces", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h", "oc get policies --all-namespaces", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h", "oc get managedclusters", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h", "oc get pod -n openshift-operators", "NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m", "oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager", "ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem", "oc get managedclusters", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2", "oc get managedcluster --selector=upgrade=true 1", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h", "spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true", "oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'", "[\"spoke1\", \"spoke3\"]", "oc get managedcluster --selector=upgrade=true", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE 
spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h", "oc get jobs,pods -n openshift-talo-pre-cache", "oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'", "{\"maxConcurrency\":2, \"timeout\":240}", "oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'", "2", "oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'", "{\"lastTransitionTime\":\"2022-02-17T22:25:28Z\", \"message\":\"Missing managed policies:[policyList]\", \"reason\":\"NotAllManagedPoliciesExist\", \"status\":\"False\", \"type\":\"Validated\"}", "oc get cgu lab-upgrade -oyaml", "status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default", "oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'", "[[\"spoke2\", \"spoke3\"]]", "oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager", "ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem", "imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "OCP_RELEASE_NUMBER=<release_version>", "ARCHITECTURE=<server_architecture>", "DIGEST=\"USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')\"", "DIGEST_ALGO=\"USD{DIGEST%%:*}\"", "DIGEST_ENCODED=\"USD{DIGEST#*:}\"", "SIGNATURE_BASE64=USD(curl -s \"https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1\" | base64 -w0 && echo)", "cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF", "curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.12 -o ~/upgrade-graph_stable-4.12", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: \"platform-upgrade-prep\" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: \"platform-upgrade-prep\" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: \"platform-upgrade\" metadata: name: version spec: channel: \"stable-4.12\" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.12 desiredUpdate: version: 4.12.4 status: history: - version: 4.12.4 state: \"Completed\"", "oc get policies -A | grep platform-upgrade", "apiVersion: ran.openshift.io/v1alpha1 kind: 
ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.12 1 updateStrategy: 2 registryPoll: interval: 1h", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: ... - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"fec-catsrc-policy\" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: \"subscriptions-fec-policy\" spec: channel: \"stable\" source: certified-operators", "oc get policies -A | grep -E \"catsrc-policy|subscription\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1", "oc apply -f cgu-operator-upgrade-prep.yml", "oc get policies -A | grep -E \"catsrc-policy\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-operator-upgrade.yml", "oc get policy common-subscriptions-policy -n <policy_namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'", "oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq", "[ { \"lastTransitionTime\": \"2022-03-08T20:49:08.000Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-03-08T20:55:30.000Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" } ]", "oc --namespace=default patch 
clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v{product-version} updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olredhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namspace spec: source: redhat-operators-disconnected-v2 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true", "oc apply -f cgu-platform-operator-upgrade-prep.yml", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-operator-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get jobs,pods -n openshift-talm-pre-cache", "oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- fileName: PaoSubscriptionNS.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave", "oc get policy -n ztp-common common-subscriptions-policy", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: \"46666836\" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: \"\" 1 deleteClusterLabels: ztp-running: \"\" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: \"\" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240", "mkdir -p 
./update", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./update", "oc get managedcluster -l 'local-cluster!=true'", "oc label managedcluster -l 'local-cluster!=true' ztp-done=", "oc delete -f update/argocd/deployment/clusters-app.yaml", "oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge", "oc delete -f update/argocd/deployment/policies-app.yaml", "├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file update/argocd/deployment/argocd-openshift-gitops-patch.json", "oc apply -k update/argocd/deployment", "oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq", "{\"daemonNodeSelector\":{\"node-role.kubernetes.io/master\":\"\"}} 1", "oc get sriovoperatorconfig/default -n openshift-sriov-network-operator -ojsonpath='{.spec}' | jq", "{\"configDaemonNodeSelector\":{\"node-role.kubernetes.io/worker\":\"\"},\"disableDrain\":false,\"enableInjector\":true,\"enableOperatorWebhook\":true} 1", "spec: - fileName: PtpOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" - fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-sno-workers\" namespace: \"example-sno\" spec: bindingRules: sites: \"example-sno\" 1 mcp: \"worker\" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: \"config-policy\" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: \"4-47\" reserved: \"0-3\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker 
data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker", "cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF", "nodes: - hostName: \"example-node2.example.com\" role: \"worker\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node2-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up macAddress: \"AA:BB:CC:DD:EE:11\" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "apiVersion: v1 data: password: \"password\" username: \"username\" kind: Secret metadata: name: \"example-node2-bmh-secret\" namespace: example-sno type: Opaque", "oc get ppimg -n example-sno", "NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated", "oc get bmh -n example-sno", "NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1", "oc get agent -n example-sno --watch", "NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 
14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done", "oc get managedclusterinfo/example-sno -n example-sno -o jsonpath='{range .status.nodeList[*]}{.name}{\"\\t\"}{.conditions}{\"\\t\"}{.labels}{\"\\n\"}{end}'", "example-sno [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/master\":\"\",\"node-role.kubernetes.io/worker\":\"\"} example-node2 [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/worker\":\"\"}", "podman pull quay.io/openshift-kni/telco-ran-tools:latest", "podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v", "factory-precaching-cli version 20221018.120852+main.feecf17", "curl --globoff -H \"Content-Type: application/json\" -H \"Accept: application/json\" -k -X GET --user USD{username_password} https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool", "curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Image\": \"http://[USDHTTPd_IP]/RHCOS-live.iso\"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia", "curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Boot\":{ \"BootSourceOverrideEnabled\": \"Once\", \"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk", "wipefs -a /dev/nvme0n1", "/dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa", "podman run -v /dev:/dev --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli partition \\ 1 -d /dev/nvme0n1 \\ 2 -s 250 3", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part", "gdisk -l /dev/nvme0n1", "GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. 
Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data", "lsblk -f /dev/nvme0n1", "NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071", "mount /dev/nvme0n1p1 /mnt/", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1", "taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help", "oc get csv -A | grep -i advanced-cluster-management", "open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded", "oc get csv -A | grep -i multicluster-engine", "multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded", "mkdir /root/.docker", "cp config.json /root/.docker/config.json 1", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- factory-precaching-cli download \\ 1 -r 4.12.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6", "Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f Summary: Release: 4.12.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83", "ls -l /mnt 1", "-rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 
1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 
1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.12.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s 7", "Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 Summary: Release: 4.12.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.12.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --generate-imageset 8", "Generated /mnt/imageset.yaml", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.0 1 maxVersion: 4.12.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.12' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.12 packages: - name: sriov-fec 11 channels: - name: 'stable'", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] 
operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.12 packages: - name: sriov-fec channels: - name: 'stable'", "cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.", "update-ca-trust", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.12.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --skip-imageset 8", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.12.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt --img quay.io/custom/repository --du-profile -s --skip-imageset", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-5g-lab\" namespace: \"example-5g-lab\" spec: baseDomain: \"example.domain.redhat.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"img4.9.10-x86-64-appsub\" sshPublicKey: \"ssh-rsa ...\" clusters: - clusterName: \"sno-worker-0\" clusterLabels: group-du-sno: \"\" common-411: true sites : \"example-5g-lab\" vendor: \"OpenShift\" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: \"OVNKubernetes\" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ \"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-images.service\\nBindsTo=precache-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-images.service\" }, { \"name\": \"precache-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached images in discovery stage\\nAfter=var-mnt.mount\\nBefore=agent.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ai.sh\\n#TimeoutStopSec=30\\n\\n[Install]\\nWantedBy=multi-user.target default.target\\nWantedBy=agent.service\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ai.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": 
\"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200\" } }, { \"overwrite\": true, \"path\": \"/usr/local/bin/agent-fix-bz1964591\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true\" } } ] } }' nodes: - hostName: \"snonode.sno-worker-0.example.domain.redhat.com\" role: \"master\" bmcAddress: \"idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"worker0-bmh-secret\" bootMACAddress: \"e4:43:4b:bd:90:46\" bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '[\"--save-partlabel\", \"data\"]' ignitionConfigOverride: | { 
\"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-ocp-images.service\\nBindsTo=precache-ocp-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-ocp-images.service\" }, { \"name\": \"precache-ocp-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached OCP images into containers storage\\nAfter=var-mnt.mount\\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ocp.sh\\nTimeoutStopSec=60\\n\\n[Install]\\nWantedBy=multi-user.target\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ocp.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200\" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: \"AA:BB:CC:11:22:33\" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"ens1f0\" macAddress: \"AA:BB:CC:11:22:33\"", "OPTIONS: -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL --save-partlabel <lx> Save partitions with this label glob --save-partindex <id> Save partitions with this number or range --insecure-ignition Allow Ignition URL without HTTPS or hash", "Generating list of pre-cached artifacts error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror 
--ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying next host error=failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run \"oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME\" for more information. error: error rendering new refs: render reference \"eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11\": error resolving name : failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority", "cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.", "update-ca-trust", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.11.5 --acm-version 2.5.4 --mce-version 2.0.4 -f /mnt \\--img quay.io/custom/repository --du-profile -s --skip-imageset" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/clusters-at-the-network-far-edge
Chapter 7. Adding Storage for Red Hat Virtualization
Chapter 7. Adding Storage for Red Hat Virtualization Add storage as data domains in the new environment. A Red Hat Virtualization environment must have at least one data domain, but adding more is recommended. Add the storage you prepared earlier: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Important If you are using iSCSI storage, new data domains must not use the same iSCSI target as the self-hosted engine storage domain. Warning Creating additional data domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine. 7.1. Adding NFS Storage This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain. If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list. Procedure In the Administration Portal, click Storage Domains . Click New Domain . Enter a Name for the storage domain. Accept the default values for the Data Center , Domain Function , Storage Type , Format , and Host lists. Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data . Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center. 7.2. Adding iSCSI Storage This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the new storage domain. Select a Data Center from the drop-down list. Select Data as the Domain Function and iSCSI as the Storage Type . Select an active host as the Host . Important Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured. The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the step. Click Discover Targets to enable target discovery options. 
When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note LUNs used externally to the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs. Enter the FQDN or IP address of the iSCSI host in the Address field. Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260 . If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password . Note You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information. Click Discover . Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets. Important If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported. Click the + button to the desired target. This expands the entry and displays all unused LUNs attached to the target. Select the check box for each LUN that you are using to create the storage domain. Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding. If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond . 7.3. Adding FCP Storage This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the storage domain. Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none) . Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available. Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. 
All hosts must have access to the storage device before the storage domain can be configured. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . The new FCP data domain remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center. 7.4. Adding Red Hat Gluster Storage To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261 .
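The commands below are not part of the procedures above; they are a hedged, illustrative sketch of one way to confirm from a host that an NFS export is reachable and writable before adding it as a data domain. The export path domain.example.com:/data and the mount point are placeholder values, and the sketch assumes the export was prepared for the vdsm user (UID 36) and kvm group (GID 36) that Red Hat Virtualization hosts use to access storage.
# Run on a Red Hat Virtualization host (example values only)
showmount -e domain.example.com                   # confirm that the /data export is visible
mkdir -p /tmp/nfs-check
mount -t nfs domain.example.com:/data /tmp/nfs-check
sudo -u vdsm touch /tmp/nfs-check/rhv-write-test  # verify that the vdsm user (UID 36) can write
sudo -u vdsm rm /tmp/nfs-check/rhv-write-test
umount /tmp/nfs-check
If the write test fails, recheck the ownership and export options on the NFS server before adding the storage domain in the Administration Portal.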
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/Adding_Storage_Domains_to_RHV_SHE_cli_deploy
function::ns_pid
function::ns_pid Name function::ns_pid - Returns the ID of a target process as seen in a pid namespace Synopsis Arguments None Description This function returns the ID of a target process as seen in the target pid namespace.
[ "ns_pid:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ns-pid
Chapter 12. Managing organizations
Chapter 12. Managing organizations When integrating with a third party like a customer or business partner, you might want to manage their identities separately from others and build a unified and secure experience throughout your business ecosystem when they interact with a realm. In a realm, an organization represents these third parties so that a realm or an organization administrator can manage the entire lifecycle of its members and how they authenticate and authorize to a realm, on a per-organization basis. The organization is the entry point to start using the IAM capabilities of Red Hat build of Keycloak to also address Business-to-Business (B2B) use cases. It enables multi-tenancy within a realm so that users can have access to protected resources from a realm but with a more restricted and controlled context, that context being the organization to which they belong. Red Hat build of Keycloak Organizations is a feature that enables support for organizations in Red Hat build of Keycloak. For now, it provides some of the core capabilities needed to manage organizations such as: Manage members Onboard organization members using invitation links Onboard organization members by federating their identities through identity brokering Identity-first login and organization-specific steps when authenticating in the scope of an organization Propagate organization-specific claims to applications through tokens for authorization purposes 12.1. Enabling organizations in Red Hat build of Keycloak To use organizations, you have to enable the feature for the current realm. Procedure Click Realm Settings in the menu. Toggle Organizations to On . Click Save Enabling Organizations Once the feature is enabled, you are able to manage organizations through the Organizations section available from the menu. 12.2. Managing an organization From the Organizations section, you can manage all the organizations in your realm. Managing organizations 12.2.1. Creating an organization Procedure Click Create Organization . Creating organization An organization has the following settings: Name A user-friendly name for the organization. The name is unique within a realm. Alias An alias for this organization, used to reference the organization internally. The alias is unique within a realm and must be URL-friendly, so characters not usually allowed in URLs will not be allowed in the alias. If not set, Red Hat build of Keycloak will attempt to use the name as the alias. If the name is not URL-friendly, you will get an error and will be asked to specify an alias. Once defined, the alias cannot be changed afterwards. Redirect URL After completing registration or accepting an invitation to the organization sent via email, the user is automatically redirected to the specified redirect url. If left empty, the user will be redirected to the account console by default. Domains A set of one or more domains that belongs to this organization. A domain cannot be shared by different organizations within a realm. Description A free-text field to describe the organization. Once you create an organization, you can manage the additional settings that are described in the following sections: Manage attributes Manage members Manage identity providers 12.2.2. Understanding organization domains When managing an organization, the domain associated with an organization plays an important role in how organization members authenticate to a realm and how their profiles are validated. 
One of the key roles of a domain is to help to identify the organizations where a user is a member. By looking at their email address, Red Hat build of Keycloak will match a corresponding organization using the same domain and eventually change the authentication flow based on the organization requirements. The domain also allows organizations to enforce that users are not allowed to use a domain in their emails other than those associated with an organization. This restriction is especially useful when users, and their identities, are federated from identity providers associated with an organization and you want to force a specific email domain for their email addresses. 12.2.3. Disabling an organization To disable an organization, toggle Enabled to Off . Disabling organization When an organization is disabled, you can still manage it through the management interfaces, but the organization members cannot authenticate to the realm, including authenticating through the identity providers associated with the organization as they are also automatically disabled. However, the unmanaged members of an organization are still able to authenticate to the realm as they are also realm users, but tokens will not hold metadata about their relationship with an organization that is disabled. For more details about managed and unmanaged users, see Managed and unmanaged members section. 12.2.4. Deleting an organization To delete an organization, click the Delete action for the corresponding organization in the listing page or when editing an organization. Deleting organization When removing an organization, all data associated with it will be deleted, including any managed member. Unmanaged users and identity providers remain in the realm, but they are no longer linked to the organization. For more details about managed and unmanaged users, see Managed and unmanaged members . 12.3. Managing attributes An administrator can store additional metadata about an organization using attributes. An organization attribute is a key/value pair that can hold multiple string values. For that, click the Attributes tab and set any attribute you want by providing a key and a value. To provide multiple values for the same attribute, and key, just add another attribute with the same key but with a different value. Managing organization attributes 12.4. Managing members An organization member is basically a realm user but with a link to a single organization. They are logically separated from other users in a realm so that you know exactly which users belong to an organization. There are different ways to add, or onboard, a member to an organization: Adding an existing realm user as a member Through an identity provider associated with an organization Sending an invitation to create a new account Sending an invitation to an existing user to join an organization Once a member of an organization, that user's account can be managed just like any regular account in a realm by accessing the Users section in the menu. However, you can narrow the users to only those associated with an organization by accessing the Members tab when managing an organization. In this tab, you have a list of all the organization members and actions to add new members and to edit and remove existing ones. Managing organization members 12.4.1. Managed and unmanaged members When managing members, consider how their relationship with an organization affects the lifecycle of their accounts. 
Members can join an organization through different flows and each flow indicates the strength of the link between their accounts and the organization. There are two types of members: Managed Unmanaged Managed members are those managed by the organization, and they cannot exist outside of their organization. For instance, consider an account created through an identity provider associated with an organization. That account does not belong to a realm as it was federated from the organization. In this case, the single source of truth for the identity is the organization and its lifecycle is controlled by the organization. If you remove the organization or the member, the account is also removed from the realm. On the other hand, unmanaged members are those that can exist without the organization. For instance, when adding an existing realm user to an organization, the account belongs to the realm first and foremost and eventually linked to an organization. In this case, removing an organization or a member will not remove the account from the realm; the realm is the single-source of truth for the identity. 12.4.2. Adding an existing realm user as a member An existing realm user can join an organization by selecting that user from a list and adding the user to the organization. Procedure Click Add member . Click Add realm user . Select one or more users and click Add to add them to the organization. Adding a realm user Once a user is a member of the organization, that user is able to authenticate to the realm just like a regular user and using any credential supported by the realm. 12.4.3. Inviting users An administrator can email users to join an organization. Procedure Click Add member . Click Invite member . Provide an email address Click Send Inviting members Optionally, you can also provide a value for the First name and Last name fields for a more personalized email message using a greeting message with the first and last names of the person receiving the email. An invitation is basically an email sent with a link that the person should click to perform the necessary steps to join an organization. These steps depend on whether the person already has an account in the realm or if a new account should be created before joining the organization. If the email maps to an existing user in a realm, the steps the user will follow are basically about confirming the intention to join the organization. On the other hand, if no user is associated with the given email address, the steps will involve creating a new account through the realm's self-registration flow. In this case, the user is forced to provide the same email address used to send the invitation. 12.4.4. Onboarding members using an Identity Provider An organization might have its own identity provider as the single source of truth for their identities. In this case, users federated from the identity provider are automatically added as a member of the organization. When users join an organization through an identity provider associated with an organization, they are automatically marked as managed members. In this case, they will go through the broker login flows configured in the realm and join the organization automatically once they successfully authenticate. Onboarding new members through an identity provider can be done by either automatically redirecting the user to an organization's identity provider or by selecting the identity provider when at the login page. 
In both cases, once the user provides the email, Red Hat build of Keycloak will try to match an organization based on the email domain. If the email domain matches the organization, an identity provider is associated with the same domain, and the Redirect when email domain matches setting is enabled, the user is automatically redirected to the identity provider. Once the user authenticates at the identity provider and completes the first broker login flow, the user is automatically added as an organization member. On the other hand, if Redirect when email domain matches is not enabled, but the identity provider is configured not to Hide on login page , the user can select the identity provider and then be redirected to the identity provider to continue the onboarding process. For more details, see Managing Identity Providers . 12.4.5. Removing a member You can remove a member from an organization. From the action menu next to the member you want to remove, click Remove . When removing a member from an organization, remember that the user may or may not be removed from the realm, depending on whether that user is a managed or an unmanaged member, respectively. For more details, see Managed and unmanaged members . 12.5. Managing identity providers An organization might have its own identity provider as the single source of truth for their identities. In this case, you want to configure the organization to authenticate users using the organization's identity provider, federate their identities, and finally add them as a member of the organization. An organization can have one or more identity providers associated with it so that they can authenticate their users from different sources and enforce different constraints on each of them. Before you can link an identity provider to an organization, you first create the identity provider at the realm level from the Identity Providers section in the menu. You can link any of the built-in social and identity providers available in the realm to an organization. 12.5.1. Linking an identity provider to an organization An identity provider can be linked to an organization from the Identity providers tab. If identity providers already exist, you see a list of them and options to search, edit, or unlink from the organization. Organization identity providers Procedure Click Link identity provider Select an Identity provider Set the appropriate settings Click Save Linking identity provider An identity provider has the following settings: Identity provider The identity provider you want to link to the organization. An identity provider can only be linked to a single organization. Domain The domain from the organization that you want to link with the identity provider. Hide on login page If this identity provider should be hidden in login pages when the user is authenticating in the scope of the organization. Redirect when email domain matches If members should be automatically redirected to the identity provider when their email domain matches the domain set to the identity provider. Once linked to an organization, the identity provider can be managed just like any other in a realm by accessing the Identity Providers section in the menu. However, the options described here are only available when managing the identity provider in the scope of an organization. The only exception is the Hide on login page option that is present here for convenience. 12.5.2.
Editing a linked identity provider You can edit any of the organization-related settings of a linked identity provider at any time. Procedure In the menu, click Organizations and go to the Identity providers tab. Locate the identity provider in the list. You can use the search option for this step. Click the action button (three dots) at the end of the line. Click Edit . Make the necessary changes. Click Save . Editing linked identity provider 12.5.3. Unlinking an identity provider from an organization When an identity provider is unlinked from an organization, it remains available as a realm-level provider that is no longer associated with an organization. To delete the unlinked provider, use the Identity Providers section in the menu. Procedure In the menu, click Organizations and go to the Identity providers tab. Locate the identity provider in the list. You can use the search capabilities for this step. Click the action button (three dots) at the end of the line. Click Unlink provider . Unlinking identity provider 12.6. Authenticating members When you enable organizations for a realm, user authentication is changed. If the user is recognized to be authenticating in the context of an organization, the authentication flow changes on a per-organization basis. When a realm is created, the authentication flows are automatically updated to enable specific steps to authenticate and onboard organization members. The authentication flows updated are: browser first broker login The main change to the browser flow is that it defaults to an identity-first login so that users are identified before prompting for their credentials. Concerning the first broker login flow, the main change is automatically adding the users as organization members once they authenticate through the identity provider associated with an organization and successfully complete the flow. The choice to use an identity-first login or not depends on the existence of an organization in a realm. If no organizations exist, the user follows the usual steps to authenticate using the username and password, or any other step configured in the browser flow. Otherwise, the user is asked first for a username or email to continue authenticating to a realm. The identity-first login's main goal is to identify the user: Is the user an existing or a new user? Is the user a member of any organization within a realm? If an organization member, is the user linked to any identity provider associated with the organization? Depending on the outcome when identifying the user, the authentication flow changes to either proceed with authentication by asking for the user's credentials or eventually redirect the user automatically to authenticate within the organization security boundaries through an identity provider. 12.6.1. Understanding the identity-first login In addition to identifying the user once the username is provided, the identity-first login is also responsible for: Matching an email domain to an organization.
Deciding if the authentication flow should continue or not if an account already exists for the username provided Deciding how the user should be authenticated depending on how the domains and the identity providers are configured to an organization Seamlessly authenticating users through an identity provider associated with an organization if the email domain matches the domain set to the identity provider The identity-first login provides the same capabilities that are provided by the usual login page with the username and password fields. Users can still self-register by clicking the register link or choose any identity or social broker that is not linked to an organization in that realm. Identity-first login page In the case of a user that does not exist, if that user tries to authenticate using an email domain that matches an organization domain, the identity-first login page appears again with a message that the username provided is not valid. At this point, no need exists to ask the user for credentials. Identity-first when user does not exist Several options exist to register the user allowing that user to authenticate to the realm and join an organization. If the realm has the self-registration setting enabled, the user can click the Register link at the identity-first login page and create an account at the realm. After that, the administrator can send an invitation link to the user or manually add the user as a member of an organization. For more details, see Managing members . If the organization has an identity provider without a domain and the Hide on login page setting is OFF , users can also click the identity provider link at the identity-first login page to automatically create an account and join an organization once they authenticate through the identity provider. For more details, see Managing identity providers . In a similar situation to the section, the organization may have an identity provider set with one of the organization domains. In this situation, the user is redirected to the identity provider if that user's email matches a specific domain from the organization. Once the flow completes, an account is created and the user joins the organization. 12.6.2. Configuring existing authentication flows As previously mentioned for new realms, authentication flows are automatically updated with the necessary steps to support organizations and authenticate their members. For existing realms, in addition to enabling organizations to the realm, you also need to manually update your existing (custom) authenticating flows. Change the browser flow by following these steps: Procedure Duplicate the current flow bound to the Browser flow binding type to avoid breaking the flow you are currently using Click Add sub-flow and give it a name such as My Organization Move the newly added My Organization sub-flow to execute right after the Identity Provider Redirector execution step. The main point here is that the sub-flow should happen before any other sub-flow or execution step that authenticates the user using whatever credentials you support in your realm. Once added, change the Requirement to Alternative . Click Add sub-flow in the My Organization sub-flow and give it a name such as My Organization - Conditional . Once added, change the Requirement to Conditional . Click Add condition in the My Organization - Conditional sub-flow and select Condition - user configured . Once added, change the Requirement to Required . 
Click Add step in the My Organization - Conditional sub-flow and select the Organization Identity-First Login execution step. Once added, change the Requirement to Alternative . Bind the authentication flow to the Browser binding type. Organizations browser flow Once you enable the Organizations setting for the realm and create at least a single organization, you should be able to see the identity-first login page and start using organizations in your realm. Change the first broker login flow by following these steps: Procedure Duplicate the current flow bound to the First broker login binding type to avoid breaking the flow you are currently using Click Add sub-flow and give it a name such as Organization Member - Conditional . Once added, change the Requirement to Conditional . Click Add condition in the Organization Member - Conditional sub-flow and select Condition - user configured . Once added, change the Requirement to Required . Click Add step in the Organization Member - Conditional sub-flow and select the Organization Member Onboard execution step. Once added, change the Requirement to Required . Bind the authentication flow to the First broker login binding type. Organizations first broker flow You should now be able to authenticate using any identity provider associated with an organization and have the user join the organization as a member as soon as they complete the first broker login flow. 12.6.3. Configuring how users authenticate If the flow supports organizations, you can configure some of the steps to change how users authenticate to the realm. For example, some use cases will require users to authenticate to a realm only if they are a member of any or a specific organization in the realm. To enable this behavior, you need to enable the Requires user membership setting on the Organization Identity-First Login execution step by clicking on its settings. If enabled, and after the user provides the username or email in the identity-first login page, the server will try to resolve an organization where the user is a member by looking at any existing membership or based on the semantics of the organization scope, if requested by the client. If not a member of an organization, an error page will be shown. 12.7. Mapping organization claims To map organization-specific claims into tokens, a client needs to request the organization scope when sending authorization requests to the server. When authenticating in the context of an organization, clients can request the organization scope to map information about the organizations where the user is a member. As a result, the token will contain a claim as follows: "organization": { "testcorp": { "id": "42c3e46f-2477-44d7-a85b-d3b43f6b31fa", "attr1": [ "value1" ] } } The organization claim can be used by clients (for example, from ID Tokens) and resource servers (for example, from access tokens) to authorize access to protected resources based on the organization where the user is a member. The organization scope is a built-in optional client scope at the realm. Therefore, this scope is added to any client created in the realm by default. It also defines the Organization Membership mapper that controls how the organization membership information is mapped to the tokens. Note By default, the organization id and attributes are not included in the organization claim. To include them, edit the mapper and enable the Add organization id and Add organization attributes options, respectively.
Including attributes in the organization claim The organization scope is requested using different formats: Format Description organization Maps to a single organization if the user is a member of a single organization. Otherwise, if a member of multiple organizations, the user will be prompted to select an organization when authenticating to the realm. organization:<alias> Maps to a single organization with the given alias. organization:* Maps to all organizations the user is a member of.
[ "\"organization\": { \"testcorp\": { \"id\": \"42c3e46f-2477-44d7-a85b-d3b43f6b31fa\", \"attr1\": [ \"value1\" ] } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/managing_organizations
8.2. Using Hardware Security Modules with Subsystems
8.2. Using Hardware Security Modules with Subsystems The Certificate System supports the nCipher nShield Connect XC hardware security module (HSM) and Thales Luna HSM by default. Certificate System-supported HSMs are automatically added to the pkcs11.txt database with the modutil command during the pre-configuration stage of the installation, if the PKCS #11 library modules are in the specified installation paths. Important Certain deployments require their HSM to be set up in FIPS mode. 8.2.1. Enabling the FIPS Mode on an HSM To enable FIPS Mode on HSMs, please refer to your HSM vendor's documentation for specific instructions. Important nCipher HSM On an nCipher HSM, the FIPS mode can only be enabled when generating the Security World; this cannot be changed afterwards. While there is a variety of ways to generate the Security World, the preferred method is always to use the new-world command. For guidance on how to generate a FIPS-compliant Security World, please follow the nCipher HSM vendor's documentation. LunaSA HSM Similarly, enabling the FIPS mode on a Luna HSM must be done during the initial configuration, since changing this policy zeroizes the HSM as a security measure. For details, please refer to the Luna HSM vendor's documentation. 8.2.2. Verifying if FIPS Mode is Enabled on an HSM This section describes how to verify if FIPS mode is enabled for certain HSMs. For other HSMs, see the hardware manufacturer's documentation. 8.2.2.1. Verifying if FIPS Mode is Enabled on an nCipher HSM Note Please refer to your HSM vendor's documentation for the complete procedure. To verify if the FIPS mode is enabled on an nCipher HSM, enter: With older versions of the software, if StrictFIPS140 is listed in the state flag, the FIPS mode is enabled. In newer versions, it is however better to check the new mode line and look for fips1402level3 . In all cases, there should also be an hkfips key present in the nfkminfo output. 8.2.2.2. Verifying if FIPS Mode is Enabled on a Luna SA HSM Note Please refer to your HSM vendor's documentation for the complete procedure. To verify if the FIPS mode is enabled on a Luna SA HSM: Open the lunash management console Use the hsm show command and verify that the output contains the text The HSM is in FIPS 140-2 approved operation mode. : 8.2.3. Adding or Managing the HSM Entry for a Subsystem During installation, when an HSM is selected as the default token and appropriate HSM-specific parameters are set in the configuration file passed to the pkispawn command, the following parameter is added to the /var/lib/pki/ instance_name /conf/password.conf file for the HSM password: 8.2.4. Setting up SELinux for an HSM If you want to install Certificate System with a hardware security module (HSM) and SELinux is running in enforcing mode, certain HSMs require that you manually update SELinux settings before you can install Certificate System. The following section describes the required actions for supported HSMs: nCipher nShield Connect XC After you installed the HSM and before you start installing Certificate System: Reset the context of files in the /opt/nfast/ directory: Restart the nfast software. Thales Luna HSM No SELinux-related actions are required before you start installing Certificate System. 8.2.5. Installing a Subsystem Using nCipher nShield Connect XC HSM To install a subsystem instance that uses nCipher nShield Connect XC HSM, follow this procedure: Prepare an override file, which corresponds to your particular deployment.
The following default_hsm.txt file is an example of such an override file: Note This file serves only as an example of an nCipher HSM override configuration file -- numerous other values can be overridden including default hashing algorithms. Also, only one of the [CA], [KRA], [OCSP], [TKS], or [TPS] sections will be utilized depending upon the subsystem invocation specified on the pkispawn command-line. Example 8.1. An Override File Sample to Use with nCipher HSM ############################################################################### ############################################################################### ############################################################################### ## ## ## EXAMPLE: Configuration File used to override '/etc/pki/default.cfg' ## ## when using an nCipher Hardware Security Module (HSM): ## ## ## ## ## ## # modutil -dbdir . -list ## ## ## ## Listing of PKCS #11 Modules ## ## ----------------------------------------------------------- ## ## 1. NSS Internal PKCS #11 Module ## ## slots: 2 slots attached ## ## status: loaded ## ## ## ## slot: NSS Internal Cryptographic Services ## ## token: NSS Generic Crypto Services ## ## ## ## slot: NSS User Private Key and Certificate Services ## ## token: NSS Certificate DB ## ## ## ## 2. nfast ## ## library name: /opt/nfast/toolkits/pkcs11/libcknfast.so ## ## slots: 2 slots attached ## ## status: loaded ## ## ## ## slot: <serial_number> Rt1 ## ## token: accelerator ## ## ## ## slot: <serial_number> Rt1 slot 0 ## ## token: <HSM_token_name> ## ## ----------------------------------------------------------- ## ## ## ## ## ## Based on the example above, substitute all password values, ## ## as well as the following values: ## ## ## ## <hsm_libfile>=/opt/nfast/toolkits/pkcs11/libcknfast.so ## ## <hsm_modulename>=nfast ## ## <hsm_token_name>=NHSM-CONN-XC ## ## ## ############################################################################### ############################################################################### ############################################################################### [DEFAULT] ########################## # Provide HSM parameters # ########################## pki_hsm_enable=True pki_hsm_libfile=<hsm_libfile> pki_hsm_modulename=<hsm_modulename> pki_token_name=<hsm_token_name> pki_token_password=<pki_token_password> ######################################## # Provide PKI-specific HSM token names # ######################################## pki_audit_signing_token=<hsm_token_name> pki_ssl_server_token=<hsm_token_name> pki_subsystem_token=<hsm_token_name> ################################## # Provide PKI-specific passwords # ################################## pki_admin_password=<pki_admin_password> pki_client_pkcs12_password=<pki_client_pkcs12_password> pki_ds_password=<pki_ds_password> ##################################### # Provide non-CA-specific passwords # ##################################### pki_client_database_password=<pki_client_database_password> ############################################################### # ONLY required if specifying a non-default PKI instance name # ############################################################### #pki_instance_name=<pki_instance_name> ############################################################## # ONLY required if specifying non-default PKI instance ports # ############################################################## #pki_http_port=<pki_http_port> #pki_https_port=<pki_https_port>
###################################################################### # ONLY required if specifying non-default 389 Directory Server ports # ###################################################################### #pki_ds_ldap_port=<pki_ds_ldap_port> #pki_ds_ldaps_port=<pki_ds_ldaps_port> ###################################################################### # ONLY required if PKI is using a Security Domain on a remote system # ###################################################################### #pki_ca_hostname=<pki_ca_hostname> #pki_issuing_ca_hostname=<pki_issuing_ca_hostname> #pki_issuing_ca_https_port=<pki_issuing_ca_https_port> #pki_security_domain_hostname=<pki_security_domain_hostname> #pki_security_domain_https_port=<pki_security_domain_https_port> ########################################################### # ONLY required for PKI using an existing Security Domain # ########################################################### # NOTE: pki_security_domain_password == pki_admin_password # of CA Security Domain Instance #pki_security_domain_password=<pki_admin_password> [Tomcat] ############################################################## # ONLY required if specifying non-default PKI instance ports # ############################################################## #pki_ajp_port=<pki_ajp_port> #pki_tomcat_server_port=<pki_tomcat_server_port> [CA] ####################################### # Provide CA-specific HSM token names # ####################################### pki_ca_signing_token=<hsm_token_name> pki_ocsp_signing_token=<hsm_token_name> ########################################################################### # ONLY required if 389 Directory Server for CA resides on a remote system # ########################################################################### #pki_ds_hostname=<389 hostname> [KRA] ######################################## # Provide KRA-specific HSM token names # ######################################## pki_storage_token=<hsm_token_name> pki_transport_token=<hsm_token_name> ############################################################################ # ONLY required if 389 Directory Server for KRA resides on a remote system # ############################################################################ #pki_ds_hostname=<389 hostname> [OCSP] ######################################### # Provide OCSP-specific HSM token names # ######################################### pki_ocsp_signing_token=<hsm_token_name> ############################################################################# # ONLY required if 389 Directory Server for OCSP resides on a remote system # ############################################################################# #pki_ds_hostname=<389 hostname> [TKS] ######################################## # Provide TKS-specific HSM token names # ######################################## ############################################################################ # ONLY required if 389 Directory Server for TKS resides on a remote system # ############################################################################ #pki_ds_hostname=<389 hostname> [TPS] ################################### # Provide TPS-specific parameters # ################################### pki_authdb_basedn=<dnsdomainname where hostname.b.c.d is dc=b,dc=c,dc=d> ######################################## # Provide TPS-specific HSM token names # ######################################## ############################################################################ # ONLY required if 389 Directory Server 
for TPS resides on a remote system # ############################################################################ #pki_ds_hostname=<389 hostname> ########################################################## # ONLY required if TPS requires a CA on a remote machine # ########################################################## #pki_ca_uri=https://<pki_ca_hostname>:<pki_ca_https_port> ####################################### # ONLY required if TPS requires a KRA # ####################################### #pki_enable_server_side_keygen=True ########################################################### # ONLY required if TPS requires a KRA on a remote machine # ########################################################### #pki_kra_uri=https://<pki_kra_hostname>:<pki_kra_https_port> ########################################################### # ONLY required if TPS requires a TKS on a remote machine # ########################################################### #pki_tks_uri=https://<pki_tks_hostname>:<pki_tks_https_port> Use the configuration file as described in Section 7.7, "Two-step Installation" , for example: pkispawn -s CA -f ./default_hsm.txt -vvv (substitute KRA, OCSP, TKS, or TPS for the subsystem being installed). Verify that the HSM contains the following certificates: certutil -L -d /etc/pki/pki-tomcat/alias -h token -f token.pwd Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI token:ca_signing CTu,Cu,Cu token:ca_ocsp_signing u,u,u token:subsystem u,u,u token:ca_audit_signing u,u,Pu token:sslserver/pki.example.com u,u,u 8.2.6. Installing a Subsystem Using Thales Luna HSM To install a subsystem instance that uses the Thales Luna HSM, follow the procedure detailed in Section 8.2.5, "Installing a Subsystem Using nCipher nShield Connect XC HSM" . The override file should be similar to the sample provided in Example 8.1, "An Override File Sample to Use with nCipher HSM" , differing in values related to the particular deployment. The following example provides a sample LunaSA header that is to be substituted for the header of the nCipher override file and to be used with the [DEFAULT], [Tomcat], [CA], [KRA], [OCSP], [TKS], and [TPS] sections provided in the aforementioned nCipher example. Example 8.2. A Sample of the LunaSA Override File Header ############################################################################### ############################################################################### ############################################################################### ## ## ## EXAMPLE: Configuration File used to override '/etc/pki/default.cfg' ## ## when using a LunaSA Hardware Security Module (HSM): ## ## ## ## ## ## # modutil -dbdir . -list ## ## ## ## Listing of PKCS #11 Modules ## ## ----------------------------------------------------------- ## ## 1. NSS Internal PKCS #11 Module ## ## slots: 2 slots attached ## ## status: loaded ## ## ## ## slot: NSS Internal Cryptographic Services ## ## token: NSS Generic Crypto Services ## ## ## ## slot: NSS User Private Key and Certificate Services ## ## token: NSS Certificate DB ## ## ## ## 2.
lunasa ## ## library name: /usr/safenet/lunaclient/lib/libCryptoki2_64.so ## ## slots: 4 slots attached ## ## status: loaded ## ## ## ## slot: LunaNet Slot ## ## token: dev-intca ## ## ## ## slot: Luna UHD Slot ## ## token: ## ## ## ## slot: Luna UHD Slot ## ## token: ## ## ## ## slot: Luna UHD Slot ## ## token: ## ## ----------------------------------------------------------- ## ## ## ## ## ## Based on the example above, substitute all password values, ## ## as well as the following values: ## ## ## ## <hsm_libfile>=/usr/safenet/lunaclient/lib/libCryptoki2_64.so ## ## <hsm_modulename>=lunasa ## ## <hsm_token_name>=dev-intca ## ## ## ############################################################################### ############################################################################### ###############################################################################
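As a quick check after installing either HSM's client software, you can confirm that the HSM's PKCS #11 module is visible to NSS in the same way the listings in the example headers above were generated. The following is a minimal sketch only; it assumes the NSS database directory of the default pki-tomcat instance, so substitute the NSS database directory that applies to your deployment: modutil -dbdir /etc/pki/pki-tomcat/alias -list The output should list the nfast or lunasa module together with its library path and the HSM token name, matching the values you substitute for <hsm_modulename>, <hsm_libfile>, and <hsm_token_name> in the override file.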
[ "/opt/nfast/bin/nfkminfo", "lunash:> hsm show FIPS 140-2 Operation: ===================== The HSM is in FIPS 140-2 approved operation mode.", "hardware- HSM_token_name = HSM_token_password", "restorecon -R /opt/nfast/", "/opt/nfast/sbin/init.d-ncipher restart", "############################################################################### ############################################################################### ############################################################################### ## ## ## EXAMPLE: Configuration File used to override '/etc/pki/default.cfg' ## ## when using an nCipher Hardware Security Module (HSM): ## ## ## ## ## ## # modutil -dbdir . -list ## ## ## ## Listing of PKCS #11 Modules ## ## ----------------------------------------------------------- ## ## 1. NSS Internal PKCS #11 Module ## ## slots: 2 slots attached ## ## status: loaded ## ## ## ## slot: NSS Internal Cryptographic Services ## ## token: NSS Generic Crypto Services ## ## ## ## slot: NSS User Private Key and Certificate Services ## ## token: NSS Certificate DB ## ## ## ## 2. nfast ## ## library name: /opt/nfast/toolkits/pkcs11/libcknfast.so ## ## slots: 2 slots attached ## ## status: loaded ## ## ## ## slot: <serial_number> Rt1 ## ## token: accelerator ## ## ## ## slot: <serial_number> Rt1 slot 0 ## ## token: <HSM_token_name> ## ## ----------------------------------------------------------- ## ## ## ## ## ## Based on the example above, substitute all password values, ## ## as well as the following values: ## ## ## ## <hsm_libfile>=/opt/nfast/toolkits/pkcs11/libcknfast.so ## ## <hsm_modulename>=nfast ## ## <hsm_token_name>=NHSM-CONN-XC ## ## ## ############################################################################### ############################################################################### ############################################################################### ######################### Provide HSM parameters # ########################## pki_hsm_enable=True pki_hsm_libfile=<hsm_libfile> pki_hsm_modulename=<hsm_modulename> pki_token_name=<hsm_token_name> pki_token_password=<pki_token_password> ######################################## Provide PKI-specific HSM token names # ######################################## pki_audit_signing_token=<hsm_token_name> pki_ssl_server_token=<hsm_token_name> pki_subsystem_token=<hsm_token_name> ################################## Provide PKI-specific passwords # ################################## pki_admin_password=<pki_admin_password> pki_client_pkcs12_password=<pki_client_pkcs12_password> pki_ds_password=<pki_ds_password> ##################################### Provide non-CA-specific passwords # ##################################### pki_client_database_password=<pki_client_database_password> ############################################################### ONLY required if specifying a non-default PKI instance name # ############################################################### #pki_instance_name=<pki_instance_name> ############################################################## ONLY required if specifying non-default PKI instance ports # ############################################################## #pki_http_port=<pki_http_port> #pki_https_port=<pki_https_port> ###################################################################### ONLY required if specifying non-default 389 Directory Server ports # ###################################################################### #pki_ds_ldap_port=<pki_ds_ldap_port> #pki_ds_ldaps_port=<pki_ds_ldaps_port> 
###################################################################### ONLY required if PKI is using a Security Domain on a remote system # ###################################################################### #pki_ca_hostname=<pki_ca_hostname> #pki_issuing_ca_hostname=<pki_issuing_ca_hostname> #pki_issuing_ca_https_port=<pki_issuing_ca_https_port> #pki_security_domain_hostname=<pki_security_domain_hostname> #pki_security_domain_https_port=<pki_security_domain_https_port> ########################################################### ONLY required for PKI using an existing Security Domain # ########################################################### NOTE: pki_security_domain_password == pki_admin_password of CA Security Domain Instance #pki_security_domain_password=<pki_admin_password> ############################################################# ONLY required if specifying non-default PKI instance ports # ############################################################## #pki_ajp_port=<pki_ajp_port> #pki_tomcat_server_port=<pki_tomcat_server_port> ###################################### Provide CA-specific HSM token names # ####################################### pki_ca_signing_token=<hsm_token_name> pki_ocsp_signing_token=<hsm_token_name> ########################################################################### ONLY required if 389 Directory Server for CA resides on a remote system # ########################################################################### #pki_ds_hostname=<389 hostname> ####################################### Provide KRA-specific HSM token names # ######################################## pki_storage_token=<hsm_token_name> pki_transport_token=<hsm_token_name> ############################################################################ ONLY required if 389 Directory Server for KRA resides on a remote system # ############################################################################ #pki_ds_hostname=<389 hostname> ######################################## Provide OCSP-specific HSM token names # ######################################### pki_ocsp_signing_token=<hsm_token_name> ############################################################################# ONLY required if 389 Directory Server for OCSP resides on a remote system # ############################################################################# #pki_ds_hostname=<389 hostname> ####################################### Provide TKS-specific HSM token names # ######################################## ############################################################################ ONLY required if 389 Directory Server for TKS resides on a remote system # ############################################################################ #pki_ds_hostname=<389 hostname> ################################## Provide TPS-specific parameters # ################################### pki_authdb_basedn=<dnsdomainname where hostname.b.c.d is dc=b,dc=c,dc=d> ######################################## Provide TPS-specific HSM token names # ######################################## ############################################################################ ONLY required if 389 Directory Server for TPS resides on a remote system # ############################################################################ #pki_ds_hostname=<389 hostname> ########################################################## ONLY required if TPS requires a CA on a remote machine # ########################################################## 
#pki_ca_uri=https://<pki_ca_hostname>:<pki_ca_https_port> ####################################### ONLY required if TPS requires a KRA # ####################################### #pki_enable_server_side_keygen=True ########################################################### ONLY required if TPS requires a KRA on a remote machine # ########################################################### #pki_kra_uri=https://<pki_kra_hostname>:<pki_kra_https_port> ########################################################### ONLY required if TPS requires a TKS on a remote machine # ########################################################### #pki_tks_uri=https://<pki_tks_hostname>:<pki_tks_https_port>", "pkispawn -s CA -f ./ default_hsm.txt -vvv", "pkispawn -s KRA -f ./ default_hsm.txt -vvv", "pkispawn -s OCSP -f ./ default_hsm.txt -vvv", "pkispawn -s TKS -f ./ default_hsm.txt -vvv", "pkispawn -s TPS -f ./ default_hsm.txt -vvv", "certutil -L -d /etc/pki/pki-tomcat/alias -h token -f token.pwd Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI token:ca_signing CTu,Cu,Cu token:ca_ocsp_signing u,u,u token:subsystem u,u,u token:ca_audit_signing u,u,Pu token:sslserver/pki.example.com u,u,u", "############################################################################### ############################################################################### ############################################################################### ## ## ## EXAMPLE: Configuration File used to override '/etc/pki/default.cfg' ## ## when using a LunaSA Hardware Security Module (HSM): ## ## ## ## ## ## # modutil -dbdir . -list ## ## ## ## Listing of PKCS #11 Modules ## ## ----------------------------------------------------------- ## ## 1. NSS Internal PKCS #11 Module ## ## slots: 2 slots attached ## ## status: loaded ## ## ## ## slot: NSS Internal Cryptographic Services ## ## token: NSS Generic Crypto Services ## ## ## ## slot: NSS User Private Key and Certificate Services ## ## token: NSS Certificate DB ## ## ## ## 2. lunasa ## ## library name: /usr/safenet/lunaclient/lib/libCryptoki2_64.so ## ## slots: 4 slots attached ## ## status: loaded ## ## ## ## slot: LunaNet Slot ## ## token: dev-intca ## ## ## ## slot: Luna UHD Slot ## ## token: ## ## ## ## slot: Luna UHD Slot ## ## token: ## ## ## ## slot: Luna UHD Slot ## ## token: ## ## ----------------------------------------------------------- ## ## ## ## ## ## Based on the example above, substitute all password values, ## ## as well as the following values: ## ## ## ## <hsm_libfile>=/usr/safenet/lunaclient/lib/libCryptoki2_64.so ## ## <hsm_modulename>=lunasa ## ## <hsm_token_name>=dev-intca ## ## ## ############################################################################### ############################################################################### ###############################################################################" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/Using_Hardware_Security_Modules_with_Subsystems
Chapter 5. Deleting OpenShift Serverless custom resource definitions
Chapter 5. Deleting OpenShift Serverless custom resource definitions After uninstalling OpenShift Serverless, the Operator and API custom resource definitions (CRDs) remain on the cluster. You can use the following procedure to remove the remaining CRDs. Important Removing the Operator and API CRDs also removes all resources that were defined by using them, including Knative services. 5.1. Removing OpenShift Serverless Operator and API CRDs Delete the Operator and API CRDs using the following procedure. Prerequisites Install the OpenShift CLI ( oc ). You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have uninstalled Knative Serving and removed the OpenShift Serverless Operator. Procedure To delete the remaining OpenShift Serverless CRDs, enter the following command: $ oc get crd -oname | grep 'knative.dev' | xargs oc delete
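After the deletion completes, you can optionally confirm that no Knative CRDs remain by re-running only the listing part of the same pipeline shown above; this is a verification sketch, not an additional required step: $ oc get crd -oname | grep 'knative.dev' An empty result means all OpenShift Serverless CRDs have been removed; if any names are still printed, run the delete command again for the remaining CRDs.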
[ "oc get crd -oname | grep 'knative.dev' | xargs oc delete" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/removing_openshift_serverless/deleting-serverless-crds
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Verify the rotational flag on your VMDKs before deploying object storage devices (OSDs) on them; a quick way to inspect this flag is shown after these prerequisite notes. For more information, see the knowledgebase article Override device rotational flag in ODF environment . Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with KMS using the Token authentication method . When the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP → Client Profile → Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP → Registration Token → New Registration Token . Copy the token to be used in the next step. To register the client, navigate to KMIP → Registered Clients → Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings → Interfaces → Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys → Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment.
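Regarding the rotational flag check mentioned in the prerequisites above, one way to inspect it on a node is the generic lsblk sketch below; this is standard Linux tooling rather than an ODF-specific procedure, and the knowledgebase article linked above describes the ODF-specific override: $ lsblk -o NAME,ROTA A ROTA value of 1 means the kernel reports the device as rotational. VMDKs are often reported as rotational even when they are backed by SSDs, which is why the override described in the knowledgebase article may be needed.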
Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with a minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide. Disaster recovery requirements [Technology Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. For deploying using local storage devices, see the requirements for installing OpenShift Data Foundation using local storage devices . These are not applicable for deployment using dynamic storage devices. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses one or more of the available raw block devices. The devices you use must be empty; the disks must not contain any remaining Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs). A generic way to check this is sketched at the end of this chapter. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for on-premises OpenShift Container Platform deployments within the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as the first option for a no-data-loss DR solution deployed over multiple data centers with low-latency networks. To know in detail how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable both Flexible scaling and Arbiter at the same time because they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster, whereas in an Arbiter cluster you need to add at least one node in each of the two data zones. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide .
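To help confirm that a candidate local device meets the "empty, no PVs, VGs, or LVs" requirement described above, the following generic checks can be run on the node. This is a sketch using standard Linux tooling rather than an ODF-specific procedure, and /dev/sdX is a placeholder for the device being checked: lsblk -f /dev/sdX (should show no filesystem or LVM2_member signature) pvs (the device should not appear in the list of LVM physical volumes) wipefs -a /dev/sdX (destructive: clears leftover signatures; run it only if you are certain the disk can be wiped)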
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_vmware_vsphere/preparing_to_deploy_openshift_data_foundation
Chapter 1. Common object reference
Chapter 1. Common object reference 1.1. com.coreos.monitoring.v1.AlertmanagerList schema Description AlertmanagerList is a list of Alertmanager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Alertmanager) List of alertmanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.2. com.coreos.monitoring.v1.PodMonitorList schema Description PodMonitorList is a list of PodMonitor Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodMonitor) List of podmonitors. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.3. com.coreos.monitoring.v1.ProbeList schema Description ProbeList is a list of Probe Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Probe) List of probes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.4. com.coreos.monitoring.v1.PrometheusList schema Description PrometheusList is a list of Prometheus Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Prometheus) List of prometheuses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.5. com.coreos.monitoring.v1.PrometheusRuleList schema Description PrometheusRuleList is a list of PrometheusRule Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PrometheusRule) List of prometheusrules. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.6. com.coreos.monitoring.v1.ServiceMonitorList schema Description ServiceMonitorList is a list of ServiceMonitor Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceMonitor) List of servicemonitors. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.7. com.coreos.monitoring.v1.ThanosRulerList schema Description ThanosRulerList is a list of ThanosRuler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ThanosRuler) List of thanosrulers. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.8. com.coreos.monitoring.v1beta1.AlertmanagerConfigList schema Description AlertmanagerConfigList is a list of AlertmanagerConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertmanagerConfig) List of alertmanagerconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.9. com.coreos.operators.v1.OLMConfigList schema Description OLMConfigList is a list of OLMConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OLMConfig) List of olmconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.10. com.coreos.operators.v1.OperatorGroupList schema Description OperatorGroupList is a list of OperatorGroup Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorGroup) List of operatorgroups. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.11. com.coreos.operators.v1.OperatorList schema Description OperatorList is a list of Operator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Operator) List of operators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.12. com.coreos.operators.v1alpha1.CatalogSourceList schema Description CatalogSourceList is a list of CatalogSource Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CatalogSource) List of catalogsources. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.13. com.coreos.operators.v1alpha1.ClusterServiceVersionList schema Description ClusterServiceVersionList is a list of ClusterServiceVersion Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterServiceVersion) List of clusterserviceversions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.14. 
com.coreos.operators.v1alpha1.InstallPlanList schema Description InstallPlanList is a list of InstallPlan Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (InstallPlan) List of installplans. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.15. com.coreos.operators.v1alpha1.SubscriptionList schema Description SubscriptionList is a list of Subscription Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Subscription) List of subscriptions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.16. com.coreos.operators.v2.OperatorConditionList schema Description OperatorConditionList is a list of OperatorCondition Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorCondition) List of operatorconditions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.17. com.github.openshift.api.apps.v1.DeploymentConfigList schema Description DeploymentConfigList is a collection of deployment configs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DeploymentConfig) Items is a list of deployment configs kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.18. com.github.openshift.api.authorization.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.19. com.github.openshift.api.authorization.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.20. com.github.openshift.api.authorization.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.21. com.github.openshift.api.authorization.v1.RoleList schema Description RoleList is a collection of Roles Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.22. com.github.openshift.api.build.v1.BuildConfigList schema Description BuildConfigList is a collection of BuildConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BuildConfig) items is a list of build configs kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.23. com.github.openshift.api.build.v1.BuildList schema Description BuildList is a collection of Builds. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Build) items is a list of builds kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.24. com.github.openshift.api.image.v1.ImageList schema Description ImageList is a list of Image objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Image) Items is a list of images kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.25. com.github.openshift.api.image.v1.ImageStreamList schema Description ImageStreamList is a list of ImageStream objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageStream) Items is a list of imageStreams kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.26. com.github.openshift.api.image.v1.ImageStreamTagList schema Description ImageStreamTagList is a list of ImageStreamTag objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageStreamTag) Items is the list of image stream tags kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.27. com.github.openshift.api.image.v1.ImageTagList schema Description ImageTagList is a list of ImageTag objects. When listing image tags, the image field is not populated. Tags are returned in alphabetical order by image stream and then tag. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageTag) Items is the list of image stream tags kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.28. com.github.openshift.api.oauth.v1.OAuthAccessTokenList schema Description OAuthAccessTokenList is a collection of OAuth access tokens Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthAccessToken) Items is the list of OAuth access tokens kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.29. com.github.openshift.api.oauth.v1.OAuthAuthorizeTokenList schema Description OAuthAuthorizeTokenList is a collection of OAuth authorization tokens Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthAuthorizeToken) Items is the list of OAuth authorization tokens kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.30. com.github.openshift.api.oauth.v1.OAuthClientAuthorizationList schema Description OAuthClientAuthorizationList is a collection of OAuth client authorizations Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthClientAuthorization) Items is the list of OAuth client authorizations kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.31. com.github.openshift.api.oauth.v1.OAuthClientList schema Description OAuthClientList is a collection of OAuth clients Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthClient) Items is the list of OAuth clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.32. com.github.openshift.api.oauth.v1.UserOAuthAccessTokenList schema Description UserOAuthAccessTokenList is a collection of access tokens issued on behalf of the requesting user Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (UserOAuthAccessToken) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.33. com.github.openshift.api.project.v1.ProjectList schema Description ProjectList is a list of Project objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Project) Items is the list of projects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.34. com.github.openshift.api.quota.v1.AppliedClusterResourceQuotaList schema Description AppliedClusterResourceQuotaList is a collection of AppliedClusterResourceQuotas Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AppliedClusterResourceQuota) Items is a list of AppliedClusterResourceQuota kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.35. com.github.openshift.api.route.v1.RouteList schema Description RouteList is a collection of Routes. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Route) items is a list of routes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.36. com.github.openshift.api.security.v1.RangeAllocationList schema Description RangeAllocationList is a list of RangeAllocations objects Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RangeAllocation) List of RangeAllocations. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.37. com.github.openshift.api.template.v1.BrokerTemplateInstanceList schema Description BrokerTemplateInstanceList is a list of BrokerTemplateInstance objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BrokerTemplateInstance) items is a list of BrokerTemplateInstances kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.38. com.github.openshift.api.template.v1.TemplateInstanceList schema Description TemplateInstanceList is a list of TemplateInstance objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (TemplateInstance) items is a list of Templateinstances kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.39. com.github.openshift.api.template.v1.TemplateList schema Description TemplateList is a list of Template objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Template) Items is a list of templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.40. com.github.openshift.api.user.v1.GroupList schema Description GroupList is a collection of Groups Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Group) Items is the list of groups kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.41. com.github.openshift.api.user.v1.IdentityList schema Description IdentityList is a collection of Identities Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Identity) Items is the list of identities kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.42. com.github.openshift.api.user.v1.UserList schema Description UserList is a collection of Users Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (User) Items is the list of users kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.43. com.github.operator-framework.api.pkg.lib.version.OperatorVersion schema Description OperatorVersion is a wrapper around semver.Version which supports correct marshaling to YAML and JSON. Type string 1.44. com.github.operator-framework.api.pkg.operators.v1alpha1.APIServiceDefinitions schema Description APIServiceDefinitions declares all of the extension apis managed or required by an operator being ran by ClusterServiceVersion. Type object Schema Property Type Description owned array (APIServiceDescription) required array (APIServiceDescription) 1.45. com.github.operator-framework.api.pkg.operators.v1alpha1.CustomResourceDefinitions schema Description CustomResourceDefinitions declares all of the CRDs managed or required by an operator being ran by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. Type object Schema Property Type Description owned array (CRDDescription) required array (CRDDescription) 1.46. com.github.operator-framework.api.pkg.operators.v1alpha1.InstallMode schema Description InstallMode associates an InstallModeType with a flag representing if the CSV supports it Type object Required type supported Schema Property Type Description supported boolean type string 1.47. com.github.operator-framework.operator-lifecycle-manager.pkg.package-server.apis.operators.v1.PackageManifestList schema Description PackageManifestList is a list of PackageManifest objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PackageManifest) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.48. io.cncf.cni.k8s.v1.NetworkAttachmentDefinitionList schema Description NetworkAttachmentDefinitionList is a list of NetworkAttachmentDefinition Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkAttachmentDefinition) List of network-attachment-definitions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.49. io.cncf.cni.whereabouts.v1alpha1.IPPoolList schema Description IPPoolList is a list of IPPool Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IPPool) List of ippools. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.50. io.cncf.cni.whereabouts.v1alpha1.OverlappingRangeIPReservationList schema Description OverlappingRangeIPReservationList is a list of OverlappingRangeIPReservation Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OverlappingRangeIPReservation) List of overlappingrangeipreservations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.51. io.k8s.api.admissionregistration.v1.MutatingWebhookConfigurationList schema Description MutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MutatingWebhookConfiguration) List of MutatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.52. io.k8s.api.admissionregistration.v1.ValidatingWebhookConfigurationList schema Description ValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ValidatingWebhookConfiguration) List of ValidatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.53. io.k8s.api.apps.v1.ControllerRevisionList schema Description ControllerRevisionList is a resource containing a list of ControllerRevision objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerRevision) Items is the list of ControllerRevisions kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.54. io.k8s.api.apps.v1.DaemonSetList schema Description DaemonSetList is a collection of daemon sets. 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DaemonSet) A list of daemon sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.55. io.k8s.api.apps.v1.DeploymentList schema Description DeploymentList is a list of Deployments. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Deployment) Items is the list of Deployments. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.56. io.k8s.api.apps.v1.ReplicaSetList schema Description ReplicaSetList is a collection of ReplicaSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicaSet) List of ReplicaSets. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.57. io.k8s.api.apps.v1.StatefulSetList schema Description StatefulSetList is a collection of StatefulSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StatefulSet) Items is the list of stateful sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.58. io.k8s.api.autoscaling.v2.HorizontalPodAutoscalerList schema Description HorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HorizontalPodAutoscaler) items is the list of horizontal pod autoscaler objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. 1.59. io.k8s.api.batch.v1.CronJobList schema Description CronJobList is a collection of cron jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CronJob) items is the list of CronJobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.60. io.k8s.api.batch.v1.JobList schema Description JobList is a collection of jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Job) items is the list of Jobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.61. io.k8s.api.certificates.v1.CertificateSigningRequestList schema Description CertificateSigningRequestList is a collection of CertificateSigningRequest objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CertificateSigningRequest) items is a collection of CertificateSigningRequest objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.62. io.k8s.api.coordination.v1.LeaseList schema Description LeaseList is a list of Lease objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Lease) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.63. io.k8s.api.core.v1.ComponentStatusList schema Description Status of all the conditions for the component as a list of ComponentStatus objects. Deprecated: This API is deprecated in v1.19+ Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ComponentStatus) List of ComponentStatus objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.64. io.k8s.api.core.v1.ConfigMapList schema Description ConfigMapList is a resource containing a list of ConfigMap objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConfigMap) Items is the list of ConfigMaps. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.65. io.k8s.api.core.v1.ConfigMapVolumeSource schema Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 1.66. io.k8s.api.core.v1.CSIVolumeSource schema Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Schema Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 1.67. io.k8s.api.core.v1.EndpointsList schema Description EndpointsList is a list of endpoints. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Endpoints) List of endpoints. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.68. io.k8s.api.core.v1.EnvVar schema Description EnvVar represents an environment variable present in a Container. Type object Required name Schema Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty. 1.69. io.k8s.api.core.v1.EventList schema Description EventList is a list of events. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) List of events kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.70. io.k8s.api.core.v1.EventSource schema Description EventSource contains information for an event. Type object Schema Property Type Description component string Component from which the event is generated. host string Node name on which the event is generated. 1.71. io.k8s.api.core.v1.LimitRangeList schema Description LimitRangeList is a list of LimitRange items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (LimitRange) Items is a list of LimitRange objects. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.72. io.k8s.api.core.v1.LocalObjectReference schema Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Schema Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 1.73. io.k8s.api.core.v1.NamespaceCondition schema Description NamespaceCondition contains details about state of namespace. Type object Required type status Schema Property Type Description lastTransitionTime Time message string reason string status string Status of the condition, one of True, False, Unknown. type string Type of namespace controller condition. 1.74. io.k8s.api.core.v1.NamespaceList schema Description NamespaceList is a list of Namespaces. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Namespace) Items is the list of Namespace objects in the list. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.75. io.k8s.api.core.v1.NodeList schema Description NodeList is the whole list of all Nodes which have been registered with master. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Node) List of nodes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.76. io.k8s.api.core.v1.ObjectReference schema Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Schema Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 1.77. io.k8s.api.core.v1.PersistentVolumeClaim schema Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim. ..spec Description:: + PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. 
For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object VolumeResourceRequirements describes the storage resource requirements for a volume. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim, but it's not allowed to reset this field to an empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. ..spec.dataSource Description:: + TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. 
kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced ..spec.dataSourceRef Description:: + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. ..spec.resources Description:: + VolumeResourceRequirements describes the storage resource requirements for a volume. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ..status Description:: + PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources object (Quantity) allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. 
This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about the state of a PVC currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. When unset, there is no VolumeAttributesClass applied to this PersistentVolumeClaim. This is an alpha field and requires enabling VolumeAttributesClass feature. modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound ..status.conditions Description:: + conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array ..status.conditions[] Description:: + PersistentVolumeClaimCondition contains details about the state of a PVC Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about the last transition. reason string reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string ..status.modifyVolumeStatus Description:: + ModifyVolumeStatus represents the status object of ControllerModifyVolume operation Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of the following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. - Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver.
To resolve the error, a valid VolumeAttributesClass needs to be specified - "Pending" Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC currently being reconciled 1.78. io.k8s.api.core.v1.PersistentVolumeClaimList schema Description PersistentVolumeClaimList is a list of PersistentVolumeClaim items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolumeClaim) items is a list of persistent volume claims. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.79. io.k8s.api.core.v1.PersistentVolumeList schema Description PersistentVolumeList is a list of PersistentVolume items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolume) items is a list of persistent volumes. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.80. io.k8s.api.core.v1.PersistentVolumeSpec schema Description PersistentVolumeSpec is the specification of a persistent volume. Type object Schema Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore AWSElasticBlockStoreVolumeSource awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile AzureFilePersistentVolumeSource azureFile represents an Azure File Service mount on the host and bind mount to the pod. 
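All of the *List schemas in this section share the same envelope of apiVersion, kind, metadata (ListMeta), and items. As a hedged sketch, a PersistentVolumeClaimList returned by the API server has roughly this shape (item contents abbreviated, values illustrative):
  apiVersion: v1
  kind: PersistentVolumeClaimList
  metadata:
    resourceVersion: "12345"        # illustrative ListMeta field
  items:
  - metadata:
      name: example-claim           # placeholder
      namespace: example-ns         # placeholder
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    status:
      phase: Bound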
capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs CephFSPersistentVolumeSource cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderPersistentVolumeSource cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md claimRef ObjectReference claimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding csi CSIPersistentVolumeSource csi represents storage that is handled by an external CSI driver (Beta feature). fc FCVolumeSource fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume FlexPersistentVolumeSource flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource flocker represents a Flocker volume attached to a kubelet's host machine and exposed to the pod for its usage. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk glusterfs GlusterfsPersistentVolumeSource glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod. Provisioned by an admin. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource hostPath represents a directory on the host. Provisioned by a developer or tester. This is useful for single-node development and testing only! On-host storage is not supported in any way and WILL NOT WORK in a multi-node cluster. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIPersistentVolumeSource iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. local LocalVolumeSource local represents directly-attached storage with node affinity mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs NFSVolumeSource nfs represents an NFS mount on the host. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs nodeAffinity VolumeNodeAffinity nodeAffinity defines constraints that limit what nodes this volume can be accessed from. This field influences the scheduling of pods that use this volume. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. photonPersistentDisk PhotonPersistentDiskVolumeSource photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource portworxVolume represents a portworx volume attached and mounted on kubelets host machine quobyte QuobyteVolumeSource quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDPersistentVolumeSource rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOPersistentVolumeSource scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. storageos StorageOSPersistentVolumeSource storageOS represents a StorageOS volume that is attached to the kubelet's host machine and mounted into the pod More info: https://examples.k8s.io/volumes/storageos/README.md volumeAttributesClassName string Name of VolumeAttributesClass to which this persistent volume belongs. Empty value is not allowed. When this field is not set, it indicates that this volume does not belong to any VolumeAttributesClass. This field is mutable and can be changed by the CSI driver after a volume has been updated successfully to a new class. For an unbound PersistentVolume, the volumeAttributesClassName will be matched with unbound PersistentVolumeClaims during the binding process. This is an alpha field and requires enabling VolumeAttributesClass feature. volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. vsphereVolume VsphereVirtualDiskVolumeSource vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 1.81. io.k8s.api.core.v1.PodList schema Description PodList is a list of Pods. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Pod) List of pods. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
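To ground the PersistentVolumeSpec schema (1.80) described above, here is a hedged sketch of a statically provisioned NFS PersistentVolume; the server address, export path, and StorageClass name are placeholders.
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: example-pv                # placeholder
  spec:
    capacity:
      storage: 50Gi
    accessModes:
    - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    storageClassName: nfs-static    # placeholder
    volumeMode: Filesystem
    mountOptions:
    - nfsvers=4.1
    nfs:
      server: nfs.example.com       # placeholder NFS server
      path: /exports/data           # placeholder export path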
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.82. io.k8s.api.core.v1.PodTemplateList schema Description PodTemplateList is a list of PodTemplates. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodTemplate) List of pod templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.83. io.k8s.api.core.v1.PodTemplateSpec schema Description PodTemplateSpec describes the data a pod should have when created from a template Type object Schema Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PodSpec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.84. io.k8s.api.core.v1.ReplicationControllerList schema Description ReplicationControllerList is a collection of replication controllers. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicationController) List of replication controllers. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.85. io.k8s.api.core.v1.ResourceQuotaList schema Description ResourceQuotaList is a list of ResourceQuota items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ResourceQuota) Items is a list of ResourceQuota objects. 
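As a hedged sketch of the PodTemplateSpec schema (1.83), the following fragment shows the metadata/spec pair as it would be embedded under the template field of a controller such as a ReplicationController or Deployment; the label, container name, and image are placeholders.
  template:
    metadata:
      labels:
        app: example                # placeholder label
    spec:
      containers:
      - name: web
        image: nginx:1.25           # placeholder image
        resources:                  # ResourceRequirements, see 1.88
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi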
More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.86. io.k8s.api.core.v1.ResourceQuotaSpec schema Description ResourceQuotaSpec defines the desired hard limits to enforce for Quota. Type object Schema Property Type Description hard object (Quantity) hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector ScopeSelector scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 1.87. io.k8s.api.core.v1.ResourceQuotaStatus schema Description ResourceQuotaStatus defines the enforced hard limits and observed use. Type object Schema Property Type Description hard object (Quantity) Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used object (Quantity) Used is the current observed total usage of the resource in the namespace. 1.88. io.k8s.api.core.v1.ResourceRequirements schema Description ResourceRequirements describes the compute resource requirements. Type object Schema Property Type Description claims array (ResourceClaim) Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 1.89. io.k8s.api.core.v1.Secret schema Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data object (string) Data contains the secret data. Each key must consist of alphanumeric characters, '-', '_' or '.'. The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. 
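For the Secret schema (1.89) introduced above, a hedged sketch of an Opaque Secret that mixes base64-encoded data with clear-text stringData (all names and values are placeholders):
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-credentials       # placeholder
  type: Opaque
  data:
    ca.crt: LS0tLS1CRUdJTi4uLg==    # base64-encoded placeholder value
  stringData:
    username: admin                 # written in clear text; merged into data on write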
Described in https://tools.ietf.org/html/rfc4648#section-4 immutable boolean Immutable, if set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata stringData object (string) stringData allows specifying non-binary secret data in string form. It is provided as a write-only input field for convenience. All keys and values are merged into the data field on write, overwriting any existing values. The stringData field is never output when reading from the API. type string Used to facilitate programmatic handling of secret data. More info: https://kubernetes.io/docs/concepts/configuration/secret/#secret-types 1.90. io.k8s.api.core.v1.SecretList schema Description SecretList is a list of Secret. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.91. io.k8s.api.core.v1.SecretVolumeSource schema Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' 
path or start with '..'. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 1.92. io.k8s.api.core.v1.ServiceAccountList schema Description ServiceAccountList is a list of ServiceAccount objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceAccount) List of ServiceAccounts. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.93. io.k8s.api.core.v1.ServiceList schema Description ServiceList holds a list of services. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Service) List of services kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.94. io.k8s.api.core.v1.Toleration schema Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Schema Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. 
operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 1.95. io.k8s.api.core.v1.TopologySelectorTerm schema Description A topology selector term represents the result of label queries. A null or empty topology selector term matches no objects. The requirements of them are ANDed. It provides a subset of functionality as NodeSelectorTerm. This is an alpha feature and may change in the future. Type object Schema Property Type Description matchLabelExpressions array (TopologySelectorLabelRequirement) A list of topology selector requirements by labels. 1.96. io.k8s.api.core.v1.TypedLocalObjectReference schema Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Schema Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 1.97. io.k8s.api.discovery.v1.EndpointSliceList schema Description EndpointSliceList represents a list of endpoint slices Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EndpointSlice) items is the list of endpoint slices kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.98. io.k8s.api.events.v1.EventList schema Description EventList is a list of Event objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
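As a sketch of the Toleration schema (1.94) documented above, a pod that tolerates a hypothetical dedicated=gpu:NoSchedule taint indefinitely and tolerates the node.kubernetes.io/not-ready taint for five minutes would carry tolerations like these in its spec:
  tolerations:
  - key: dedicated                  # hypothetical taint key
    operator: Equal
    value: gpu
    effect: NoSchedule
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300          # evicted 300 seconds after the taint appears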
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.99. io.k8s.api.flowcontrol.v1.FlowSchemaList schema Description FlowSchemaList is a list of FlowSchema objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FlowSchema) items is a list of FlowSchemas. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.100. io.k8s.api.flowcontrol.v1.PriorityLevelConfigurationList schema Description PriorityLevelConfigurationList is a list of PriorityLevelConfiguration objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityLevelConfiguration) items is a list of request-priorities. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.101. io.k8s.api.networking.v1.IngressClassList schema Description IngressClassList is a collection of IngressClasses. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IngressClass) items is the list of IngressClasses. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.102. io.k8s.api.networking.v1.IngressList schema Description IngressList is a collection of Ingress. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Ingress) items is the list of Ingress. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.103. io.k8s.api.networking.v1.NetworkPolicyList schema Description NetworkPolicyList is a list of NetworkPolicy objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkPolicy) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.104. io.k8s.api.node.v1.RuntimeClassList schema Description RuntimeClassList is a list of RuntimeClass objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RuntimeClass) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.105. io.k8s.api.policy.v1.PodDisruptionBudgetList schema Description PodDisruptionBudgetList is a collection of PodDisruptionBudgets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodDisruptionBudget) Items is a list of PodDisruptionBudgets kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.106. io.k8s.api.rbac.v1.AggregationRule schema Description AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole Type object Schema Property Type Description clusterRoleSelectors array (LabelSelector) ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added 1.107. io.k8s.api.rbac.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.108. io.k8s.api.rbac.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.109. io.k8s.api.rbac.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.110. 
io.k8s.api.rbac.v1.RoleList schema Description RoleList is a collection of Roles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.111. io.k8s.api.scheduling.v1.PriorityClassList schema Description PriorityClassList is a collection of priority classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityClass) items is the list of PriorityClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.112. io.k8s.api.storage.v1.CSIDriverList schema Description CSIDriverList is a collection of CSIDriver objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIDriver) items is the list of CSIDriver kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.113. io.k8s.api.storage.v1.CSINodeList schema Description CSINodeList is a collection of CSINode objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSINode) items is the list of CSINode kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.114. io.k8s.api.storage.v1.CSIStorageCapacityList schema Description CSIStorageCapacityList is a collection of CSIStorageCapacity objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIStorageCapacity) items is the list of CSIStorageCapacity objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.115. io.k8s.api.storage.v1.StorageClassList schema Description StorageClassList is a collection of storage classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageClass) items is the list of StorageClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.116. io.k8s.api.storage.v1.VolumeAttachmentList schema Description VolumeAttachmentList is a collection of VolumeAttachment objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeAttachment) items is the list of VolumeAttachments kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.117. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionList schema Description CustomResourceDefinitionList is a list of CustomResourceDefinition objects. 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CustomResourceDefinition) items list individual CustomResourceDefinition objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.118. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaProps schema Description JSONSchemaProps is a JSON-Schema following Specification Draft 4 ( http://json-schema.org/ ). Type object Schema Property Type Description $ref string $schema string additionalItems `` additionalProperties `` allOf array (undefined) anyOf array (undefined) default JSON default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false. definitions object (undefined) dependencies object (undefined) description string enum array (JSON) example JSON exclusiveMaximum boolean exclusiveMinimum boolean externalDocs ExternalDocumentation format string format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: - bsonobjectid: a bson object ID, i.e. a 24 characters hex string - uri: an URI as parsed by Golang net/url.ParseRequestURI - email: an email address as parsed by Golang net/mail.ParseAddress - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. - ipv4: an IPv4 IP as parsed by Golang net.ParseIP - ipv6: an IPv6 IP as parsed by Golang net.ParseIP - cidr: a CIDR as parsed by Golang net.ParseCIDR - mac: a MAC address as parsed by Golang net.ParseMAC - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" - isbn10: an ISBN10 number string like "0321751043" - isbn13: an ISBN13 number string like "978-0321751041" - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$ with any non digit characters mixed in - ssn: a U.S.
social security number following the regex ^\d{3}[- ]?\d{2}[- ]?\d{4}$ - hexcolor: an hexadecimal color code like "#FFFFFF", following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ - rgbcolor: an RGB color code like rgb like "rgb(255,255,255)" - byte: base64 encoded binary data - password: any kind of string - date: a date string like "2006-01-02" as defined by full-date in RFC3339 - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. id string items `` maxItems integer maxLength integer maxProperties integer maximum number minItems integer minLength integer minProperties integer minimum number multipleOf number not `` nullable boolean oneOf array (undefined) pattern string patternProperties object (undefined) properties object (undefined) required array (string) title string type string uniqueItems boolean x-kubernetes-embedded-resource boolean x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. kind, apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata). x-kubernetes-int-or-string boolean x-kubernetes-int-or-string specifies that this value is either an integer or a string. If this is true, an empty type is allowed and type as child of anyOf is permitted if following one of the following patterns: 1) anyOf: - type: integer - type: string 2) allOf: - anyOf: - type: integer - type: string - ... zero or more x-kubernetes-list-map-keys array (string) x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type map by specifying the keys used as the index of the map. This tag MUST only be used on lists that have the "x-kubernetes-list-type" extension set to "map". Also, the values specified for this attribute must be a scalar typed field of the child structure (no nesting is supported). The properties specified must either be required or have a default value, to ensure those properties are present for all list items. x-kubernetes-list-type string x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values: 1) atomic : the list is treated as a single entity, like a scalar. Atomic lists will be entirely replaced when updated. This extension may be used on any type of list (struct, scalar, ... ). 2) set : Sets are lists that must not have multiple items with the same value. Each value must be a scalar, an object with x-kubernetes-map-type atomic or an array with x-kubernetes-list-type atomic . 3) map : These lists are like maps in that their elements have a non-index key used to identify them. Order is preserved upon merge. The map tag must only be used on a list with elements of type object. Defaults to atomic for arrays. x-kubernetes-map-type string x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values: 1) granular : These maps are actual maps (key-value pairs) and each field is independent from each other (they can each be manipulated by separate actors). This is the default behaviour for all maps.
1.119. io.k8s.apimachinery.pkg.api.resource.Quantity schema Description Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number> ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= "+" | "-" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> | <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei <decimalSI> ::= m | "" | k | M | G | T | P | E <decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (For example, 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in "canonical form". This means that the exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in the mantissa) such that: no precision is lost; no fractional digits will be emitted; the exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as "1500m"; 1.5Gi will be serialized as "1536Mi". Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Type string
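A minimal Go sketch of the canonical-form behaviour described above, using the k8s.io/apimachinery resource package; the 2Gi and 250m values are arbitrary sample quantities, and the printed strings follow the documented examples.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The documented canonical-form examples: the suffix is adjusted so that
	// no fractional digits are emitted and the suffix is as large as possible.
	decimal := resource.MustParse("1.5")
	binary := resource.MustParse("1.5Gi")
	fmt.Println(decimal.String()) // "1500m"
	fmt.Println(binary.String())  // "1536Mi"

	// A Quantity remembers which suffix family it was parsed with
	// (binary vs. decimal SI) and re-serializes in the same family.
	mem := resource.NewQuantity(2*1024*1024*1024, resource.BinarySI)
	fmt.Println(mem.String()) // "2Gi"

	// AsInt64 returns the value in base units only when it is a whole number
	// that fits in an int64; "250m" (0.25) is not, so ok is false.
	fraction := resource.MustParse("250m")
	if v, ok := fraction.AsInt64(); ok {
		fmt.Println(v)
	} else {
		fmt.Println("250m cannot be represented as a whole number of base units")
	}
}
```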
1.120. io.k8s.apimachinery.pkg.apis.meta.v1.Condition schema Description Condition contains details for one aspect of the current state of this API Resource. Type object Required type status lastTransitionTime reason message Schema Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human-readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase.
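A minimal Go sketch of how a controller might populate these fields, assuming a []metav1.Condition slice held in a custom resource status; the Ready condition type, the reason string, and the message are made-up examples. The meta.SetStatusCondition helper from k8s.io/apimachinery fills in lastTransitionTime and only changes it when the status value actually flips between calls.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Status conditions for a hypothetical custom resource; the fields mirror
	// the required properties listed above.
	var conditions []metav1.Condition

	meta.SetStatusCondition(&conditions, metav1.Condition{
		Type:               "Ready",                // CamelCase or foo.example.com/CamelCase
		Status:             metav1.ConditionFalse,  // one of True, False, Unknown
		ObservedGeneration: 12,                     // the .metadata.generation this reflects
		Reason:             "MinimumReplicasUnmet", // programmatic, CamelCase, never empty
		Message:            "2 of 3 replicas are available",
	})

	// SetStatusCondition sets lastTransitionTime when it is unset, and keeps
	// it unchanged on later calls unless the status value changes.
	ready := meta.FindStatusCondition(conditions, "Ready")
	fmt.Println(ready.Status, ready.LastTransitionTime.IsZero())
}
```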
1.121. io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions schema Description DeleteOptions may be provided when deleting an API object. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dryRun array (string) When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per-object value if not specified. Zero means delete immediately. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. preconditions Preconditions Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
1.122. io.k8s.apimachinery.pkg.apis.meta.v1.GroupVersionKind schema Description GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling. Type object Required group version kind Schema Property Type Description group string kind string version string
1.123. io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector schema Description A label selector is a label query over a set of resources. The results of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Schema Property Type Description matchExpressions array (LabelSelectorRequirement) matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
1.124. io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta schema Description ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}. Type object Schema Property Type Description continue string continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message. remainingItemCount integer remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is estimating the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact. resourceVersion string String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.
1.125. io.k8s.apimachinery.pkg.apis.meta.v1.MicroTime schema Description MicroTime is a version of Time with microsecond level precision. Type string
1.126. io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta schema Description ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. Type object Schema Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers array (string) Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. 
If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels managedFields array (ManagedFieldsEntry) ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces ownerReferences array (OwnerReference) List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. uid string UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 1.127. io.k8s.apimachinery.pkg.apis.meta.v1.Status schema Description Status is a return value for calls that don't return other objects. 
Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.128. io.k8s.apimachinery.pkg.apis.meta.v1.Time schema Description Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers. Type string 1.129. io.k8s.apimachinery.pkg.apis.meta.v1.WatchEvent schema Description Event represents a single event to a watched resource. Type object Required type object Schema Property Type Description object RawExtension Object is: * If Type is Added or Modified: the new state of the object. * If Type is Deleted: the state of the object immediately before deletion. * If Type is Error: *Status is recommended; other types may make sense depending on context. type string 1.130. io.k8s.apimachinery.pkg.runtime.RawExtension schema Description RawExtension is used to hold extensions in external versions. To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. You also need to register your various plugin types. So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The step is to copy (using pkg/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. (TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.) Type object 1.131. io.k8s.apimachinery.pkg.util.intstr.IntOrString schema Description IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. This allows you to have, for example, a JSON field that can accept a name or number. Type string 1.132. 
io.k8s.kube-aggregator.pkg.apis.apiregistration.v1.APIServiceList schema Description APIServiceList is a list of APIService objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIService) Items is the list of APIService kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.133. io.k8s.migration.v1alpha1.StorageStateList schema Description StorageStateList is a list of StorageState Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageState) List of storagestates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.134. io.k8s.migration.v1alpha1.StorageVersionMigrationList schema Description StorageVersionMigrationList is a list of StorageVersionMigration Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageVersionMigration) List of storageversionmigrations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.135. io.k8s.networking.policy.v1alpha1.AdminNetworkPolicyList schema Description AdminNetworkPolicyList is a list of AdminNetworkPolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AdminNetworkPolicy) List of adminnetworkpolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.136. io.k8s.networking.policy.v1alpha1.BaselineAdminNetworkPolicyList schema Description BaselineAdminNetworkPolicyList is a list of BaselineAdminNetworkPolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BaselineAdminNetworkPolicy) List of baselineadminnetworkpolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.137. io.k8s.storage.snapshot.v1.VolumeSnapshotClassList schema Description VolumeSnapshotClassList is a list of VolumeSnapshotClass Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshotClass) List of volumesnapshotclasses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.138. io.k8s.storage.snapshot.v1.VolumeSnapshotContentList schema Description VolumeSnapshotContentList is a list of VolumeSnapshotContent Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshotContent) List of volumesnapshotcontents. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.139. io.k8s.storage.snapshot.v1.VolumeSnapshotList schema Description VolumeSnapshotList is a list of VolumeSnapshot Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshot) List of volumesnapshots. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.140. io.metal3.v1alpha1.BareMetalHostList schema Description BareMetalHostList is a list of BareMetalHost Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BareMetalHost) List of baremetalhosts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.141. io.metal3.v1alpha1.BMCEventSubscriptionList schema Description BMCEventSubscriptionList is a list of BMCEventSubscription Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BMCEventSubscription) List of bmceventsubscriptions. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.142. io.metal3.v1alpha1.DataImageList schema Description DataImageList is a list of DataImage Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DataImage) List of dataimages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.143. io.metal3.v1alpha1.FirmwareSchemaList schema Description FirmwareSchemaList is a list of FirmwareSchema Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FirmwareSchema) List of firmwareschemas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.144. io.metal3.v1alpha1.HardwareDataList schema Description HardwareDataList is a list of HardwareData Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HardwareData) List of hardwaredata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.145. io.metal3.v1alpha1.HostFirmwareComponentsList schema Description HostFirmwareComponentsList is a list of HostFirmwareComponents Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HostFirmwareComponents) List of hostfirmwarecomponents. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.146. io.metal3.v1alpha1.HostFirmwareSettingsList schema Description HostFirmwareSettingsList is a list of HostFirmwareSettings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HostFirmwareSettings) List of hostfirmwaresettings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.147. io.metal3.v1alpha1.PreprovisioningImageList schema Description PreprovisioningImageList is a list of PreprovisioningImage Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PreprovisioningImage) List of preprovisioningimages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.148. io.metal3.v1alpha1.ProvisioningList schema Description ProvisioningList is a list of Provisioning Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Provisioning) List of provisionings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.149. io.openshift.apiserver.v1.APIRequestCountList schema Description APIRequestCountList is a list of APIRequestCount Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIRequestCount) List of apirequestcounts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.150. io.openshift.authorization.v1.RoleBindingRestrictionList schema Description RoleBindingRestrictionList is a list of RoleBindingRestriction Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBindingRestriction) List of rolebindingrestrictions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.151. 
io.openshift.autoscaling.v1.ClusterAutoscalerList schema Description ClusterAutoscalerList is a list of ClusterAutoscaler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterAutoscaler) List of clusterautoscalers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.152. io.openshift.autoscaling.v1beta1.MachineAutoscalerList schema Description MachineAutoscalerList is a list of MachineAutoscaler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineAutoscaler) List of machineautoscalers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.153. io.openshift.cloudcredential.v1.CredentialsRequestList schema Description CredentialsRequestList is a list of CredentialsRequest Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CredentialsRequest) List of credentialsrequests. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.154. 
io.openshift.config.v1.APIServerList schema Description APIServerList is a list of APIServer Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIServer) List of apiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.155. io.openshift.config.v1.AuthenticationList schema Description AuthenticationList is a list of Authentication Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Authentication) List of authentications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.156. io.openshift.config.v1.BuildList schema Description BuildList is a list of Build Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Build) List of builds. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.157. io.openshift.config.v1.ClusterOperatorList schema Description ClusterOperatorList is a list of ClusterOperator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterOperator) List of clusteroperators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.158. io.openshift.config.v1.ClusterVersionList schema Description ClusterVersionList is a list of ClusterVersion Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterVersion) List of clusterversions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.159. io.openshift.config.v1.ConsoleList schema Description ConsoleList is a list of Console Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Console) List of consoles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.160. io.openshift.config.v1.DNSList schema Description DNSList is a list of DNS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNS) List of dnses. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.161. io.openshift.config.v1.FeatureGateList schema Description FeatureGateList is a list of FeatureGate Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FeatureGate) List of featuregates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.162. io.openshift.config.v1.ImageContentPolicyList schema Description ImageContentPolicyList is a list of ImageContentPolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageContentPolicy) List of imagecontentpolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.163. io.openshift.config.v1.ImageDigestMirrorSetList schema Description ImageDigestMirrorSetList is a list of ImageDigestMirrorSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageDigestMirrorSet) List of imagedigestmirrorsets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. 
Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.164. io.openshift.config.v1.ImageList schema Description ImageList is a list of Image Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Image) List of images. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.165. io.openshift.config.v1.ImageTagMirrorSetList schema Description ImageTagMirrorSetList is a list of ImageTagMirrorSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageTagMirrorSet) List of imagetagmirrorsets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.166. io.openshift.config.v1.InfrastructureList schema Description InfrastructureList is a list of Infrastructure Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Infrastructure) List of infrastructures. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.167. 
io.openshift.config.v1.IngressList schema Description IngressList is a list of Ingress Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Ingress) List of ingresses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.168. io.openshift.config.v1.NetworkList schema Description NetworkList is a list of Network Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Network) List of networks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.169. io.openshift.config.v1.NodeList schema Description NodeList is a list of Node Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Node) List of nodes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.170. io.openshift.config.v1.OAuthList schema Description OAuthList is a list of OAuth Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuth) List of oauths. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.171. io.openshift.config.v1.OperatorHubList schema Description OperatorHubList is a list of OperatorHub Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorHub) List of operatorhubs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.172. io.openshift.config.v1.ProjectList schema Description ProjectList is a list of Project Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Project) List of projects. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.173. io.openshift.config.v1.ProxyList schema Description ProxyList is a list of Proxy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Proxy) List of proxies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.174. io.openshift.config.v1.SchedulerList schema Description SchedulerList is a list of Scheduler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Scheduler) List of schedulers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.175. io.openshift.console.v1.ConsoleCLIDownloadList schema Description ConsoleCLIDownloadList is a list of ConsoleCLIDownload Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleCLIDownload) List of consoleclidownloads. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.176. io.openshift.console.v1.ConsoleExternalLogLinkList schema Description ConsoleExternalLogLinkList is a list of ConsoleExternalLogLink Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleExternalLogLink) List of consoleexternalloglinks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.177. io.openshift.console.v1.ConsoleLinkList schema

Description: ConsoleLinkList is a list of ConsoleLink
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleLink) | List of consolelinks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.178. io.openshift.console.v1.ConsoleNotificationList schema

Description: ConsoleNotificationList is a list of ConsoleNotification
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleNotification) | List of consolenotifications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.179. io.openshift.console.v1.ConsolePluginList schema

Description: ConsolePluginList is a list of ConsolePlugin
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsolePlugin) | List of consoleplugins. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
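The console list schemas above all share the standard Kubernetes list envelope, so the simplest way to see them populated is to ask the API server for the plural resource and print the whole object. A minimal sketch, assuming an authenticated oc session and the cluster-scoped plural names consolelinks and consoleplugins:

$ oc get consolelinks -o yaml     # server responds with kind: ConsoleLinkList
$ oc get consoleplugins -o yaml   # server responds with kind: ConsolePluginList

In both cases the apiVersion, kind, items, and metadata fields in the output correspond to the four properties documented in the schema tables.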
1.180. io.openshift.console.v1.ConsoleQuickStartList schema

Description: ConsoleQuickStartList is a list of ConsoleQuickStart
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleQuickStart) | List of consolequickstarts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.181. io.openshift.console.v1.ConsoleSampleList schema

Description: ConsoleSampleList is a list of ConsoleSample
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleSample) | List of consolesamples. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.182. io.openshift.console.v1.ConsoleYAMLSampleList schema

Description: ConsoleYAMLSampleList is a list of ConsoleYAMLSample
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleYAMLSample) | List of consoleyamlsamples. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
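Because kind and apiVersion identify the REST endpoint that serves each list, the same data can also be fetched from the raw list URL. A minimal sketch, assuming the quick start resources are served from the console.openshift.io/v1 group and version (verify with oc api-resources | grep consolequickstart):

$ oc get consolequickstarts -o yaml
$ oc get --raw /apis/console.openshift.io/v1/consolequickstarts

The second command returns the same ConsoleQuickStartList as JSON, straight from the endpoint the schema describes.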
1.183. io.openshift.helm.v1beta1.HelmChartRepositoryList schema

Description: HelmChartRepositoryList is a list of HelmChartRepository
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (HelmChartRepository) | List of helmchartrepositories. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.184. io.openshift.helm.v1beta1.ProjectHelmChartRepositoryList schema

Description: ProjectHelmChartRepositoryList is a list of ProjectHelmChartRepository
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ProjectHelmChartRepository) | List of projecthelmchartrepositories. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.185. io.openshift.machine.v1.ControlPlaneMachineSetList schema

Description: ControlPlaneMachineSetList is a list of ControlPlaneMachineSet
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ControlPlaneMachineSet) | List of controlplanemachinesets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.186.
io.openshift.machine.v1beta1.MachineHealthCheckList schema Description MachineHealthCheckList is a list of MachineHealthCheck Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineHealthCheck) List of machinehealthchecks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.187. io.openshift.machine.v1beta1.MachineList schema Description MachineList is a list of Machine Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Machine) List of machines. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.188. io.openshift.machine.v1beta1.MachineSetList schema Description MachineSetList is a list of MachineSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineSet) List of machinesets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.189. io.openshift.machineconfiguration.v1.ContainerRuntimeConfigList schema Description ContainerRuntimeConfigList is a list of ContainerRuntimeConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ContainerRuntimeConfig) List of containerruntimeconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.190. io.openshift.machineconfiguration.v1.ControllerConfigList schema Description ControllerConfigList is a list of ControllerConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerConfig) List of controllerconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.191. io.openshift.machineconfiguration.v1.KubeletConfigList schema Description KubeletConfigList is a list of KubeletConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeletConfig) List of kubeletconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.192. io.openshift.machineconfiguration.v1.MachineConfigList schema Description MachineConfigList is a list of MachineConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfig) List of machineconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.193. io.openshift.machineconfiguration.v1.MachineConfigPoolList schema Description MachineConfigPoolList is a list of MachineConfigPool Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfigPool) List of machineconfigpools. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.194. io.openshift.monitoring.v1.AlertingRuleList schema Description AlertingRuleList is a list of AlertingRule Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertingRule) List of alertingrules. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.195. io.openshift.monitoring.v1.AlertRelabelConfigList schema Description AlertRelabelConfigList is a list of AlertRelabelConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertRelabelConfig) List of alertrelabelconfigs. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.196. io.openshift.network.cloud.v1.CloudPrivateIPConfigList schema Description CloudPrivateIPConfigList is a list of CloudPrivateIPConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CloudPrivateIPConfig) List of cloudprivateipconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.197. io.openshift.operator.controlplane.v1alpha1.PodNetworkConnectivityCheckList schema Description PodNetworkConnectivityCheckList is a list of PodNetworkConnectivityCheck Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodNetworkConnectivityCheck) List of podnetworkconnectivitychecks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.198. io.openshift.operator.imageregistry.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.199. io.openshift.operator.imageregistry.v1.ImagePrunerList schema Description ImagePrunerList is a list of ImagePruner Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImagePruner) List of imagepruners. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.200. io.openshift.operator.ingress.v1.DNSRecordList schema Description DNSRecordList is a list of DNSRecord Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNSRecord) List of dnsrecords. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.201. io.openshift.operator.network.v1.EgressRouterList schema Description EgressRouterList is a list of EgressRouter Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressRouter) List of egressrouters. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.202. io.openshift.operator.network.v1.OperatorPKIList schema Description OperatorPKIList is a list of OperatorPKI Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorPKI) List of operatorpkis. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.203. io.openshift.operator.samples.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.204. io.openshift.operator.v1.AuthenticationList schema Description AuthenticationList is a list of Authentication Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Authentication) List of authentications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.205. 
io.openshift.operator.v1.CloudCredentialList schema Description CloudCredentialList is a list of CloudCredential Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CloudCredential) List of cloudcredentials. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.206. io.openshift.operator.v1.ClusterCSIDriverList schema Description ClusterCSIDriverList is a list of ClusterCSIDriver Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterCSIDriver) List of clustercsidrivers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.207. io.openshift.operator.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.208. io.openshift.operator.v1.ConsoleList schema Description ConsoleList is a list of Console Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Console) List of consoles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.209. io.openshift.operator.v1.CSISnapshotControllerList schema Description CSISnapshotControllerList is a list of CSISnapshotController Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSISnapshotController) List of csisnapshotcontrollers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.210. io.openshift.operator.v1.DNSList schema Description DNSList is a list of DNS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNS) List of dnses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.211. io.openshift.operator.v1.EtcdList schema Description EtcdList is a list of Etcd Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Etcd) List of etcds. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.212. io.openshift.operator.v1.IngressControllerList schema Description IngressControllerList is a list of IngressController Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IngressController) List of ingresscontrollers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.213. io.openshift.operator.v1.InsightsOperatorList schema Description InsightsOperatorList is a list of InsightsOperator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (InsightsOperator) List of insightsoperators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.214. io.openshift.operator.v1.KubeAPIServerList schema Description KubeAPIServerList is a list of KubeAPIServer Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeAPIServer) List of kubeapiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.215. io.openshift.operator.v1.KubeControllerManagerList schema Description KubeControllerManagerList is a list of KubeControllerManager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeControllerManager) List of kubecontrollermanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.216. io.openshift.operator.v1.KubeSchedulerList schema Description KubeSchedulerList is a list of KubeScheduler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeScheduler) List of kubeschedulers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.217. io.openshift.operator.v1.KubeStorageVersionMigratorList schema Description KubeStorageVersionMigratorList is a list of KubeStorageVersionMigrator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeStorageVersionMigrator) List of kubestorageversionmigrators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.218. io.openshift.operator.v1.MachineConfigurationList schema

Description: MachineConfigurationList is a list of MachineConfiguration
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (MachineConfiguration) | List of machineconfigurations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.219. io.openshift.operator.v1.NetworkList schema

Description: NetworkList is a list of Network
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Network) | List of networks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.220. io.openshift.operator.v1.OpenShiftAPIServerList schema

Description: OpenShiftAPIServerList is a list of OpenShiftAPIServer
Type: object
Required: items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (OpenShiftAPIServer) | List of openshiftapiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
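Note that a Network kind exists in both the config.openshift.io and the operator.openshift.io groups, so the NetworkList you receive depends on which group you query. A minimal sketch, assuming both groups are present on the cluster; the fully qualified resource name removes the ambiguity:

$ oc get networks.operator.openshift.io -o yaml   # operator NetworkList, typically one item named cluster
$ oc get networks.config.openshift.io -o yaml     # config NetworkList

Using the qualified form is a safe habit whenever two groups define the same kind.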
io.openshift.operator.v1.OpenShiftControllerManagerList schema Description OpenShiftControllerManagerList is a list of OpenShiftControllerManager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OpenShiftControllerManager) List of openshiftcontrollermanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.222. io.openshift.operator.v1.ServiceCAList schema Description ServiceCAList is a list of ServiceCA Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceCA) List of servicecas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.223. io.openshift.operator.v1.StorageList schema Description StorageList is a list of Storage Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Storage) List of storages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.224. 
io.openshift.operator.v1alpha1.ImageContentSourcePolicyList schema Description ImageContentSourcePolicyList is a list of ImageContentSourcePolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageContentSourcePolicy) List of imagecontentsourcepolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.225. io.openshift.performance.v2.PerformanceProfileList schema Description PerformanceProfileList is a list of PerformanceProfile Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PerformanceProfile) List of performanceprofiles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.226. io.openshift.quota.v1.ClusterResourceQuotaList schema Description ClusterResourceQuotaList is a list of ClusterResourceQuota Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterResourceQuota) List of clusterresourcequotas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.227. 
io.openshift.security.v1.SecurityContextConstraintsList schema Description SecurityContextConstraintsList is a list of SecurityContextConstraints Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (SecurityContextConstraints) List of securitycontextconstraints. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.228. io.openshift.tuned.v1.ProfileList schema Description ProfileList is a list of Profile Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Profile) List of profiles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.229. io.openshift.tuned.v1.TunedList schema Description TunedList is a list of Tuned Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Tuned) List of tuneds. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.230. io.x-k8s.cluster.infrastructure.v1beta1.Metal3RemediationList schema Description Metal3RemediationList is a list of Metal3Remediation Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Metal3Remediation) List of metal3remediations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.231. io.x-k8s.cluster.infrastructure.v1beta1.Metal3RemediationTemplateList schema Description Metal3RemediationTemplateList is a list of Metal3RemediationTemplate Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Metal3RemediationTemplate) List of metal3remediationtemplates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.232. io.x-k8s.cluster.ipam.v1beta1.IPAddressClaimList schema Description IPAddressClaimList is a list of IPAddressClaim Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IPAddressClaim) List of ipaddressclaims. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.233. io.x-k8s.cluster.ipam.v1beta1.IPAddressList schema Description IPAddressList is a list of IPAddress Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IPAddress) List of ipaddresses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.234. org.ovn.k8s.v1.AdminPolicyBasedExternalRouteList schema Description AdminPolicyBasedExternalRouteList is a list of AdminPolicyBasedExternalRoute Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AdminPolicyBasedExternalRoute) List of adminpolicybasedexternalroutes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.235. org.ovn.k8s.v1.EgressFirewallList schema Description EgressFirewallList is a list of EgressFirewall Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressFirewall) List of egressfirewalls. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.236. org.ovn.k8s.v1.EgressIPList schema Description EgressIPList is a list of EgressIP Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressIP) List of egressips. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.237. org.ovn.k8s.v1.EgressQoSList schema Description EgressQoSList is a list of EgressQoS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressQoS) List of egressqoses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.238. org.ovn.k8s.v1.EgressServiceList schema Description EgressServiceList is a list of EgressService Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressService) List of egressservices. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
[ "<quantity> ::= <signedNumber><suffix>", "(Note that <suffix> may be empty, from the \"\" case in <decimalSI>.)", "(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)", "(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)", "type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.Object `json:\"myPlugin\"` }", "type PluginA struct { AOption string `json:\"aOption\"` }", "type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.RawExtension `json:\"myPlugin\"` }", "type PluginA struct { AOption string `json:\"aOption\"` }", "{ \"kind\":\"MyAPIObject\", \"apiVersion\":\"v1\", \"myPlugin\": { \"kind\":\"PluginA\", \"aOption\":\"foo\", }, }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/common_object_reference/api-object-reference
B.2. How to Set Up Red Hat Virtualization Manager to Use Ethtool
B.2. How to Set Up Red Hat Virtualization Manager to Use Ethtool You can configure ethtool properties for host network interface cards from the Administration Portal. The ethtool_opts key is not available by default and needs to be added to the Manager using the engine configuration tool. You also need to install the required VDSM hook package on the hosts. Adding the ethtool_opts Key to the Manager On the Manager, run the following command to add the key: # engine-config -s UserDefinedNetworkCustomProperties=ethtool_opts=.* --cver=4.4 Restart the ovirt-engine service: # systemctl restart ovirt-engine.service On each host on which you want to configure ethtool properties, install the VDSM hook package. The package is available by default on Red Hat Virtualization Host but needs to be installed on Red Hat Enterprise Linux hosts. # dnf install vdsm-hook-ethtool-options The ethtool_opts key is now available in the Administration Portal. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts to apply ethtool properties to logical networks.
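For illustration only, the value entered for the ethtool_opts custom property is a string of ethtool options that the vdsm-hook-ethtool-options hook applies to the host NIC. The interface names em1 and em2 and the option values below are placeholders rather than part of this procedure; substitute options that your NIC driver and ethtool version actually support (see the ethtool man page):
--coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half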
[ "engine-config -s UserDefinedNetworkCustomProperties=ethtool_opts=.* --cver=4.4", "systemctl restart ovirt-engine.service", "dnf install vdsm-hook-ethtool-options" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/how_to_set_up_red_hat_enterprise_virtualization_manager_to_use_ethtool
13.2. Using the Infinispan CDI Module
13.2. Using the Infinispan CDI Module The Infinispan CDI module can be used for the following purposes: To configure and inject Infinispan caches into CDI Beans and Java EE components. To configure cache managers. To control storage and retrieval using CDI annotations. 13.2.1. Configure and Inject Infinispan Caches 13.2.1.1. Inject an Infinispan Cache An Infinispan cache is one of the multiple components that can be injected into the project's CDI beans. The following code snippet illustrates how to inject a cache instance into the CDI bean: 13.2.1.2. Inject a Remote Infinispan Cache The code snippet to inject a normal cache is slightly modified to inject a remote Infinispan cache, as follows: 13.2.1.3. Set the Injection's Target Cache The following are the three steps to set an injection's target cache: Create a qualifier annotation. Add a producer class. Inject the desired class. 13.2.1.3.1. Create a Qualifier Annotation To use CDI to return a specific cache, create custom cache qualifier annotations as follows: Example 13.1. Custom Cache Qualifier Use the created @SmallCache qualifier to specify how to create specific caches. 13.2.1.3.2. Add a Producer Class The following code snippet illustrates how the @SmallCache qualifier (created in the previous step) specifies a way to create a cache: Example 13.2. Using the @SmallCache Qualifier The elements in the code snippet are: @ConfigureCache specifies the name of the cache. @SmallCache is the cache qualifier. 13.2.1.3.3. Inject the Desired Class Use the @SmallCache qualifier and the new producer class to inject a specific cache into the CDI bean as follows: 13.2.2. Configure Cache Managers with CDI A Red Hat JBoss Data Grid Cache Manager (both embedded and remote) can be configured using CDI. Whether configuring an embedded or remote cache manager, the first step is to specify a default configuration that is annotated to act as a producer. 13.2.2.1. Specify the Default Configuration Specify a method annotated as a producer for the Red Hat JBoss Data Grid configuration object to replace the default Infinispan Configuration. The following sample configuration illustrates this step: Example 13.3. Specifying the Default Configuration Note CDI adds a @Default qualifier if no other qualifiers are provided. If a @Produces annotation is placed in a method that returns a Configuration instance, the method is invoked when a Configuration object is required. In the provided example configuration, the method creates a new Configuration object which is subsequently configured and returned. 13.2.2.2. Override the Creation of the Embedded Cache Manager Prerequisites Section 13.2.2.1, "Specify the Default Configuration" Creating Non Clustered Caches After a producer method is annotated, this method will be called when creating an EmbeddedCacheManager , as follows: Example 13.4. Create a Non Clustered Cache The @ApplicationScoped annotation specifies that the method is only called once. Creating Clustered Caches The following configuration can be used to create an EmbeddedCacheManager that can create clustered caches. Example 13.5.
Create Clustered Caches Invoke the Method to Generate an EmbeddedCacheManager The method annotated with @Produces in the non clustered method generates Configuration objects. The methods in the clustered cache example annotated with @Produces generate EmbeddedCacheManager objects. Add an injection as follows in your CDI Bean to invoke the appropriate annotated method. This generates an EmbeddedCacheManager and injects it into the code at runtime. Example 13.6. Generate an EmbeddedCacheManager 13.2.2.3. Configure a Remote Cache Manager The RemoteCacheManager is configured in a manner similar to EmbeddedCacheManagers , as follows: Example 13.7. Configuring the Remote Cache Manager 13.2.2.4. Configure Multiple Cache Managers with a Single Class A single class can be used to configure multiple cache managers and remote cache managers based on the created qualifiers. An example of this is as follows: Example 13.8. Configure Multiple Cache Managers 13.2.3. Storage and Retrieval Using CDI Annotations 13.2.3.1. Configure Cache Annotations Specific CDI annotations are accepted for the JCache (JSR-107) specification. All included annotations are located in the javax.cache package. The annotations intercept method calls on CDI beans and perform storage and retrieval tasks as a result of these interceptions. Important CDI is supported in both Remote Client-Server Mode and Library Mode; however, annotations such as @CachePut, @CacheRemove, @CacheRemoveAll, and @CacheResult cannot be used in Remote Client-Server Mode. 13.2.3.2. Enable Cache Annotations Interceptors can be added to the CDI bean archive using the beans.xml file. Adding the following code adds interceptors such as the CacheResultInterceptor , CachePutInterceptor , CacheRemoveEntryInterceptor and the CacheRemoveAllInterceptor : Example 13.9. Adding Interceptors Note The listed interceptors must appear in the beans.xml file for Red Hat JBoss Data Grid to use javax.cache annotations. 13.2.3.3. Caching the Result of a Method Invocation A common practice for time or resource intensive operations is to save the results in a cache for future access. The following code is an example of such an operation: A common approach is to cache the results of this method call and to check the cache when the result is required. The following is an example of a code snippet that looks up the result of such an operation in a cache. If the results are not found, the code snippet runs the toCelsiusFormatted method again and stores the result in the cache. In such cases, the Infinispan CDI module can be used to eliminate all the extra code in the related examples. Annotate the method with the @CacheResult annotation instead, as follows: Due to the annotation, Infinispan checks the cache and, if the results are not found, it invokes the toCelsiusFormatted() method. Note The Infinispan CDI module allows checking the cache for saved results, but this approach should be carefully considered before application. If the results of the call should always be fresh data, or if the cache reading requires a remote network lookup or deserialization from a cache loader, checking the cache before the method invocation can be counterproductive. 13.2.3.3.1. Specify the Cache Used Add the following optional attribute ( cacheName ) to the @CacheResult annotation to specify the cache to check for results of the method call: 13.2.3.3.2.
Cache Keys for Cached Results By default, the @CacheResult annotation creates a key for the results fetched from a cache. The key consists of a combination of all parameters in the relevant method. Create a custom key using the @CacheKey annotation as follows: Example 13.10. Create a Custom Key In the specified example, only the values of p1 and p2 are used to create the cache key. The value of dontCare is not used when determining the cache key. 13.2.3.3.3. Generate a Custom Key Generate a custom key as follows: The listed method constructs a custom key. This key is passed as part of the value generated by the first parameter of the invocation context. To specify the custom key generation scheme, add the optional parameter cacheKeyGenerator to the @CacheResult annotation as follows: Using the provided method, p1 contains the custom key. 13.2.4. Cache Operations 13.2.4.1. Update a Cache Entry When the method that contains the @CachePut annotation is invoked, a parameter (normally the one annotated with @CacheValue ) is stored in the cache. Example 13.11. Sample @CachePut Annotated Method Further customization is possible using cacheName and cacheKeyGenerator in the @CachePut method. Additionally, some parameters in the invoked method may be annotated with @CacheKey to control key generation. See Also: Section 13.2.3.3.2, "Cache Keys for Cached Results" 13.2.4.2. Remove an Entry from the Cache The following is an example of a @CacheRemoveEntry annotated method that is used to remove an entry from the cache: Example 13.12. Removing an Entry from the Cache The annotation accepts the optional cacheName and cacheKeyGenerator attributes. 13.2.4.3. Clear the Cache Invoke the @CacheRemoveAll method to clear all entries from the cache. Example 13.13. Clear All Entries from the Cache with @CacheRemoveAll As displayed in the example, this annotation accepts an optional cacheName attribute.
[ "public class MyCDIBean { @Inject Cache<String, String> cache; }", "public class MyCDIBean { @Inject RemoteCache<String, String> remoteCache; }", "@javax.inject.Qualifier @Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD}) @Retention(RetentionPolicy.RUNTIME) @Documented public @interface SmallCache {}", "import org.infinispan.configuration.cache.Configuration; import org.infinispan.configuration.cache.ConfigurationBuilder; import org.infinispan.cdi.ConfigureCache; import javax.enterprise.inject.Produces; public class CacheCreator { @ConfigureCache(\"smallcache\") @SmallCache @Produces public Configuration specialCacheCfg() { return new ConfigurationBuilder() .eviction() .strategy(EvictionStrategy.LRU) .maxEntries(10) .build(); } }", "public class MyCDIBean { @Inject @SmallCache Cache<String, String> mySmallCache; }", "public class Config { @Produces public Configuration defaultEmbeddedConfiguration () { return new ConfigurationBuilder() .eviction() .strategy(EvictionStrategy.LRU) .maxEntries(100) .build(); } }", "public class Config { @Produces @ApplicationScoped public EmbeddedCacheManager defaultEmbeddedCacheManager() { Configuration cfg = new ConfigurationBuilder() .eviction() .strategy(EvictionStrategy.LRU) .maxEntries(150) .build(); return new DefaultCacheManager(cfg); } }", "public class Config { @Produces @ApplicationScoped public EmbeddedCacheManager defaultClusteredCacheManager() { GlobalConfiguration g = new GlobalConfigurationBuilder() .clusteredDefault() .transport() .clusterName(\"InfinispanCluster\") .build(); Configuration cfg = new ConfigurationBuilder() .eviction() .strategy(EvictionStrategy.LRU) .maxEntries(150) .build(); return new DefaultCacheManager(g, cfg); } }", "@Inject EmbeddedCacheManager cacheManager;", "public class Config { @Produces @ApplicationScoped public RemoteCacheManager defaultRemoteCacheManager() { Configuration conf = new ConfigurationBuilder().addServer().host(ADDRESS).port(PORT).build(); return new RemoteCacheManager(conf); } }}", "public class Config { @Produces @ApplicationScoped public org.infinispan.manager.EmbeddedCacheManager defaultEmbeddedCacheManager() { Configuration cfg = new ConfigurationBuilder() .eviction() .strategy(EvictionStrategy.LRU) .maxEntries(150) .build(); return new DefaultCacheManager(cfg); } @Produces @ApplicationScoped @DefaultClustered public org.infinispan.manager.EmbeddedCacheManager defaultClusteredCacheManager() { GlobalConfiguration g = new GlobalConfigurationBuilder() .clusteredDefault() .transport() .clusterName(\"InfinispanCluster\") .build(); Configuration cfg = new ConfigurationBuilder() .eviction() .strategy(EvictionStrategy.LRU) .maxEntries(150) .build(); return new DefaultCacheManager(g, cfg); } @Produces @ApplicationScoped @DefaultRemote public RemoteCacheManager defaultRemoteCacheManager() { org.infinispan.client.hotrod.configuration.Configuration conf = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder().addServer().host(ADDRESS).port(PORT).build(); return new RemoteCacheManager(conf); } @Produces @ApplicationScoped @RemoteCacheInDifferentDataCentre public RemoteCacheManager newRemoteCacheManager() { org.infinispan.client.hotrod.configuration.Configuration confid = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder().addServer().host(ADDRESS_FAR_AWAY).port(PORT).build(); return new RemoteCacheManager(confid); } }", "<beans xmlns=\"http://java.sun.som/xml/ns/javaee\" xmlns:xsi=\"http://www/w3/org/2001/XMLSchema-instance\" 
xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/beans_1_0.xsd\" > <interceptors> <class> org.infinispan.cdi.interceptor.CacheResultInterceptor </class> <class> org.infinispan.cdi.interceptor.CachePutInterceptor </class> <class> org.infinispan.cdi.interceptor.CacheRemoveEntryInterceptor </class> <class> org.infinispan.cdi.interceptor.CacheRemoveAllInterceptor </class> </interceptors> </beans>", "public String toCelsiusFormatted(float fahrenheit) { return NumberFormat.getInstance() .format((fahrenheit * 5 / 9) - 32) + \" degrees Celsius\"; }", "float f = getTemperatureInFahrenheit(); Cache<Float, String> fahrenheitToCelsiusCache = getCache(); String celsius = fahrenheitToCelsiusCache = get(f); if (celsius == null) { celsius = toCelsiusFormatted(f); fahrenheitToCelsiusCache.put(f, celsius); }", "@javax.cache.annotation.CacheResult public String toCelsiusFormatted(float fahrenheit) { return NumberFormat.getInstance() .format((fahrenheit * 5 / 9) - 32) + \" degrees Celsius\"; }", "@CacheResult(cacheName = \"mySpecialCache\") public String doSomething(String parameter) { <!-- Additional configuration information here --> }", "@CacheResult public String doSomething (@CacheKey String p1, @CacheKey String p2, String dontCare) { <!-- Additional configuration information here --> }", "import javax.cache.annotation.CacheKey; import javax.cache.annotation.CacheKeyGenerator; import javax.cache.annotation.CacheKeyInvocationContext; import java.lang.annotation.Annotation; public class MyCacheKeyGenerator implements CacheKeyGenerator { @Override public CacheKey generateCacheKey(CacheKeyInvocationContext<? extends Annotation> ctx) { return new MyCacheKey( ctx.getAllParameters()[0].getValue() ); } }", "@CacheResult(cacheKeyGenerator = MyCacheKeyGenerator.class) public void doSomething(String p1, String p2) { <!-- Additional configuration information here --> }", "import javax.cache.annotation.CachePut; import javax.cache.annotation.CacheKey; import javax.cache.annotation.CacheValue; @CachePut (cacheName = \"personCache\") public void updatePerson (@CacheKey long personId, @CacheValue Person newPerson) { <!-- Additional configuration information here --> }", "import javax.cache.annotation.CacheRemoveEntry; import javax.cache.annotation.CacheKey; @CacheRemoveEntry (cacheName = \"cacheOfPeople\") public void changePersonName (@CacheKey long personId, string newName { <!-- Additional configuration information here --> }", "import javax.cache.annotation.CacheRemoveAll; @CacheRemoveAll (cacheName = \"statisticsCache\") public void resetStatistics() { <!-- Additional configuration information here --> }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-Using_the_Infinispan_CDI_Module
Chapter 100. Openapi Java
Chapter 100. Openapi Java The Rest DSL can be integrated with the camel-openapi-java module which is used for exposing the REST services and their APIs using OpenApi . The camel-openapi-java module can be used from the REST components (without the need for servlet). 100.1. Dependencies When using openapi-java with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-openapi-java-starter</artifactId> </dependency> 100.2. Using OpenApi in rest-dsl You can enable the OpenApi api from the rest-dsl by configuring the apiContextPath dsl as shown below: public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // configure we want to use netty-http as the component for the rest DSL // and we enable json binding mode restConfiguration().component("netty-http").bindingMode(RestBindingMode.json) // and output using pretty print .dataFormatProperty("prettyPrint", "true") // setup context path and port number that netty will use .contextPath("/").port(8080) // add OpenApi api-doc out of the box .apiContextPath("/api-doc") .apiProperty("api.title", "User API").apiProperty("api.version", "1.2.3") // and enable CORS .apiProperty("cors", "true"); // this user REST service is json only rest("/user").description("User rest service") .consumes("application/json").produces("application/json") .get("/{id}").description("Find user by id").outType(User.class) .param().name("id").type(path).description("The id of the user to get").dataType("int").endParam() .to("bean:userService?method=getUser(${header.id})") .put().description("Updates or create a user").type(User.class) .param().name("body").type(body).description("The user to update or create").endParam() .to("bean:userService?method=updateUser") .get("/findAll").description("Find all users").outType(User[].class) .to("bean:userService?method=listUsers"); } } 100.3. Options The OpenApi module can be configured using the following options. To configure using a servlet, you use the init-param options. When configuring directly in the rest-dsl, you use the appropriate method, such as enableCORS , host , or contextPath , in the DSL. The options with api.xxx are configured using the apiProperty dsl. Option Type Description cors Boolean Whether to enable CORS. Notice this only enables CORS for the api browser, and not the actual access to the REST services. Is default false. openapi.version String OpenApi spec version. Is default 3.0. host String To setup the hostname. If not configured camel-openapi-java will calculate the name as localhost based. schemes String The protocol schemes to use. Multiple values can be separated by comma such as "http,https". The default value is "http". base.path String Required : To setup the base path where the REST services are available. The path is relative (eg do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/base.path api.path String To setup the path where the API is available (eg /api-docs). The path is relative (eg do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/api.path So using relative paths is much easier. See above for an example. api.version String The version of the api. Is default 0.0.0. api.title String The title of the application.
api.description String A short description of the application. api.termsOfService String A URL to the Terms of Service of the API. api.contact.name String Name of person or organization to contact. api.contact.email String An email to be used for API-related correspondence. api.contact.url String A URL to a website for more contact information. api.license.name String The license name used for the API. api.license.url String A URL to the license used for the API. 100.4. Adding Security Definitions in API doc The Rest DSL now supports declaring OpenApi securityDefinitions in the generated API document. For example as shown below: rest("/user").tag("dude").description("User rest service") // setup security definitions .securityDefinitions() .oauth2("petstore_auth").authorizationUrl("http://petstore.swagger.io/oauth/dialog").end() .apiKey("api_key").withHeader("myHeader").end() .end() .consumes("application/json").produces("application/json") Here we have set up two security definitions: OAuth2 - with implicit authorization with the provided url Api Key - using an api key that comes from the HTTP header named myHeader Then you need to specify on the rest operations which security to use by referring to their key (petstore_auth or api_key). .get("/{id}/{date}").description("Find user by id and date").outType(User.class) .security("api_key") ... .put().description("Updates or create a user").type(User.class) .security("petstore_auth", "write:pets,read:pets") Here the get operation is using the Api Key security and the put operation is using OAuth security with permitted scopes of read and write pets. 100.5. JSon or Yaml The camel-openapi-java module supports both JSon and Yaml out of the box. You can specify in the request url what you want returned by using /openapi.json or /openapi.yaml for either one. If neither is specified, then the HTTP Accept header is used to detect whether json or yaml can be accepted. If both are accepted, or neither is, then json is returned as the default format. 100.6. useXForwardHeaders and API URL resolution The OpenApi specification allows you to specify the host, port & path that is serving the API. In OpenApi V2 this is done via the host field and in OpenAPI V3 it is part of the servers field. By default, the value for these fields is determined by X-Forwarded headers, X-Forwarded-Host & X-Forwarded-Proto . This can be overridden by disabling the lookup of X-Forwarded headers and by specifying your own host, port & scheme on the REST configuration. restConfiguration().component("netty-http") .useXForwardHeaders(false) .apiProperty("schemes", "https") .host("localhost") .port(8080); 100.7. Examples In the Apache Camel distribution we ship the camel-example-openapi-cdi and camel-example-spring-boot-rest-openapi-simple examples, which demonstrate using this OpenApi component. 100.8. Spring Boot Auto-Configuration The component supports 1 option, which is listed below. Name Description Default Type camel.openapi.enabled Enables Camel Rest DSL to automatically register its OpenAPI (eg swagger doc) in Spring Boot which allows tooling such as SpringDoc to integrate with Camel. true Boolean
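As a quick way to exercise the JSon or Yaml selection described in section 100.5, you can fetch the generated specification over HTTP. The commands below are a sketch only: they assume the apiContextPath of /api-doc and port 8080 from the earlier route example, a locally running application, and that your REST component exposes the api-doc endpoint at that path (the exact URL can differ by setup):
curl http://localhost:8080/api-doc/openapi.json
curl -H "Accept: application/yaml" http://localhost:8080/api-doc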
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-openapi-java-starter</artifactId> </dependency>", "public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // configure we want to use servlet as the component for the rest DSL // and we enable json binding mode restConfiguration().component(\"netty-http\").bindingMode(RestBindingMode.json) // and output using pretty print .dataFormatProperty(\"prettyPrint\", \"true\") // setup context path and port number that netty will use .contextPath(\"/\").port(8080) // add OpenApi api-doc out of the box .apiContextPath(\"/api-doc\") .apiProperty(\"api.title\", \"User API\").apiProperty(\"api.version\", \"1.2.3\") // and enable CORS .apiProperty(\"cors\", \"true\"); // this user REST service is json only rest(\"/user\").description(\"User rest service\") .consumes(\"application/json\").produces(\"application/json\") .get(\"/{id}\").description(\"Find user by id\").outType(User.class) .param().name(\"id\").type(path).description(\"The id of the user to get\").dataType(\"int\").endParam() .to(\"bean:userService?method=getUser(USD{header.id})\") .put().description(\"Updates or create a user\").type(User.class) .param().name(\"body\").type(body).description(\"The user to update or create\").endParam() .to(\"bean:userService?method=updateUser\") .get(\"/findAll\").description(\"Find all users\").outType(User[].class) .to(\"bean:userService?method=listUsers\"); } }", "rest(\"/user\").tag(\"dude\").description(\"User rest service\") // setup security definitions .securityDefinitions() .oauth2(\"petstore_auth\").authorizationUrl(\"http://petstore.swagger.io/oauth/dialog\").end() .apiKey(\"api_key\").withHeader(\"myHeader\").end() .end() .consumes(\"application/json\").produces(\"application/json\")", ".get(\"/{id}/{date}\").description(\"Find user by id and date\").outType(User.class) .security(\"api_key\") .put().description(\"Updates or create a user\").type(User.class) .security(\"petstore_auth\", \"write:pets,read:pets\")", "restConfiguration().component(\"netty-http\") .useXForwardHeaders(false) .apiProperty(\"schemes\", \"https\"); .host(\"localhost\") .port(8080);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-openapi-java-starter
15.4. Configuration Examples
15.4. Configuration Examples 15.4.1. Labeling Gluster Bricks A Gluster brick is an export directory on a server in the trusted storage pool. If the brick is not labeled with the correct SELinux context, glusterd_brick_t , SELinux denies certain file access operations and generates various AVC messages. The following procedure shows how to label Gluster bricks with the correct SELinux context. The procedure assumes that you previously created and formatted a logical volume, for example /dev/vg-group/gluster , to be used as the Gluster brick. For detailed information about Gluster bricks, see the Red Hat Gluster Storage Volumes chapter in the Administration Guide for Red Hat Gluster Storage. Procedure 15.1. How to Label a Gluster Brick Create a directory to mount the previously formatted logical volume. For example: Mount the logical volume, in this case /dev/vg-group/gluster , to the /mnt/brick1/ directory created in the previous step. Note that the mount command mounts devices only temporarily. To mount the device permanently, add an entry similar to the following one to the /etc/fstab file: For more information, see the fstab (5) manual page. Check the SELinux context of /mnt/brick1/ : The directory is labeled with the unlabeled_t SELinux type. Change the SELinux type of /mnt/brick1/ to the glusterd_brick_t SELinux type: Use the restorecon utility to apply the changes: Finally, verify that the context has been successfully changed:
[ "~]# mkdir /mnt/brick1", "~]# mount /dev/vg-group/gluster /mnt/brick1/", "/dev/vg-group/gluster /mnt/brick1 xfs rw,inode64,noatime,nouuid 1 2", "~]USD ls -lZd /mnt/brick1/ drwxr-xr-x. root root system_u:object_r:unlabeled_t:s0 /mnt/brick1/", "~]# semanage fcontext -a -t glusterd_brick_t \"/mnt/brick1(/.*)?\"", "~]# restorecon -Rv /mnt/brick1", "~]USD ls -lZd /mnt/brick1 drwxr-xr-x. root root system_u:object_r:glusterd_brick_t:s0 /mnt/brick1/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-glusterfs-configuration_examples
4.2. Removing Servers from the Trusted Storage Pool
4.2. Removing Servers from the Trusted Storage Pool Warning Before detaching a peer from the trusted storage pool, make sure that the clients are not using the node. If backup servers were not set at mount time using the backup-volfile-servers option, remount the volume on the client using the IP address or FQDN of another server in the trusted storage pool to avoid inconsistencies. Run gluster peer detach server to remove a server from the storage pool. Removing One Server from the Trusted Storage Pool Remove one server from the Trusted Storage Pool, and check the peer status of the storage pool. Prerequisites The glusterd service must be running on the server targeted for removal from the storage pool. See Chapter 22, Starting and Stopping the glusterd service for service start and stop commands. The host names of the target servers must be resolvable by DNS. Run gluster peer detach [server] to remove the server from the trusted storage pool. Verify the peer status from all servers using the following command:
[ "gluster peer detach (server) All clients mounted through the peer which is getting detached needs to be remounted, using one of the other active peers in the trusted storage pool, this ensures that the client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y peer detach: success", "gluster peer status Number of Peers: 2 Hostname: server2 Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 State: Peer in Cluster (Connected) Hostname: server3 Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/removing_servers_from_the_trusted_storage_pool
A.2.6. ElGamal Encryption
A.2.6. ElGamal Encryption In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie-Hellman key agreement. It was described by Taher Elgamal in 1985. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. [22] [22] "ElGamal encryption" Wikipedia. 24 February 2010 http://en.wikipedia.org/wiki/ElGamal_encryption
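The scheme itself can be summarized with a standard textbook sketch (this is the generic construction, not the particular parameter or padding choices used by GNU Privacy Guard or PGP). Working in a cyclic group G of order q with generator g:
\begin{aligned} &\text{Key generation: choose private } x \in \{1,\dots,q-1\},\quad \text{publish } h = g^{x} \\ &\text{Encryption of } m \in G:\ \text{choose random } y,\quad c_{1} = g^{y},\quad c_{2} = m \cdot h^{y} \\ &\text{Decryption: } m = c_{2} \cdot \left(c_{1}^{\,x}\right)^{-1} \end{aligned}
Decryption works because c_{2} \cdot (c_{1}^{x})^{-1} = m \cdot g^{xy} \cdot g^{-xy} = m; the security rests on the difficulty of the Diffie-Hellman problem in the group G.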
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-elgamal-encryption
4.4. Variable Tracking at Assignments
4.4. Variable Tracking at Assignments Variable Tracking at Assignments (VTA) is a new infrastructure included in GCC used to improve variable tracking during optimizations. This allows GCC to produce more precise, meaningful, and useful debugging information for GDB, SystemTap, and other debugging tools. When GCC compiles code with optimizations enabled, variables are renamed, moved around, or even removed altogether. As such, optimized compiling can cause a debugger to report that some variables have been <optimized out>. With VTA enabled, optimized code is internally annotated to ensure that optimization passes transparently keep track of each variable's value, regardless of whether the variable is moved or removed. The effect of this is that more parameter and variable values are available, even for the optimized ( gcc -O2 -g built) code. It also displays the <optimized out> message less often. VTA's benefits are more pronounced when debugging applications with inlined functions. Without VTA, optimization could completely remove some arguments of an inlined function, preventing the debugger from inspecting its value. With VTA, optimization will still happen, and appropriate debugging information will be generated for any missing arguments. VTA is enabled by default when compiling code with optimizations and debugging information enabled (that is, gcc -O -g or, more commonly, gcc -O2 -g ). To disable VTA during such builds, add the -fno-var-tracking-assignments option. In addition, the VTA infrastructure includes the new gcc option -fcompare-debug . This option tests code compiled by GCC with debug information and without debug information: the test passes if the two binaries are identical. This test ensures that executable code is not affected by any debugging options, which further ensures that there are no hidden bugs in the debug code. Note that -fcompare-debug adds significant cost in compilation time. See man gcc for details about this option. For more information about the infrastructure and development of VTA, see A Plan to Fix Local Variable Debug Information in GCC , available at the following link: http://gcc.gnu.org/wiki/Var_Tracking_Assignments A slide deck version of this whitepaper is also available at http://people.redhat.com/aoliva/papers/vta/slides.pdf .
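As a sketch of the command lines involved (the source file name example.c is a placeholder, not taken from this guide):
gcc -O2 -g example.c -o example
gcc -O2 -g -fno-var-tracking-assignments example.c -o example
gcc -O2 -g -fcompare-debug example.c -o example
The first command builds with VTA enabled by default, the second disables VTA for the same build, and the third additionally verifies that the generated executable code is not affected by the debug options.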
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/ch-debug-vta
function::ntohs
function::ntohs Name function::ntohs - Convert 16-bit short from network to host order Synopsis Arguments x Value to convert
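As an illustrative one-liner that is not part of the reference entry, the function can be exercised from the command line; on a little-endian host this prints 80, because the 16-bit value 0x5000 byte-swapped is 0x0050:
stap -e 'probe begin { printf("%d\n", ntohs(0x5000)); exit() }'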
[ "ntohs:long(x:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ntohs
Chapter 8. Configuring maximum transmission unit (MTU) settings
Chapter 8. Configuring maximum transmission unit (MTU) settings 8.1. MTU overview OpenStack Networking can calculate the largest possible maximum transmission unit (MTU) size that you can apply safely to instances. The MTU value specifies the maximum amount of data that a single network packet can transfer; this number is variable depending on the most appropriate size for the application. For example, NFS shares might require a different MTU size to that of a VoIP application. Note You can use the openstack network show <network_name> command to view the largest possible MTU values that OpenStack Networking calculates. net-mtu is a neutron API extension that is not present in some implementations. The MTU value that you require can be advertised to DHCPv4 clients for automatic configuration, if supported by the instance, as well as to IPv6 clients through Router Advertisement (RA) packets. To send Router Advertisements, the network must be attached to a router. You must configure MTU settings consistently from end-to-end. This means that the MTU setting must be the same at every point the packet passes through, including the VM, the virtual network infrastructure, the physical network, and the destination server. For example, an MTU value must be adjusted at various points in the path for traffic between an instance and a physical server. You must change the MTU value for every interface that handles network traffic to accommodate packets of a particular MTU size. This is necessary if traffic travels from the instance 192.168.200.15 through to the physical server 10.20.15.25. Inconsistent MTU values can result in several network issues, the most common being random packet loss that results in connection drops and slow network performance. Such issues are problematic to troubleshoot because you must identify and examine every possible network point to ensure it has the correct MTU value. 8.2. Configuring MTU Settings in Director This example demonstrates how to set the MTU using the NIC config templates. You must set the MTU on the bridge, bond (if applicable), interface(s), and VLAN(s): 8.3. Reviewing the resulting MTU calculation You can view the calculated MTU value, which is the largest possible MTU value that instances can use. Use this calculated MTU value to configure all interfaces involved in the path of network traffic.
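As a general end-to-end check, which is not specific to director, you can confirm that a path really carries the configured MTU by sending non-fragmentable ICMP packets of the maximum payload size. With a 9000 byte MTU the payload is 8972 bytes (9000 minus 20 bytes of IP header and 8 bytes of ICMP header); the address below reuses the example physical server from this overview:
ping -M do -s 8972 10.20.15.25
If the command reports that fragmentation is needed, or the packets are silently dropped, at least one interface in the path is still using a smaller MTU.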
[ "- type: ovs_bridge name: br-isolated use_dhcp: false mtu: 9000 # <--- Set MTU members: - type: ovs_bond name: bond1 mtu: 9000 # <--- Set MTU ovs_options: {get_param: BondInterfaceOvsOptions} members: - type: interface name: ens15f0 mtu: 9000 # <--- Set MTU primary: true - type: interface name: enp131s0f0 mtu: 9000 # <--- Set MTU - type: vlan device: bond1 vlan_id: {get_param: InternalApiNetworkVlanID} mtu: 9000 # <--- Set MTU addresses: - ip_netmask: {get_param: InternalApiIpSubnet} - type: vlan device: bond1 mtu: 9000 # <--- Set MTU vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet}", "openstack network show <network>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/config-max-mtu_rhosp-network
Chapter 8. Configure Network Teaming
Chapter 8. Configure Network Teaming 8.1. Understanding Network Teaming The combining or aggregating of network links to provide a logical link with higher throughput, or to provide redundancy, is known by many names, for example channel bonding , Ethernet bonding , port trunking , channel teaming , NIC teaming , or link aggregation . This concept as originally implemented in the Linux kernel is widely referred to as bonding . The term Network Teaming has been chosen to refer to this new implementation of the concept. The existing bonding driver is unaffected; Network Teaming is offered as an alternative and does not replace bonding in Red Hat Enterprise Linux 7. Note The Mode 4 Link Aggregation Control Protocol (LACP) teaming mode requires configuring the switch to aggregate the links. For more details, see https://www.kernel.org/doc/Documentation/networking/bonding.txt Network Teaming, or Team, is designed to implement the concept in a different way by providing a small kernel driver to implement the fast handling of packet flows, and various user-space applications to do everything else in user space. The driver has an Application Programming Interface ( API ), referred to as "Team Netlink API", which implements Netlink communications. User-space applications can use this API to communicate with the driver. A library, referred to as "lib", has been provided to do user space wrapping of Team Netlink communications and RT Netlink messages. An application daemon, teamd , which uses the libteam library, is also available. One instance of teamd can control one instance of the Team driver. The daemon implements the load-balancing and active-backup logic, such as round-robin, by using additional code referred to as "runners". By separating the code in this way, the Network Teaming implementation presents an easily extensible and scalable solution for load-balancing and redundancy requirements. For example, custom runners can be relatively easily written to implement new logic through teamd , and even teamd is optional; users can write their own application to use libteam . The teamdctl utility is available to control a running instance of teamd using D-Bus. teamdctl provides a D-Bus wrapper around the teamd D-Bus API. By default, teamd listens and communicates using Unix Domain Sockets but still monitors D-Bus. This is to ensure that teamd can be used in environments where D-Bus is not present or not yet loaded. For example, when booting over teamd links, D-Bus would not yet be loaded. The teamdctl utility can be used during run time to read the configuration, the state of link-watchers, check and change the state of ports, add and remove ports, and to change ports between active and backup states. Team Netlink API communicates with user-space applications using Netlink messages. The libteam user-space library does not directly interact with the API, but uses libnl or teamnl to interact with the driver API. To sum up, the instances of the Team driver, running in the kernel, do not get configured or controlled directly. All configuration is done with the aid of user space applications, such as the teamd application. The application then directs the kernel driver part accordingly. Note In the context of network teaming, the term port is also known as slave . Port is preferred when using teamd directly, while slave is used when using NetworkManager to refer to interfaces which create a team.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_teaming
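Because teamd exposes its runtime state through teamdctl, the state of a team device can also be inspected from a script. The sketch below is illustrative only: it assumes a team device named team0 already exists and is managed by a running teamd instance, that teamdctl is installed, and that the state dump subcommand (which prints the daemon state as JSON) and the setup/ports keys in that JSON behave as in current teamd releases.

import json
import subprocess

TEAM_DEVICE = "team0"  # assumption: an existing team device managed by teamd

def team_state(device):
    """Return the teamd state for a device as a Python dict.

    Relies on `teamdctl <device> state dump`, which asks the running teamd
    instance to print its state in JSON form.
    """
    result = subprocess.run(
        ["teamdctl", device, "state", "dump"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    state = team_state(TEAM_DEVICE)
    runner = state.get("setup", {}).get("runner_name", "unknown")
    ports = sorted(state.get("ports", {}))
    print(f"{TEAM_DEVICE}: runner={runner}, ports={', '.join(ports) or 'none'}")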
2.4. Security
2.4. Security KVM virtual machines use the following features to improve their security: SELinux Security-Enhanced Linux, or SELinux, provides Mandatory Access Control (MAC) for all Linux systems, and thus also benefits Linux guests. Under the control of SELinux, all processes and files are given a type , and their access on the system is limited by fine-grained controls of various types. SELinux limits the abilities of an attacker and works to prevent many common security exploits such as buffer overflow attacks and privilege escalation. sVirt sVirt is a technology included in Red Hat Enterprise Linux 7 that integrates SELinux and virtualization. It applies Mandatory Access Control (MAC) to improve security when using virtual machines, and hardens the system against hypervisor bugs that might be used to attack the host or another virtual machine. Note For more information on security in virtualization, see the Red Hat Enterprise Linux 7 Virtualization Security Guide .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/sec-virtualization_getting_started-advantages-security
Chapter 2. Driver Toolkit
Chapter 2. Driver Toolkit Learn about the Driver Toolkit and how you can use it as a base image for driver containers for enabling special software and hardware devices on OpenShift Container Platform deployments. 2.1. About the Driver Toolkit Background The Driver Toolkit is a container image in the OpenShift Container Platform payload used as a base image on which you can build driver containers. The Driver Toolkit image includes the kernel packages commonly required as dependencies to build or install kernel modules, as well as a few tools needed in driver containers. The version of these packages will match the kernel version running on the Red Hat Enterprise Linux CoreOS (RHCOS) nodes in the corresponding OpenShift Container Platform release. Driver containers are container images used for building and deploying out-of-tree kernel modules and drivers on container operating systems like RHCOS. Kernel modules and drivers are software libraries running with a high level of privilege in the operating system kernel. They extend the kernel functionalities or provide the hardware-specific code required to control new devices. Examples include hardware devices like Field Programmable Gate Arrays (FPGA) or GPUs, and software-defined storage (SDS) solutions, such as Lustre parallel file systems, which require kernel modules on client machines. Driver containers are the first layer of the software stack used to enable these technologies on Kubernetes. The list of kernel packages in the Driver Toolkit includes the following and their dependencies: kernel-core kernel-devel kernel-headers kernel-modules kernel-modules-extra In addition, the Driver Toolkit also includes the corresponding real-time kernel packages: kernel-rt-core kernel-rt-devel kernel-rt-modules kernel-rt-modules-extra The Driver Toolkit also has several tools that are commonly needed to build and install kernel modules, including: elfutils-libelf-devel kmod binutils kabi-dw kernel-abi-whitelists dependencies for the above Purpose Prior to the Driver Toolkit's existence, users would install kernel packages in a pod or build config on OpenShift Container Platform using entitled builds or by installing from the kernel RPMs in the host's machine-os-content . The Driver Toolkit simplifies the process by removing the entitlement step, and avoids the privileged operation of accessing the machine-os-content in a pod. The Driver Toolkit can also be used by partners who have access to pre-released OpenShift Container Platform versions to prebuild driver-containers for their hardware devices for future OpenShift Container Platform releases. The Driver Toolkit is also used by the Kernel Module Management (KMM), which is currently available as a community Operator on OperatorHub. KMM supports out-of-tree and third-party kernel drivers and the support software for the underlying operating system. Users can create modules for KMM to build and deploy a driver container, as well as support software like a device plugin, or metrics. Modules can include a build config to build a driver container based on the Driver Toolkit, or KMM can deploy a prebuilt driver container. 2.2. Pulling the Driver Toolkit container image The driver-toolkit image is available from the Container images section of the Red Hat Ecosystem Catalog and in the OpenShift Container Platform release payload. The image corresponding to the most recent minor release of OpenShift Container Platform will be tagged with the version number in the catalog. 
The image URL for a specific release can be found using the oc adm CLI command. 2.2.1. Pulling the Driver Toolkit container image from registry.redhat.io Instructions for pulling the driver-toolkit image from registry.redhat.io with podman or in OpenShift Container Platform can be found on the Red Hat Ecosystem Catalog . The driver-toolkit image for the latest minor release is tagged with the minor release version on registry.redhat.io , for example: registry.redhat.io/openshift4/driver-toolkit-rhel8:v4.15 . 2.2.2. Finding the Driver Toolkit image URL in the payload Prerequisites You obtained the image pull secret from Red Hat OpenShift Cluster Manager . You installed the OpenShift CLI ( oc ). Procedure Use the oc adm command to extract the image URL of the driver-toolkit corresponding to a certain release: For an x86 image, the command is as follows: USD oc adm release info quay.io/openshift-release-dev/ocp-release:4.15.z-x86_64 --image-for=driver-toolkit For an ARM image, the command is as follows: USD oc adm release info quay.io/openshift-release-dev/ocp-release:4.15.z-aarch64 --image-for=driver-toolkit Example output quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b53883ca2bac5925857148c4a1abc300ced96c222498e3bc134fe7ce3a1dd404 Obtain this image using a valid pull secret, such as the pull secret required to install OpenShift Container Platform: USD podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA> 2.3. Using the Driver Toolkit As an example, the Driver Toolkit can be used as the base image for building a very simple kernel module called simple-kmod . Note The Driver Toolkit includes the necessary dependencies, openssl , mokutil , and keyutils , needed to sign a kernel module. However, in this example, the simple-kmod kernel module is not signed and therefore cannot be loaded on systems with Secure Boot enabled. 2.3.1. Build and run the simple-kmod driver container on a cluster Prerequisites You have a running OpenShift Container Platform cluster. You set the Image Registry Operator state to Managed for your cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. Procedure Create a namespace. For example: USD oc new-project simple-kmod-demo The following YAML defines an ImageStream for storing the simple-kmod driver container image, and a BuildConfig for building the container. Save this YAML as 0000-buildconfig.yaml.template . 
apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: "" runPolicy: "Serial" triggers: - type: "ConfigChange" - type: "ImageChange" source: dockerfile: | ARG DTK FROM USD{DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=USD{KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/USD{KVER}/simple-kmod.ko /lib/modules/USD{KVER}/ COPY --from=builder /lib/modules/USD{KVER}/simple-procfs-kmod.ko /lib/modules/USD{KVER}/ RUN depmod USD{KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # USD oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo Substitute the correct driver toolkit image for the OpenShift Container Platform version you are running in place of "DRIVER_TOOLKIT_IMAGE" with the following commands. USD OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version}) USD DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit) USD sed "s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml Create the image stream and build config with USD oc create -f 0000-buildconfig.yaml After the builder pod completes successfully, deploy the driver container image as a DaemonSet . The driver container must run with the privileged security context in order to load the kernel modules on the host. The following YAML file contains the RBAC rules and the DaemonSet for running the driver container. Save this YAML as 1000-drivercontainer.yaml . 
apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: ["modprobe", "-v", "-a" , "simple-kmod", "simple-procfs-kmod"] preStop: exec: command: ["modprobe", "-r", "-a" , "simple-kmod", "simple-procfs-kmod"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: "" Create the RBAC rules and daemon set: USD oc create -f 1000-drivercontainer.yaml After the pods are running on the worker nodes, verify that the simple_kmod kernel module is loaded successfully on the host machines with lsmod . Verify that the pods are running: USD oc get pod -n simple-kmod-demo Example output NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s Execute the lsmod command in the driver container pod: USD oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 2.4. Additional resources For more information about configuring registry storage for your cluster, see Image Registry Operator in OpenShift Container Platform .
[ "oc adm release info quay.io/openshift-release-dev/ocp-release:4.15.z-x86_64 --image-for=driver-toolkit", "oc adm release info quay.io/openshift-release-dev/ocp-release:4.15.z-aarch64 --image-for=driver-toolkit", "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b53883ca2bac5925857148c4a1abc300ced96c222498e3bc134fe7ce3a1dd404", "podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>", "oc new-project simple-kmod-demo", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: dockerfile: | ARG DTK FROM USD{DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=USD{KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/USD{KVER}/simple-kmod.ko /lib/modules/USD{KVER}/ COPY --from=builder /lib/modules/USD{KVER}/simple-procfs-kmod.ko /lib/modules/USD{KVER}/ RUN depmod USD{KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # USD oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo", "OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})", "DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)", "sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml", "oc create -f 0000-buildconfig.yaml", "apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: [\"modprobe\", \"-v\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] preStop: exec: 
command: [\"modprobe\", \"-r\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc create -f 1000-drivercontainer.yaml", "oc get pod -n simple-kmod-demo", "NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s", "oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple", "simple_procfs_kmod 16384 0 simple_kmod 16384 0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/specialized_hardware_and_driver_enablement/driver-toolkit
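The shell steps above (looking up the cluster version, resolving the Driver Toolkit image, and substituting it into the build config template) can also be driven from a script. The following Python sketch is only an illustrative alternative to the documented oc and sed commands, not part of the product; it assumes oc is installed and logged in with sufficient privileges and that the template file uses the same DRIVER_TOOLKIT_IMAGE placeholder that the sed step substitutes.

import subprocess
from pathlib import Path

def oc(*args):
    """Run an oc command and return its stdout as a stripped string."""
    return subprocess.run(["oc", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# Same lookups as the documented shell steps.
ocp_version = oc("get", "clusterversion/version",
                 "-ojsonpath={.status.desired.version}")
dtk_image = oc("adm", "release", "info", ocp_version,
               "--image-for=driver-toolkit")

# Substitute the placeholder in the build config template, like the sed step.
template = Path("0000-buildconfig.yaml.template").read_text()
Path("0000-buildconfig.yaml").write_text(
    template.replace("DRIVER_TOOLKIT_IMAGE", dtk_image))

print(f"Driver Toolkit image for {ocp_version}: {dtk_image}")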
Part III. Administering the Environment
Part III. Administering the Environment
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/part-administering_the_environment
8.177. resource-agents
8.177. resource-agents 8.177.1. RHBA-2013:1541 - resource-agents bug fix and enhancement update Updated resource-agents packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain a set of scripts to interface with several services to operate in a High Availability environment for both Pacemaker and rgmanager service managers. Bug Fixes BZ# 784933 Previously, when the exportfs utility was used to relocate an exported share, the size of the /var/lib/nfs/rmtab file was doubled. This bug has been fixed and the /var/lib/nfs/rmtab file size is no longer doubled in the aforementioned scenario. BZ# 851188 Prior to this update, the fs-lib.sh agent did not recognize the trailing slash ("/") character when searching for devices in the /proc/mounts file. Consequently, NFSv4 mounts were not monitored. With this update, fs-lib.sh has been modified to track the slash characters. As a result, NFSv4 mounts are managed and monitored as expected. BZ# 853220 Due to a bug in the oracledb.sh script, when there were multiple ORACLE instances running in the same home directory, the script produced unnecessary delays. The bug has been fixed, and oracledb.sh now works without delays when multiple ORACLE instances are present in the home directory. BZ# 871659 To shut down cleanly, the postgres agent needs to receive the SIGINT signal. Previously, this signal was not sent and postgres performed a hard shutdown instead of a graceful exit. This behavior has been modified, and SIGINT is now sent to postgres on shutdown to attempt a graceful exit, and after a period of time, the SIGQUIT signal is sent if the agent is still active. As a result, postgres performs graceful shutdown during the stop action. BZ# 884326 Previously, if a device failed in a non-redundant (that is, not mirror or RAID) logical volume (LV) that was controlled by the HA-LVM utility, the entire LV could be automatically deleted from the volume group. This bug has been fixed and now, if a non-redundant logical volume suffers a device failure, HA-LVM fails to start the service rather than forcing the removal of failed LVs from the volume group. BZ# 895075 Prior to this update, the ip.sh agent did not configure IPv6 addresses that contained upper-case letters. Consequently, a resource with such an address failed. With this update, ip.sh has been modified to be case insensitive for IPv6 addresses. As a result, IPv6 addresses with upper case letters are now configured properly by ip.sh . BZ# 908457 Previously, agents based on the fs-lib.sh script, such as ip.sh , ignored the self_fence option when the force_unmount option was enabled. Consequently, the configured self_fence option was not enabled. This bug has been fixed and self_fence is accepted regardless of force_unmount . BZ# 948730 With this update, the priority level of log messages produced by the mount utility has been changed from error to the more appropriate debug level. BZ# 959520 Due to an incorrect SELinux context of the /var/lib/nfs/statd/sm/ directory, the rpc.statd daemon was unable to start. This problem only appeared if the cluster included NFS mounts. This update modifies how files are copied to the /var/lib/nfs/statd/sm/ directory, so that the SELinux context is inherited from the target directory. As a result, rpc.statd can now be started without complications. 
BZ# 974941 When autofs maps are used for network storage, agents for cluster file systems ("fs") such as netfs.sh , fs.sh , or clusterfs.sh require the use_findmnt option to be set to 'false' . Previously, when use_findmnt was set incorrectly and autofs maps became unavailable, the rgmanager services with "fs" resources consequently became unresponsive until the network was restored. The underlying source code has been modified and rgmanager services no longer hang in the aforementioned scenario. BZ# 976443 Prior to this update, the lvm.sh agent was unable to accurately detect a tag represented by a cluster node. Consequently, the active logical volume on a cluster node failed when another node rejoined the cluster. With this update, lvm.sh properly detects whether tags represent a cluster node. As a result, when nodes rejoin the cluster, the volume group no longer fails on other nodes. BZ# 981717 When multiple instances of the tomcat-6 service were used as cluster resources, the TOMCAT_USER setting in custom /conf/tomcat6.conf configuration files was ignored. Consequently, each instance always started with TOMCAT_USER set to root . This bug has been fixed, and TOMCAT_USER is now applied properly in the described case. BZ# 983273 Under certain circumstances, when the tomcat.conf configuration file for a tomcat-6 resource was stored on a shared storage resource that became unavailable, the subsequent stop operation on tomcat-6 failed. This bug has been fixed, and tomcat-6 can now be successfully stopped when tomcat.conf is not readable. BZ# 998012 File system based resources, such as fs.sh or clusterfs.sh , required usage of the /tmp directory during status monitoring. If this directory became full after mounting the file system, the monitor action failed even though the file system was correctly mounted. The /tmp directory is no longer used during file system monitors, thus fixing this bug. BZ# 1009772 If rgmanager was started simultaneously on two nodes, these nodes could both execute the lvchange --deltag command at the same time and corrupt the LVM headers. With this update, LVM headers do not become corrupt even when rgmanager starts on two nodes at the same time. BZ# 1014298 Previously, when the NFS server was unresponsive, the fuser utility could block the unmounting of an NFS file system. With this update, fuser has been replaced with custom logic that searches for processes with open file descriptors to an NFS mount, thus fixing this bug. Enhancements BZ# 670022 With this update, support for Oracle Database 11g has been added to the oracledb , orainstance , and oralistener resource agents. BZ# 711586 This update adds the new update-source option to the named.sh agent. With this option enabled, it is possible to set the notify-source , transfer-source , and query-source to the service cluster IP. BZ# 909954 With this update, the lock file for the /usr/share/cluster/orainstance.sh script has been moved from the /tmp/ directory to /var/tmp/ . BZ# 917807 With this update, the TNS_ADMIN variable has been added to the oracledb.sh cluster script. This variable is a standard Oracle feature to set a specific path to the listener configuration file. BZ# 919231 This update improves performance of start, stop, and monitor actions for file system resources. The file system resources now use the findmnt utility to speed up migrations on clusters with a large number of file system resources. 
BZ# 989284 This update adds official support for the ocf heartbeat resource agents required for use with the Pacemaker cluster manager. Only officially supported agents are present for this initial release. This means heartbeat agents lacking official support are not shipped in this update. Users of resource-agents are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 8.177.2. RHBA-2013:1746 - resource-agents bug fix update Updated resource-agents packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain a set of scripts to interface with several services to operate in a High Availability environment for both Pacemaker and rgmanager service managers. Bug Fixes BZ# 1027410 Prior to this update, the netfs agent could hang during a stop operation, even with the self_fence option enabled. With this update, the self fence operation is executed sooner in the process, which ensures that the NFS client detects the server leaving if umount cannot succeed, and self-fencing occurs. BZ# 1027412 Previously, the IPaddr2 agent did not send out unsolicited neighbor advertisements to announce a link-layer address change. Consequently, floating IPv6 addresses, which require this functionality, could not work correctly. To fix this bug, the send_ua internal binary required for the IPaddr2 agent to drive IPv6 addresses has been added. As a result, the floating IPv6 addresses now work correctly, and IPv4 addresses are left unaffected by this change. Users of resource-agents are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/resource-agents
9.2. Analyzing Security Needs
9.2. Analyzing Security Needs Analyze the environment and users to identify specific security needs. The site survey in Chapter 3, Designing the Directory Schema clarifies some basic decisions about who can read and write the individual pieces of data in the directory. This information forms the basis of the security design. The way security is implemented also depends on how the directory service is used to support the business. A directory that serves an intranet does not require the same security measures as a directory that supports an extranet or e-commerce applications that are open to the Internet. If the directory only serves an intranet, consider what level of access is needed for information: How to provide users and applications with access to the information they need to perform their jobs. How to protect sensitive data regarding employees or the business from general access. If the directory serves an extranet or supports e-commerce applications over the Internet, there are additional points to consider: How to offer customers a guarantee of privacy. How to guarantee information integrity. The following sections provide information about analyzing security needs. 9.2.1. Determining Access Rights The data analysis identifies what information users, groups, partners, customers, and applications need to access the directory service. Access rights can be granted in one of two ways: Grant all categories of users as many rights as possible while still protecting sensitive data. An open method requires accurately determining what data are sensitive or critical to the business. Grant each category of users the minimum access they require to do their jobs. A restrictive method requires minutely understanding the information needs of each category of user inside, and possibly outside, of the organization. Irrespective of the method used to determine access rights, create a simple table that lists the categories of users in the organization and the access rights granted to each. Consider creating a table that lists the sensitive data held in the directory and, for each piece of data, the steps taken to protect it. For information about checking the identity of users, see Section 9.4, "Selecting Appropriate Authentication Methods" . For information about restricting access to directory information, see Section 9.7, "Designing Access Control" 9.2.2. Ensuring Data Privacy and Integrity When using the directory to support exchanges with business partners over an extranet or to support e-commerce applications with customers on the Internet, ensure the privacy and the integrity of the data exchanged. There are several ways to do this: By encrypting data transfers. By using certificates to sign data transfers. For information about encryption methods provided in Directory Server, see Section 9.6.2.11, "Password Storage Schemes" For information about signing data, see Section 9.9, "Securing Server Connections" . For information about encrypting sensitive information as it is stored in the Directory Server database, see Section 9.8, "Encrypting the Database" 9.2.3. Conducting Regular Audits As an extra security measure, conduct regular audits to verify the efficiency of the overall security policy by examining the log files and the information recorded by the SNMP agents. For more information about SNMP, see the Red Hat Directory Server Administration Guide . For more information about log files and SNMP, see the Red Hat Directory Server Administration Guide . 9.2.4. 
Example Security Needs Analysis The examples provided in this section illustrate how the imaginary ISP company "example.com" analyzes its security needs. example.com's business is to offer web hosting and Internet access. Part of example.com's activity is to host the directories of client companies. It also provides Internet access to a number of individual subscribers. Therefore, example.com has three main categories of information in its directory: example.com internal information Information belonging to corporate customers Information pertaining to individual subscribers example.com needs the following access controls: Provide access to the directory administrators of hosted companies (example_a and example_b) to their own directory information. Implement access control policies for hosted companies' directory information. Implement a standard access control policy for all individual clients who use example.com for Internet access from their homes. Deny access to example.com's corporate directory to all outsiders. Grant read access to example.com's directory of subscribers to the world.
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_a_secure_directory-analyzing_your_security_needs
Chapter 1. Console APIs
Chapter 1. Console APIs 1.1. ConsoleCLIDownload [console.openshift.io/v1] Description ConsoleCLIDownload is an extension for configuring openshift web console command line interface (CLI) downloads. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.2. ConsoleExternalLogLink [console.openshift.io/v1] Description ConsoleExternalLogLink is an extension for customizing OpenShift web console log links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.3. ConsoleLink [console.openshift.io/v1] Description ConsoleLink is an extension for customizing OpenShift web console links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.4. ConsoleNotification [console.openshift.io/v1] Description ConsoleNotification is the extension for configuring openshift web console notifications. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.5. ConsolePlugin [console.openshift.io/v1] Description ConsolePlugin is an extension for customizing OpenShift web console by dynamically loading code from another service running on the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. ConsoleQuickStart [console.openshift.io/v1] Description ConsoleQuickStart is an extension for guiding user through various workflows in the OpenShift web console. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.7. ConsoleYAMLSample [console.openshift.io/v1] Description ConsoleYAMLSample is an extension for customizing OpenShift web console YAML samples. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/console_apis/console-apis
Chapter 7. Supported integration products
Chapter 7. Supported integration products AMQ Streams 1.6 supports integration with the following Red Hat products. Red Hat Single Sign-On 7.4 and later for OAuth 2.0 authentication and OAuth 2.0 authorization Red Hat 3scale API Management 2.6 and later to secure the Kafka Bridge and provide additional API management features Red Hat Debezium 1.0 and later for monitoring databases and creating event streams Service Registry 2020-Q4 and later as a centralized store of service schemas for data streaming For information on the functionality these products can introduce to your AMQ Streams deployment, refer to the AMQ Streams 1.6 documentation.
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_openshift/supported-config-str
Chapter 1. Introduction
Chapter 1. Introduction A virtual machine is a software implementation of a computer. The Red Hat Virtualization environment enables you to create virtual desktops and virtual servers. Virtual machines consolidate computing tasks and workloads. In traditional computing environments, workloads usually run on individually administered and upgraded servers. Virtual machines reduce the amount of hardware and administration required to run the same computing tasks and workloads. 1.1. Audience Most virtual machine tasks in Red Hat Virtualization can be performed in both the VM Portal and Administration Portal. However, the user interface differs between each portal, and some administrative tasks require access to the Administration Portal. Tasks that can only be performed in the Administration Portal will be described as such in this book. Which portal you use, and which tasks you can perform in each portal, is determined by your level of permissions. Virtual machine permissions are explained in Section 6.8, "Virtual Machines and Permissions" . The VM Portal's user interface is described in the Introduction to the VM Portal . The Administration Portal's user interface is described in the Administration Guide . The creation and management of virtual machines through the Red Hat Virtualization REST API is documented in the REST API Guide .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/chap-introduction
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 2-82 Mon Jun 4 2018 Marek Suchanek Asynchronous update Revision 2-81 Wed Mar 21 2018 Marek Suchanek Asynchronous update Revision 2-70 Mon Mar 13 2017 Milan Navratil Preparing document for 6.9 GA publication. Revision 2-64 Thu May 10 2016 Milan Navratil Preparing document for 6.8 GA publication. Revision 2-63 Thu Mar 31 2016 Milan Navratil Asynchronous update with multiple minor improvements. Revision 2-52 Wed Mar 25 2015 Jacquelynn East Added ext backup and restore chapters Revision 2-51 Thu Oct 9 2014 Jacquelynn East Version for 6.6 GA release Revision 2-38 Mon Nov 18 2013 Jacquelynn East Build for 6.5 GA Revision 2-35 Thu Sep 05 2013 Jacquelynn East Added a section documenting fsck. Revision 2-11 Mon Feb 18 2013 Jacquelynn East Version for 6.4 GA release. Revision 2-1 Fri Oct 19 2012 Jacquelynn East Branched for 6.4 Beta. Created new edition based on significant structural reordering. Revision 1-45 Mon Jun 18 2012 Jacquelynn East Version for 6.3 release.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/appe-publican-revision_history
2.2. Create Materialized Views
2.2. Create Materialized Views You can create materialized views and their corresponding physical materialized target tables in the Teiid Designer JBDS plug-in. This can be done through setting the materialized and target table manually, or by selecting the desired views, right-clicking, then selecting Modeling - Create Materialized Views . Next, generate the DDL for your physical model materialization target tables. This can be done by selecting the model, right clicking, then choosing Export -> Metadata Modeling -> Data Definition Language (DDL) File . Use this script to create the schema for your materialization target on your source. Determine a load and refresh strategy. With the schema created, the simplest approach is to load the data. You can even load it through Red Hat JBoss Data Virtualization: insert into target_table select * from matview option nocache matview That, however, may be too simplistic because your index creation may perform better if deferred until after the table has been created. Also, full snapshot refreshes are best done to a staging table and then swapping it for the existing physical table to ensure that the refresh does not impact user queries and to ensure that the table is valid prior to use. Metadata-Based Materialization Management When users are designing their views, they can define additional metadata on their views to control the loading and refreshing of the external materialization cache. This option provides a limited but powerful way to manage the materialization views. For this purpose, the SYSADMIN Schema model in your VDB defines three stored procedures (loadMatView, updateMatView, matViewStatus) in its schema. Based on the metadata defined on the view and these SYSADMIN procedures, a simple scheduler automatically starts during the VDB deployment and loads the cache and keeps it fresh. To manage and report the loading and refreshing activity of a materialization view, Red Hat JBoss Data Virtualization expects the user to define a "Status" table with the following schema in any one of the source models. Create this table on the physical database, before you do the import of this physical source. Note MariaDB has a Silent Column Changes feature; according to the MariaDB documentation, the 'long' type will silently change to 'MEDIUMTEXT' , so if you execute the above schema in MariaDB, the 'Cardinality' and 'LoadNumber' columns should be changed to the 'bigint' type. Create Views and corresponding physical materialized target tables in Designer or using DDL in Dynamic VDB. This can be done through setting the materialized and target table manually, or by selecting the desired views, right clicking, then selecting Modeling->"Create Materialized Views" in the Designer. Define the following extension properties on the view. Table 2.1. Extension Properties Property Description Optional Default teiid_rel:ALLOW_MATVIEW_MANAGEMENT Allow Red Hat JBoss Data Virtualization-based management False False teiid_rel:MATVIEW_STATUS_TABLE fully qualified Status Table Name defined above False NA teiid_rel:MATVIEW_BEFORE_LOAD_SCRIPT semi-colon(;) separated DDL/DML commands to run before the actual load of the cache, typically used to truncate staging table True When not defined, no script will be run teiid_rel:MATVIEW_LOAD_SCRIPT semi-colon(;) separated DDL/DML commands to run for loading of the cache True will be determined based on view transformation teiid_rel:MATVIEW_AFTER_LOAD_SCRIPT semi-colon(;) separated DDL/DML commands to run after the actual load of the cache. 
Typically used to rename staging table to actual cache table. Required when MATVIEW_LOAD_SCRIPT is not defined in order to copy data from the teiid_rel:MATVIEW_STAGE_TABLE to the MATVIEW table. True When not defined, no script will be run teiid_rel:MATVIEW_SHARE_SCOPE Allowed values are {NONE, VDB, SCHEMA}, which define if the cached contents are shared among different VDB versions and different VDBs as long as schema names match True None teiid_rel:MATERIALIZED_STAGE_TABLE When MATVIEW_LOAD_SCRIPT property not defined, Red Hat JBoss Data Virtualization loads the cache contents into this table. Required when MATVIEW_LOAD_SCRIPT not defined False NA teiid_rel:ON_VDB_START_SCRIPT DML commands to run at start of vdb True NA teiid_rel:ON_VDB_DROP_SCRIPT DML commands to run at VDB un-deploy; typically used for cleaning the cache/status tables True NA teiid_rel:MATVIEW_ONERROR_ACTION Action to be taken when mat view contents are requested but cache is invalid. Allowed values are (THROW_EXCEPTION = throws an exception, IGNORE = ignores the warning and supplied invalidated data, WAIT = waits until the data is refreshed and valid then provides the updated data) True WAIT teiid_rel:MATVIEW_TTL time to live in milliseconds. Provide property or cache hint on view transformation - property takes precedence. True 2^63 milliseconds - effectively the table will not refresh, but will be loaded a single time initially Once the VDB (with a model with the properties specified above) has been defined and deployed, the following sequence of events will take place. Upon the VDB deployment, teiid_rel:ON_VDB_START_SCRIPT will be run on completion of the deployment. Based on the teiid_rel:MATVIEW_TTL defined, a scheduler entry will be created to run the SYSADMIN.loadMatView procedure, which loads the cache contents. This procedure first inserts/updates an entry in teiid_rel:MATVIEW_STATUS_TABLE, which indicates that the cache is being loaded, then teiid_rel:MATVIEW_BEFORE_LOAD_SCRIPT will be run if defined. Next, teiid_rel:MATVIEW_LOAD_SCRIPT will be run if defined; otherwise, one will be automatically created based on the view's transformation logic. Then, teiid_rel:MATVIEW_AFTER_LOAD_SCRIPT will be run, to close out and create any indexes on the mat view table. The procedure then sets the teiid_rel:MATVIEW_STATUS_TABLE entry to "LOADED" and valid. Based on the teiid_rel:MATVIEW_TTL, the SYSADMIN.matViewStatus procedure is run by the scheduler to queue further cache re-loads. When the VDB is un-deployed (not when the server is restarted), the teiid_rel:ON_VDB_DROP_SCRIPT script will be run. Run the SYSADMIN.updateMatView procedure at any time to partially update the cache contents rather than performing a complete refresh of the contents with the SYSADMIN.loadMatView procedure. When a partial update is run, the cache expiration time is renewed for a new term based on the Cache Hint. Here is a sample dynamic VDB: 2.2.1. Configure External Materialization In Teiid Designer Build a VDB using the Teiid Designer for your use case. Identify all the "Virtual Tables" that you think can use caching. Click on the table, then in the Properties panel, switch the Materialized property to "true". Right click on each materialized table, then choose Modeling - Create Materialized Views . Click on the ... button on the Materialization Model input box. Select a "physical model" that already exists or create a new name for "physical model". Click Finish . This will create the new model (if applicable) and a table with the same schema as your selected virtual table. 
Verify that the "Materialization Table" property is now updated with name of table that has just been created. Navigate to the new materialized table that has been created, and click on "Name In Source" property and change it from "MV1000001" to "mv_{your_table_name}". Typically this could be same name as your virtual table name, (for example, you might name it "mv_UpdateProduct".) Save your model. Note The data source this materialized view physical model represents will be the data source for storing the materialized tables. You can select different "physical models" for different materialized tables, creating multiple places to store your materialized tables. Once you are have finished creating all materialized tables, right click on each model (in most cases this will be a single physical model used for all the materialized views), then use Export - Teiid Designer - Data Definition Language (DDL) File to generate the DDL for the physical model. Select the type of the database and DDL file name and click Finish . A DDL file that contains all of the "create table" commands is generated.. Use your favorite "client" tool for your database and create the database using the DDL file created. Go back to your VDB and configure the data source and translator for the "materialized" physical model to the database you just created. Once finished, deploy the VDB to the Red Hat JBoss Data Virtualization Server and make sure that it is correctly configured and active. Important It is important to ensure that the key/index information is defined as this will be used by the materialization process to enhance the performance of the materialized table.
[ "CREATE TABLE status ( VDBName varchar(50) not null, VDBVersion integer not null, SchemaName varchar(50) not null, Name varchar(256) not null, TargetSchemaName varchar(50), TargetName varchar(256) not null, Valid boolean not null, LoadState varchar(25) not null, Cardinality long, Updated timestamp not null, LoadNumber long not null, PRIMARY KEY (VDBName, VDBVersion, SchemaName, Name) );", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <vdb name=\"sakila\" version=\"1\"> <description>Shows how to call JPA entities</description> <model name=\"pg\"> <source name=\"pg\" translator-name=\"postgresql-override\" connection-jndi-name=\"java:/sakila-ds\"/> </model> <model name=\"sakila\" type=\"VIRTUAL\"> <metadata type=\"DDL\"><![CDATA[ CREATE VIEW actor ( actor_id integer, first_name varchar(45) NOT NULL, last_name varchar(45) NOT NULL, last_update timestamp NOT NULL ) OPTIONS (MATERIALIZED 'TRUE', UPDATABLE 'TRUE', MATERIALIZED_TABLE 'pg.public.mat_actor', \"teiid_rel:MATERIALIZED_STAGE_TABLE\" 'pg.public.mat_actor_staging', \"teiid_rel:ALLOW_MATVIEW_MANAGEMENT\" 'true', \"teiid_rel:MATVIEW_STATUS_TABLE\" 'pg.public.status', \"teiid_rel:MATVIEW_BEFORE_LOAD_SCRIPT\" 'execute pg.native(''truncate table mat_actor_staging'');', \"teiid_rel:MATVIEW_AFTER_LOAD_SCRIPT\" 'execute pg.native(''ALTER TABLE mat_actor RENAME TO mat_actor_temp'');execute pg.native(''ALTER TABLE mat_actor_staging RENAME TO mat_actor'');execute pg.native(''ALTER TABLE mat_actor_temp RENAME TO mat_actor_staging;'')', \"teiid_rel:MATVIEW_SHARE_SCOPE\" 'NONE', \"teiid_rel:MATVIEW_ONERROR_ACTION\" 'THROW_EXCEPTION', \"teiid_rel:MATVIEW_TTL\" 300000, \"teiid_rel:ON_VDB_DROP_SCRIPT\" 'DELETE FROM pg.public.status WHERE Name=''actor'' AND schemaname = ''sakila''') AS SELECT actor_id, first_name, last_name, last_update from pg.\"public\".actor; </metadata> </model> <translator name=\"postgresql-override\" type=\"postgresql\"> <property name=\"SupportsNativeQueries\" value=\"true\"/> </translator> </vdb>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/external_materialization_usage_steps
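The load-and-swap refresh strategy described above can be driven by any scheduler or script that can issue SQL against the materialization target. The following Python sketch is a hypothetical illustration of that strategy, not part of the product: the connection string and the use of psycopg2 are assumptions, the mat_actor and mat_actor_staging table names simply mirror the sample VDB above, and in practice the load itself would normally run through Red Hat JBoss Data Virtualization so that the view transformation is applied.

import psycopg2  # assumption: a PostgreSQL materialization target

# Assumption for illustration only; point this at your own target database.
TARGET_DSN = "dbname=sakila user=matview_owner host=db.example.com"

def refresh_snapshot(load_rows):
    """Truncate the staging table, reload it, then swap it with the live table."""
    with psycopg2.connect(TARGET_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("TRUNCATE TABLE mat_actor_staging")
            # load_rows is a callable supplied by the caller that fills the
            # staging table, for example by selecting from the virtual view
            # through a Data Virtualization connection.
            load_rows(cur)
            # Swap staging and live tables so readers never see a partial load.
            cur.execute("ALTER TABLE mat_actor RENAME TO mat_actor_old")
            cur.execute("ALTER TABLE mat_actor_staging RENAME TO mat_actor")
            cur.execute("ALTER TABLE mat_actor_old RENAME TO mat_actor_staging")
        # Leaving the with-block commits the whole refresh as one transaction.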
function::sprint_ubacktrace
function::sprint_ubacktrace Name function::sprint_ubacktrace - Return stack back trace for current task as string. EXPERIMENTAL! Synopsis sprint_ubacktrace:string() Arguments None Description Returns a simple backtrace for the current task. One line per address. Includes the symbol name (or hex address if symbol couldn't be resolved) and module name (if found). Includes the offset from the start of the function if found, otherwise the offset will be added to the module (if found, between brackets). Returns the backtrace as string (each line terminated by a newline character). Note that the returned stack will be truncated to MAXSTRINGLEN, to print fuller and richer stacks use print_ubacktrace . Equivalent to sprint_ustack( ubacktrace ), but more efficient (no need to translate between hex strings and final backtrace string). Note To get (full) backtraces for user space applications and shared libraries not mentioned in the current script run stap with -d /path/to/exe-or-so and/or add --ldd to load all needed unwind data.
[ "function sprint_ubacktrace:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-sprint-ubacktrace
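For readers more familiar with user-space languages, the idea of returning a backtrace as a single string, one line per frame, has a close analogue outside SystemTap. The Python sketch below is only a conceptual parallel (it inspects the Python interpreter's own call stack, not a probed process), intended to show the kind of string sprint_ubacktrace returns and why a length cap such as MAXSTRINGLEN matters; the 512-byte limit is an arbitrary stand-in.

import traceback

MAX_STRING_LEN = 512  # arbitrary stand-in for SystemTap's MAXSTRINGLEN truncation

def sprint_backtrace():
    """Return the current call stack as one newline-separated string,
    truncated to MAX_STRING_LEN, loosely mirroring sprint_ubacktrace()."""
    frames = traceback.format_stack()[:-1]  # drop this helper's own frame
    return "".join(frames)[:MAX_STRING_LEN]

def inner():
    print(sprint_backtrace())

def outer():
    inner()

if __name__ == "__main__":
    outer()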
Chapter 4. FlowSchema [flowcontrol.apiserver.k8s.io/v1beta3]
Chapter 4. FlowSchema [flowcontrol.apiserver.k8s.io/v1beta3] Description FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a "flow distinguisher". Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object FlowSchemaSpec describes how the FlowSchema's specification looks like. status object FlowSchemaStatus represents the current state of a FlowSchema. 4.1.1. .spec Description FlowSchemaSpec describes how the FlowSchema's specification looks like. Type object Required priorityLevelConfiguration Property Type Description distinguisherMethod object FlowDistinguisherMethod specifies the method of a flow distinguisher. matchingPrecedence integer matchingPrecedence is used to choose among the FlowSchemas that match a given request. The chosen FlowSchema is among those with the numerically lowest (which we take to be logically highest) MatchingPrecedence. Each MatchingPrecedence value must be ranged in [1,10000]. Note that if the precedence is not specified, it will be set to 1000 as default. priorityLevelConfiguration object PriorityLevelConfigurationReference contains information that points to the "request-priority" being used. rules array rules describes which requests will match this flow schema. This FlowSchema matches a request if and only if at least one member of rules matches the request. if it is an empty slice, there will be no requests matching the FlowSchema. rules[] object PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request. 4.1.2. .spec.distinguisherMethod Description FlowDistinguisherMethod specifies the method of a flow distinguisher. Type object Required type Property Type Description type string type is the type of flow distinguisher method The supported types are "ByUser" and "ByNamespace". Required. 4.1.3. .spec.priorityLevelConfiguration Description PriorityLevelConfigurationReference contains information that points to the "request-priority" being used. Type object Required name Property Type Description name string name is the name of the priority level configuration being referenced Required. 4.1.4. .spec.rules Description rules describes which requests will match this flow schema. This FlowSchema matches a request if and only if at least one member of rules matches the request. 
if it is an empty slice, there will be no requests matching the FlowSchema. Type array 4.1.5. .spec.rules[] Description PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request. Type object Required subjects Property Type Description nonResourceRules array nonResourceRules is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL. nonResourceRules[] object NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request. resourceRules array resourceRules is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of resourceRules and nonResourceRules has to be non-empty. resourceRules[] object ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., Namespace=="" ) and clusterScope is true or (d2) the request specifies a namespace and least one member of namespaces matches the request's namespace. subjects array subjects is the list of normal user, serviceaccount, or group that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required. subjects[] object Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator; by user, group, or service account. 4.1.6. .spec.rules[].nonResourceRules Description nonResourceRules is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL. Type array 4.1.7. .spec.rules[].nonResourceRules[] Description NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request. Type object Required verbs nonResourceURLs Property Type Description nonResourceURLs array (string) nonResourceURLs is a set of url prefixes that a user should have access to and may not be empty. For example: - "/healthz" is legal - "/hea*" is illegal - "/hea" is legal but matches nothing - "/hea/ " also matches nothing - "/healthz/ " matches all per-component health checks. "*" matches all non-resource urls. if it is present, it must be the only entry. Required. verbs array (string) verbs is a list of matching verbs and may not be empty. "*" matches all verbs. 
If it is present, it must be the only entry. Required. 4.1.8. .spec.rules[].resourceRules Description resourceRules is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of resourceRules and nonResourceRules has to be non-empty. Type array 4.1.9. .spec.rules[].resourceRules[] Description ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., Namespace=="" ) and clusterScope is true or (d2) the request specifies a namespace and at least one member of namespaces matches the request's namespace. Type object Required verbs apiGroups resources Property Type Description apiGroups array (string) apiGroups is a list of matching API groups and may not be empty. "*" matches all API groups and, if present, must be the only entry. Required. clusterScope boolean clusterScope indicates whether to match requests that do not specify a namespace (which happens either because the resource is not namespaced or the request targets all namespaces). If this field is omitted or false then the namespaces field must contain a non-empty list. namespaces array (string) namespaces is a list of target namespaces that restricts matches. A request that specifies a target namespace matches only if either (a) this list contains that target namespace or (b) this list contains "*". Note that "*" matches any specified namespace but does not match a request that does not specify a namespace (see the clusterScope field for that). This list may be empty, but only if clusterScope is true. resources array (string) resources is a list of matching resources (i.e., lowercase and plural) with, if desired, subresource. For example, [ "services", "nodes/status" ]. This list may not be empty. "*" matches all resources and, if present, must be the only entry. Required. verbs array (string) verbs is a list of matching verbs and may not be empty. "*" matches all verbs and, if present, must be the only entry. Required. 4.1.10. .spec.rules[].subjects Description subjects is the list of normal user, serviceaccount, or group that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required. Type array 4.1.11. .spec.rules[].subjects[] Description Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator; by user, group, or service account. Type object Required kind Property Type Description group object GroupSubject holds detailed information for group-kind subject. kind string kind indicates which one of the other fields is non-empty. Required serviceAccount object ServiceAccountSubject holds detailed information for service-account-kind subject. user object UserSubject holds detailed information for user-kind subject. 4.1.12. .spec.rules[].subjects[].group Description GroupSubject holds detailed information for group-kind subject. Type object Required name Property Type Description name string name is the user group that matches, or "*" to match all user groups. 
See https://github.com/kubernetes/apiserver/blob/master/pkg/authentication/user/user.go for some well-known group names. Required. 4.1.13. .spec.rules[].subjects[].serviceAccount Description ServiceAccountSubject holds detailed information for service-account-kind subject. Type object Required namespace name Property Type Description name string name is the name of matching ServiceAccount objects, or "*" to match regardless of name. Required. namespace string namespace is the namespace of matching ServiceAccount objects. Required. 4.1.14. .spec.rules[].subjects[].user Description UserSubject holds detailed information for user-kind subject. Type object Required name Property Type Description name string name is the username that matches, or "*" to match all usernames. Required. 4.1.15. .status Description FlowSchemaStatus represents the current state of a FlowSchema. Type object Property Type Description conditions array conditions is a list of the current states of FlowSchema. conditions[] object FlowSchemaCondition describes conditions for a FlowSchema. 4.1.16. .status.conditions Description conditions is a list of the current states of FlowSchema. Type array 4.1.17. .status.conditions[] Description FlowSchemaCondition describes conditions for a FlowSchema. Type object Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. Required. type string type is the type of the condition. Required. 4.2. API endpoints The following API endpoints are available: /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas DELETE : delete collection of FlowSchema GET : list or watch objects of kind FlowSchema POST : create a FlowSchema /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/flowschemas GET : watch individual changes to a list of FlowSchema. deprecated: use the 'watch' parameter with a list operation instead. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas/{name} DELETE : delete a FlowSchema GET : read the specified FlowSchema PATCH : partially update the specified FlowSchema PUT : replace the specified FlowSchema /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/flowschemas/{name} GET : watch changes to an object of kind FlowSchema. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas/{name}/status GET : read status of the specified FlowSchema PATCH : partially update status of the specified FlowSchema PUT : replace status of the specified FlowSchema 4.2.1. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas Table 4.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of FlowSchema Table 4.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.3. Body parameters Parameter Type Description body DeleteOptions schema Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind FlowSchema Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK FlowSchemaList schema 401 - Unauthorized Empty HTTP method POST Description create a FlowSchema Table 4.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.8. Body parameters Parameter Type Description body FlowSchema schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 202 - Accepted FlowSchema schema 401 - Unauthorized Empty 4.2.2. /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/flowschemas Table 4.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of FlowSchema. deprecated: use the 'watch' parameter with a list operation instead. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas/{name} Table 4.12. Global path parameters Parameter Type Description name string name of the FlowSchema Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a FlowSchema Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. 
Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified FlowSchema Table 4.17. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified FlowSchema Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.19. Body parameters Parameter Type Description body Patch schema Table 4.20. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified FlowSchema Table 4.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.22. Body parameters Parameter Type Description body FlowSchema schema Table 4.23. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty 4.2.4. /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/flowschemas/{name} Table 4.24. Global path parameters Parameter Type Description name string name of the FlowSchema Table 4.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description watch changes to an object of kind FlowSchema. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas/{name}/status Table 4.27. Global path parameters Parameter Type Description name string name of the FlowSchema Table 4.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified FlowSchema Table 4.29. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified FlowSchema Table 4.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.31. Body parameters Parameter Type Description body Patch schema Table 4.32. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified FlowSchema Table 4.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.34. Body parameters Parameter Type Description body FlowSchema schema Table 4.35. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty
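For orientation, the following manifest is a minimal sketch of a FlowSchema that exercises the fields documented above. The object name, the referenced priority level, and the rule contents are illustrative assumptions, not values required by the API.

apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: FlowSchema
metadata:
  name: example-read-only                # illustrative name
spec:
  matchingPrecedence: 1000               # defaults to 1000 if omitted; must be in [1,10000]
  priorityLevelConfiguration:
    name: global-default                 # assumes a PriorityLevelConfiguration with this name exists
  distinguisherMethod:
    type: ByUser                         # supported types are "ByUser" and "ByNamespace"
  rules:
  - subjects:
    - kind: Group
      group:
        name: system:authenticated
    resourceRules:
    - verbs: ["get", "list", "watch"]
      apiGroups: ["*"]
      resources: ["*"]
      clusterScope: true
      namespaces: ["*"]

Such a manifest can be submitted through the POST endpoint listed in section 4.2.1, for example with oc create -f <file>.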
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/schedule_and_quota_apis/flowschema-flowcontrol-apiserver-k8s-io-v1beta3
Chapter 13. Authentication and Interoperability
Chapter 13. Authentication and Interoperability Improved Identity Management Cross-Realm Trusts to Active Directory The following improvements have been implemented in the cross-realm trusts to Active Directory feature of Red Hat Enterprise Linux: Multiple Active Directory domains are supported in the trusted forest; Access of users belonging to separate Active Directory domains in the trusted forest can be selectively disabled and enabled on a per-domain level; Manually defined POSIX identifiers for users and groups from trusted Active Directory domains can be used instead of automatically assigned identifiers; Active Directory users and groups coming from the trusted domains can be exported to legacy POSIX systems through the LDAP compatibility tree; For Active Directory users exported through the LDAP compatibility tree, authentication can be performed against the Identity Management LDAP server. As a result, both Identity Management and trusted Active Directory users are accessible to legacy POSIX systems in a unified way. Support of POSIX User and Group IDs In Active Directory The Identity Management implementation of cross-realm trusts to Active Directory supports existing POSIX user and group ID attributes in Active Directory. When explicit mappings are not defined on the Active Directory side, algorithmic mapping based on the user or group Security Identifier (SID) is applied. Use of AD and LDAP sudo Providers The AD provider is a back end used to connect to an Active Directory server. In Red Hat Enterprise Linux 7, using the AD sudo provider together with the LDAP provider is supported as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the domain section of the sssd.conf file. Support of CA-Less Installations IPA supports installation without an embedded Certificate Authority, using user-provided SSL certificates for the HTTP and Directory servers. The administrator is responsible for issuing and rotating service and host certificates manually. FreeIPA GUI Improvements Red Hat Enterprise Linux 7 brings a number of improvements to the FreeIPA graphical interface, the most notable of which are the following: All dialog windows can be confirmed with the Enter key even when the appropriate button or the dialog window does not have the focus; Loading of the web UI is significantly faster because of compression of web UI assets and RPC communication; Drop-down lists can be controlled by keyboard. Reclaiming IDs of Deleted Replicas User and group ID ranges that belong to deleted replicas can be transferred to a suitable replica if one exists. This prevents potential exhaustion of the ID space. Additionally, ID ranges can be managed manually with the ipa-replica-manage tool. Re-Enrolling Clients Using Existing Keytab Files A host that has been recreated and does not have its host entry disabled or removed can be re-enrolled using a previously backed up keytab file. This ensures easy re-enrolling of the IPA client system after the user rebuilds it. Prompt for DNS During server interactive installation, the user is asked whether to install the DNS component. Previously, the DNS feature was installed only when the --setup-dns option was passed to the installer, leading to users not being aware of the feature. Enhanced SSHFP DNS Records DNS support in Identity Management was extended with support for the RFC 6594 standard. This allows users to publish Elliptic Curve Digital Signature Algorithm (ECDSA) keys and SHA-256 hashes in SSH fingerprint (SSHFP) records. 
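To illustrate the AD sudo provider setting described above under Use of AD and LDAP sudo Providers, a minimal sssd.conf domain section might look like the following. The domain name ad.example.com and the id_provider and access_provider lines are placeholder assumptions for a typical AD-joined client; only the sudo_provider = ad line is the setting named in the text.

[domain/ad.example.com]
# Typical AD back end settings (assumed)
id_provider = ad
access_provider = ad
# Technology Preview setting described above
sudo_provider = ad

After editing sssd.conf, restart the sssd service for the change to take effect.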
Filtering Groups by Type New flags, --posix , --nonposix , --external , can be used to filter groups by type: POSIX group is a group with the posixGroup object class; Non-POSIX group is a group which is not POSIX or external, which means the group does not have the posixGroup or ipaExternalGroup object class; External group is a group with the ipaExternalGroup class. Improved Integration with the External Provisioning Systems External provisioning systems often require extra data to correctly process hosts. A new free-form text field, class has been added to the host entries. This field can be used in automatic membership rules. CRL and OCSP DNS Name in Certificate Profiles A round-robin DNS name for the IPA Certificate Authority (CA) now points to all active IPA CA masters. The name is used for CRL and OCSP URIs in the IPA certificate profile. When any of the IPA CA masters is removed or unavailable, it does not affect the ability to check revocation status of any of the certificates issued by the IPA CA. Certificates Search The cert-find command no longer restricts users to searching certificates only by their serial number, but now also by: serial number range; subject name; validity period; revocation status; and issue date. Marking Kerberos Service as Trusted for Delegation of User Keys Individual Identity Management services can be marked to Identity Management tools as trusted for delegation. By checking the ok_as_delegate flag, Microsoft Windows clients can determine whether the user credentials can be forwarded or delegated to a specific server or not. Samba 4.1.0 Red Hat Enterprise Linux 7 includes samba packages upgraded to the latest upstream version, which introduce several bug fixes and enhancements, the most notable of which is support for the SMB3 protocol in the server and client tools. Additionally, SMB3 transport enables encrypted transport connections to Windows servers that support SMB3, as well as Samba servers. Also, Samba 4.1.0 adds support for server-side copy operations. Clients making use of server-side copy support, such as the latest Windows releases, should experience considerable performance improvements for file copy operations. Note that using the Linux kernel CIFS module with SMB protocol 3.1.1 is currently experimental and the functionality is unavailable in kernels provided by Red Hat. Warning The updated samba packages remove several already deprecated configuration options. The most important are the server roles security = share and security = server . Also the web configuration tool SWAT has been completely removed. More details can be found in the Samba 4.0 and 4.1 release notes: https://www.samba.org/samba/history/samba-4.0.0.html https://www.samba.org/samba/history/samba-4.1.0.html Note that several tdb files have been updated. This means that all tdb files are upgraded as soon as you start the new version of the smbd daemon. You cannot downgrade to an older Samba version unless you have backups of the tdb files. For more information about these changes, refer to the Release Notes for Samba 4.0 and 4.1 mentioned above.
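As a sketch of the Filtering Groups by Type feature described earlier in this chapter, the new flags would typically be passed to the IPA group search command; the use of ipa group-find here is an assumption for illustration.

# List only POSIX groups (groups with the posixGroup object class)
$ ipa group-find --posix
# List only external groups (groups with the ipaExternalGroup object class)
$ ipa group-find --external
# List groups that are neither POSIX nor external
$ ipa group-find --nonposix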
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-authentication_and_interoperability
Chapter 5. AMD GPU Integration
Chapter 5. AMD GPU Integration You can use AMD GPUs with OpenShift AI to accelerate AI and machine learning (ML) workloads. AMD GPUs provide high-performance compute capabilities, allowing users to process large data sets, train deep neural networks, and perform complex inference tasks more efficiently. Integrating AMD GPUs with OpenShift AI involves the following components: ROCm workbench images : Use the ROCm workbench images to streamline AI/ML workflows on AMD GPUs. These images include libraries and frameworks optimized with the AMD ROCm platform, enabling high-performance workloads for PyTorch and TensorFlow. The pre-configured images reduce setup time and provide an optimized environment for GPU-accelerated development and experimentation. AMD GPU Operator : The AMD GPU Operator simplifies GPU integration by automating driver installation, device plugin setup, and node labeling for GPU resource management. It ensures compatibility between OpenShift and AMD hardware while enabling scaling of GPU-enabled workloads. 5.1. Verifying AMD GPU availability on your cluster Before you proceed with the AMD GPU Operator installation process, you can verify the presence of an AMD GPU device on a node within your OpenShift cluster. You can use commands such as lspci or oc to confirm hardware and resource availability. Prerequisites You have administrative access to the OpenShift cluster. You have a running OpenShift cluster with a node equipped with an AMD GPU. You have access to the OpenShift CLI ( oc ) and terminal access to the node. Procedure Use the OpenShift CLI to verify if GPU resources are allocatable: List all nodes in the cluster to identify the node with an AMD GPU: Note the name of the node where you expect the AMD GPU to be present. Describe the node to check its resource allocation: In the output, locate the Capacity and Allocatable sections and confirm that amd.com/gpu is listed. For example: Check for the AMD GPU device using the lspci command: Log in to the node: Run the lspci command and search for the supported AMD device in your deployment. For example: Verify that the output includes one of the AMD GPU models. For example: Optional: Use the rocminfo command if the ROCm stack is installed on the node: Confirm that the ROCm tool outputs details about the AMD GPU, such as compute units, memory, and driver status. Verification The oc describe node <node_name> command lists amd.com/gpu under Capacity and Allocatable . The lspci command output identifies an AMD GPU as a PCI device matching one of the specified models (for example, MI210, MI250, MI300). Optional: The rocminfo tool provides detailed GPU information, confirming driver and hardware configuration. Additional resources AMD GPU Operator GitHub Repository 5.2. Enabling AMD GPUs Before you can use AMD GPUs in OpenShift AI, you must install the required dependencies, deploy the AMD GPU Operator, and configure the environment. Prerequisites You have logged in to OpenShift. You have the cluster-admin role in OpenShift. You have installed your AMD GPU and confirmed that it is detected in your environment. Your OpenShift environment supports EC2 DL1 instances if you are running on Amazon Web Services (AWS). Procedure Install the latest version of the AMD GPU Operator, as described in Install AMD GPU Operator on OpenShift . After installing the AMD GPU Operator, configure the AMD drivers required by the Operator as described in the documentation: Configure AMD drivers for the GPU Operator . 
Note Alternatively, you can install the AMD GPU Operator from the Red Hat Catalog. For more information, see Install AMD GPU Operator from Red Hat Catalog . After installing the AMD GPU Operator, create an accelerator profile, as described in Working with accelerator profiles . Verification From the Administrator perspective, go to the Operators → Installed Operators page. Confirm that the following Operators appear: AMD GPU Operator Node Feature Discovery (NFD) Kernel Module Management (KMM) Note Ensure that you follow all the steps for proper driver installation and configuration. Incorrect installation or configuration may prevent the AMD GPUs from being recognized or functioning properly.
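Once the accelerator profile exists, workloads consume GPU capacity by requesting the amd.com/gpu resource shown in the node's Allocatable output above. The following pod manifest is a minimal sketch; the pod name and the container image are placeholders and must be replaced with a ROCm-capable image available in your environment.

apiVersion: v1
kind: Pod
metadata:
  name: rocm-smoke-test                  # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: rocm
    image: <rocm_capable_image>          # placeholder: any image that ships the ROCm tools
    command: ["rocminfo"]                # prints GPU details if the device is visible
    resources:
      limits:
        amd.com/gpu: 1                   # resource name from the node's Allocatable section

If the pod schedules and rocminfo reports the GPU, the Operator, driver, and device plugin are working together.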
[ "get nodes", "describe node <node_name>", "Capacity: amd.com/gpu: 1 Allocatable: amd.com/gpu: 1", "debug node/<node_name> chroot /host", "lspci | grep -E \"MI210|MI250|MI300\"", "03:00.0 Display controller: Advanced Micro Devices, Inc. [AMD] Instinct MI210", "rocminfo" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_accelerators/amd-gpu-integration_accelerators
10.3.2. Saving Traceback Messages
10.3.2. Saving Traceback Messages If anaconda encounters an error during the graphical installation process, it presents you with a crash reporting dialog box: Figure 10.1. The Crash Reporting Dialog Box Details shows you the details of the error: Figure 10.2. Details of the Crash Save saves details of the error locally or remotely: Exit exits the installation process. If you select Save from the main dialog, you can choose from the following options: Figure 10.3. Select reporter Logger saves details of the error as a log file to the local hard drive at a specified location. Red Hat Customer Support submits the crash report to Customer Support for assistance. Report uploader uploads a compressed version of the crash report to Bugzilla or a URL of your choice. Before submitting the report, click Preferences to specify a destination or provide authentication details. Select the reporting method you need to configure and click Configure Event . Figure 10.4. Configure reporter preferences Logger Specify a path and a filename for the log file. Check Append if you are adding to an existing log file. Figure 10.5. Specify local path for log file Red Hat Customer Support Enter your Red Hat Network username and password so your report reaches Customer Support and is linked with your account. The URL is prefilled and Verify SSL is checked by default. Figure 10.6. Enter Red Hat Network authentication details Report uploader Specify a URL for uploading a compressed version of the crash report. Figure 10.7. Enter URL for uploading crash report Bugzilla Enter your Bugzilla username and password to lodge a bug with Red Hat's bug-tracking system using the crash report. The URL is prefilled and Verify SSL is checked by default. Figure 10.8. Enter Bugzilla authentication details Once you have entered your preferences, click OK to return to the report selection dialog. Select how you would like to report the problem and then click Forward . Figure 10.9. Confirm report data You can now customize the report by checking and unchecking the issues that will be included. When finished, click Apply . Figure 10.10. Report in progress This screen displays the outcome of the report, including any errors in sending or saving the log. Click Forward to proceed. Figure 10.11. Reporting done Reporting is now complete. Click Forward to return to the report selection dialog. You can now make another report, or click Close to exit the reporting utility and then Exit to close the installation process.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-trouble-tracebacks-x86
Chapter 24. Load balancing on RHOSP
Chapter 24. Load balancing on RHOSP 24.1. Limitations of load balancer services OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) use Octavia to handle load balancer services. As a result of this choice, such clusters have a number of functional limitations. RHOSP Octavia has two supported providers: Amphora and OVN. These providers differ in terms of available features as well as implementation details. These distinctions affect load balancer services that are created on your cluster. 24.1.1. Local external traffic policies You can set the external traffic policy (ETP) parameter, .spec.externalTrafficPolicy , on a load balancer service to preserve the source IP address of incoming traffic when it reaches service endpoint pods. However, if your cluster uses the Amphora Octavia provider, the source IP of the traffic is replaced with the IP address of the Amphora VM. This behavior does not occur if your cluster uses the OVN Octavia provider. Having the ETP option set to Local requires that health monitors be created for the load balancer. Without health monitors, traffic can be routed to a node that does not have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the create-monitor option in the cloud provider configuration to true . In RHOSP 16.2, the OVN Octavia provider does not support health monitors. Therefore, setting the ETP to local is unsupported. In RHOSP 16.2, the Amphora Octavia provider does not support HTTP monitors on UDP pools. As a result, UDP load balancer services have UDP-CONNECT monitors created instead. Due to implementation details, this configuration only functions properly with the OVN-Kubernetes CNI plugin. 24.2. Scaling clusters for application traffic by using Octavia OpenShift Container Platform clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create. You must create your own Octavia load balancer to use it for application network scaling. 24.2.1. Scaling clusters by using Octavia If you want to use multiple API load balancers, create an Octavia load balancer and then configure your cluster to use it. Prerequisites Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. Procedure From a command line, create an Octavia load balancer that uses the Amphora driver: USD openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet> You can use a name of your choice instead of API_OCP_CLUSTER . After the load balancer becomes active, create listeners: USD openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER Note To view the status of the load balancer, enter openstack loadbalancer list . 
Create a pool that uses the round robin algorithm and has session persistence enabled: USD openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS To ensure that control plane machines are available, create a health monitor: USD openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443 Add the control plane machines as members of the load balancer pool: USD for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done Optional: To reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP Your cluster now uses Octavia for load balancing. 24.3. Services for a user-managed load balancer You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use a user-managed load balancer in place of the default load balancer. Important Configuring a user-managed load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for a user-managed load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 24.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 24.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 24.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for user-managed load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. 
For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 24.3.1. Configuring a user-managed load balancer You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use a user-managed load balancer in place of the default load balancer. Important Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer. Note MetalLB, which runs on a cluster, functions as a user-managed load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. 
The following examples show health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration. Example HAProxy configuration with one listed subnet # ... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Example HAProxy configuration with multiple listed subnets # ... 
listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s # ... Use the curl CLI command to verify that the user-managed load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff 
x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's install-config.yaml file: # ... platform: openstack: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3 # ... 1 Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault , which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services. 2 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. 3 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. 
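Before you run the verification commands in the next section, you can optionally confirm that the DNS records have propagated. This is a hedged sketch using the standard dig utility and is not part of the documented procedure:
dig +short api.<cluster_name>.<base_domain>
dig +short console-openshift-console.apps.<cluster_name>.<base_domain>
Both queries should return the front-end IP address of the user-managed load balancer.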
Verification Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 24.4. Specifying a floating IP address in the Ingress Controller By default, a floating IP address gets randomly assigned to your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) upon deployment. This floating IP address is associated with your Ingress port. You might want to pre-create a floating IP address before updating your DNS records and cluster deployment. In this situation, you can define a floating IP address to the Ingress Controller. You can do this regardless of whether you are using Octavia or a user-managed cluster. 
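If you do not already have a floating IP address available, you can pre-create one with the RHOSP CLI before starting the procedure below. This is a sketch only; the network name external and the description are placeholders for your environment:
openstack floating ip create --description "Ingress VIP" external
Record the address that is returned and use it as <ingress_port_IP> in the following procedure.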
Procedure Create the Ingress Controller custom resource (CR) file with the floating IPs: Example Ingress config sample-ingress.yaml apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: OpenStack openstack: floatingIP: <ingress_port_IP> 4 1 The name of your Ingress Controller. If you are using the default Ingress Controller, the value for this field is default . 2 The DNS name serviced by the Ingress Controller. 3 You must set the scope to External to use a floating IP address. 4 The floating IP address associated with the port your Ingress Controller is listening on. Apply the CR file by running the following command: USD oc apply -f sample-ingress.yaml Update your DNS records with the Ingress Controller endpoint: *.apps.<name>.<domain>. IN A <ingress_port_IP> Continue with creating your OpenShift Container Platform cluster. Verification Confirm that the load balancer was successfully provisioned by checking the IngressController conditions using the following command: USD oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC
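As an additional optional check, you can confirm that the LoadBalancer service received the expected address. This sketch assumes the default Ingress Controller, whose service is typically named router-default; adjust the name for a custom controller:
oc -n openshift-ingress get service router-default
The EXTERNAL-IP column should show the floating IP address that you specified in the IngressController CR.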
[ "openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>", "openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER", "openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS", "openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443", "for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done", "openstack floating ip unset USDAPI_FIP", "openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s 
listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "platform: openstack: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 
1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: OpenStack openstack: floatingIP: <ingress_port_IP> 4", "oc apply -f sample-ingress.yaml", "*.apps.<name>.<domain>. IN A <ingress_port_IP>", "oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath=\"{.status.conditions}\" | yq -PC" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/load-balancing-openstack
Chapter 10. GenericSecretSource schema reference
Chapter 10. GenericSecretSource schema reference Used in: KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationCustom , KafkaListenerAuthenticationOAuth Property Description key The key under which the secret value is stored in the OpenShift Secret. string secretName The name of the OpenShift Secret containing the secret value. string
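For illustration only, the fragment below shows how a GenericSecretSource is typically referenced, for example from the clientSecret property of KafkaClientAuthenticationOAuth . The Secret name my-oauth-secret and the key client-secret are placeholder values, and the surrounding properties are abbreviated:
authentication:
  type: oauth
  tokenEndpointUri: https://<auth_server_address>/token
  clientId: my-client
  clientSecret:
    secretName: my-oauth-secret
    key: client-secret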
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-genericsecretsource-reference
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so: Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.1_release_notes/proc_providing-feedback-on-red-hat-documentation
Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads
Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads Red Hat OpenShift Data Foundation 4.18 The OpenShift Data Foundation Disaster Recovery capabilities for Metropolitan and Regional regions are now Generally Available, which also includes Disaster Recovery with stretch cluster. Red Hat Storage Documentation Team Abstract The intent of this solution guide is to detail the steps necessary to deploy OpenShift Data Foundation for disaster recovery with Advanced Cluster Management and stretch cluster to achieve a highly available storage infrastructure. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Introduction to OpenShift Data Foundation Disaster Recovery Disaster recovery (DR) is the ability to recover and continue business critical applications from natural or human created disasters. It is a component of the overall business continuance strategy of any major organization, designed to preserve the continuity of business operations during major adverse events. The OpenShift Data Foundation DR capability enables DR across multiple Red Hat OpenShift Container Platform clusters, and is categorized as follows: Metro-DR Metro-DR ensures business continuity during the unavailability of a data center with no data loss. In the public cloud these would be similar to protecting from an Availability Zone failure. Regional-DR Regional-DR ensures business continuity during the unavailability of a geographical region, accepting some loss of data in a predictable amount. In the public cloud this would be similar to protecting from a region failure. Disaster Recovery with stretch cluster Stretch cluster solution ensures business continuity with no-data loss disaster recovery protection with OpenShift Data Foundation based synchronous replication in a single OpenShift cluster, stretched across two data centers with low latency and one arbiter node. Zone failure in Metro-DR and region failure in Regional-DR are usually expressed using the terms Recovery Point Objective (RPO) and Recovery Time Objective (RTO) . RPO is a measure of how frequently you take backups or snapshots of persistent data. In practice, the RPO indicates the amount of data that will be lost or need to be reentered after an outage. RTO is the amount of downtime a business can tolerate. The RTO answers the question, "How long can it take for our system to recover after we are notified of a business disruption?" The intent of this guide is to detail the Disaster Recovery steps and commands necessary to be able to failover an application from one OpenShift Container Platform cluster to another and then relocate the same application to the original primary cluster. Chapter 2. 
Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Important OpenShift Data Foundation deployed with Multus networking is not supported for Regional Disaster Recovery (Regional-DR) setups. Chapter 3. Metro-DR solution for OpenShift Data Foundation This section of the guide provides details of the Metro Disaster Recovery (Metro-DR) steps and commands necessary to be able to failover an application from one OpenShift Container Platform cluster to another and then failback the same application to the original primary cluster. In this case the OpenShift Container Platform clusters will be created or imported using Red Hat Advanced Cluster Management (RHACM) and have distance limitations between the OpenShift Container Platform clusters of less than 10ms RTT latency. The persistent storage for applications is provided by an external Red Hat Ceph Storage (RHCS) cluster stretched between the two locations with the OpenShift Container Platform instances connected to this storage cluster. An arbiter node with a storage monitor service is required at a third location (different location than where OpenShift Container Platform instances are deployed) to establish quorum for the RHCS cluster in the case of a site outage. This third location can be in the range of ~100ms RTT from the storage cluster connected to the OpenShift Container Platform instances. This is a general overview of the Metro DR steps required to configure and execute OpenShift Disaster Recovery (ODR) capabilities using OpenShift Data Foundation and RHACM across two distinct OpenShift Container Platform clusters separated by distance. In addition to these two clusters called managed clusters, a third OpenShift Container Platform cluster is required that will be the Red Hat Advanced Cluster Management (RHACM) hub cluster. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . 3.1. Components of Metro-DR solution Metro-DR is composed of Red Hat Advanced Cluster Management for Kubernetes, Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment. RHACM is split into two parts: RHACM Hub: components that run on the multi-cluster control plane. Managed clusters: components that run on the clusters that are managed. For more information about this product, see RHACM documentation and the RHACM "Manage Applications" documentation . 
Red Hat Ceph Storage Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. It significantly lowers the cost of storing enterprise data and helps organizations manage exponential data growth. The software is a robust and modern petabyte-scale storage platform for public or private cloud deployments. For more product information, see Red Hat Ceph Storage . OpenShift Data Foundation OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. It is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack and Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications. OpenShift DR OpenShift DR is a disaster recovery orchestrator for stateful applications across a set of peer OpenShift clusters which are deployed and managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application's state on Persistent Volumes. These include: Protecting an application and its state relationship across OpenShift clusters Failing over an application and its state to a peer cluster Relocate an application and its state to the previously deployed cluster OpenShift DR is split into three components: ODF Multicluster Orchestrator : Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships. OpenShift DR Hub Operator : Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications. OpenShift DR Cluster Operator : Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application. 3.2. Metro-DR deployment workflow This section provides an overview of the steps required to configure and deploy Metro-DR capabilities using the latest versions of Red Hat OpenShift Data Foundation, Red Hat Ceph Storage (RHCS) and Red Hat Advanced Cluster Management for Kubernetes (RHACM) version 2.10 or later, across two distinct OpenShift Container Platform clusters. In addition to two managed clusters, a third OpenShift Container Platform cluster will be required to deploy the Advanced Cluster Management. To configure your infrastructure, perform the below steps in the order given: Ensure requirements across the Hub, Primary and Secondary Openshift Container Platform clusters that are part of the DR solution are met. See Requirements for enabling Metro-DR . Ensure you meet the requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter. See Requirements for deploying Red Hat Ceph Storage . Deploy and configure Red Hat Ceph Storage stretch mode. For instructions on enabling Ceph cluster on two different data centers using stretched mode functionality, see Deploying Red Hat Ceph Storage . Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Installing OpenShift Data Foundation on managed clusters . Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster . 
Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters . Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster . Note The Metro-DR solution can only have one DRPolicy. Testing your disaster recovery solution with: Subscription-based application: Create sample applications. See Creating sample application . Test failover and relocate operations using the sample application between managed clusters. See Subscription-based application failover and relocating subscription-based application . ApplicationSet-based application: Create sample applications. See Creating ApplicationSet-based applications . Test failover and relocate operations using the sample application between managed clusters. See ApplicationSet-based application failover and relocating ApplicationSet-based application . Discovered applications Ensure all requirements mentioned in Prerequisites are addressed. See Prerequisites for disaster recovery protection of discovered applications Create a sample discovered application. See Creating a sample discovered application Enroll the discovered application. See Enrolling a sample discovered application for disaster recovery protection Test failover and relocate. See Discovered application failover and relocate 3.3. Requirements for enabling Metro-DR The prerequisites to installing a disaster recovery solution supported by Red Hat OpenShift Data Foundation are as follows: You must have the following OpenShift clusters that have network reachability between them: Hub cluster where the Red Hat Advanced Cluster Management (RHACM) for Kubernetes operator is installed. Primary managed cluster where OpenShift Data Foundation is running. Secondary managed cluster where OpenShift Data Foundation is running. Note For configuring hub recovery setup, you need a 4th cluster which acts as the passive hub. The primary managed cluster (Site-1) can be co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used, it can be placed with the secondary cluster at Site-2. For more information, see Configuring passive hub cluster for hub recovery . Hub recovery is a Technology Preview feature and is subject to Technology Preview support limitations. Ensure that the RHACM operator and MultiClusterHub are installed on the Hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Important Ensure that application traffic routing and redirection are configured appropriately. On the Hub cluster Navigate to All Clusters -> Infrastructure -> Clusters . Import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. After the managed clusters are successfully created or imported, you can see the list of clusters that were imported or created on the console. 
For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Warning The OpenShift Container Platform managed clusters and the Red Hat Ceph Storage (RHCS) nodes have distance limitations. The network latency between the sites must be below 10 milliseconds round-trip time (RTT). 3.4. Requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter Red Hat Ceph Storage is an open-source enterprise platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data, so you can focus on the applications and workloads that use it. This section provides a basic overview of the Red Hat Ceph Storage deployment. For more complex deployments, refer to the official documentation guide for Red Hat Ceph Storage 7 . Note Only Flash media is supported since it runs with min_size=1 when degraded. Use stretch mode only with all-flash OSDs. Using all-flash OSDs minimizes the time needed to recover once connectivity is restored, thus minimizing the potential for data loss. Important Erasure coded pools cannot be used with stretch mode. 3.4.1. Hardware requirements For information on minimum hardware requirements for deploying Red Hat Ceph Storage, see Minimum hardware recommendations for containerized Ceph . Table 3.1. Physical server locations and Ceph component layout for Red Hat Ceph Storage cluster deployment: Node name Datacenter Ceph components ceph1 DC1 OSD+MON+MGR ceph2 DC1 OSD+MON ceph3 DC1 OSD+MDS+RGW ceph4 DC2 OSD+MON+MGR ceph5 DC2 OSD+MON ceph6 DC2 OSD+MDS+RGW ceph7 DC3 MON 3.4.2. Software requirements Use the latest software version of Red Hat Ceph Storage 7 . For more information on the supported Operating System versions for Red Hat Ceph Storage, see knowledgebase article on Red Hat Ceph Storage: Supported configurations . 3.4.3. Network configuration requirements The recommended Red Hat Ceph Storage configuration is as follows: You must have two separate networks, one public network and one private network. You must have three different datacenters that support VLANs and subnets for Ceph's private and public networks in all datacenters. Note You can use different subnets for each of the datacenters. The latencies between the two datacenters running the Red Hat Ceph Storage Object Storage Devices (OSDs) cannot exceed 10 ms RTT. For the arbiter datacenter, this was tested with values as high as 100 ms RTT to the other two OSD datacenters. Here is an example of a basic network configuration that we have used in this guide: DC1: Ceph public/private network: 10.0.40.0/24 DC2: Ceph public/private network: 10.0.40.0/24 DC3: Ceph public/private network: 10.0.40.0/24 For more information on the required network environment, see Ceph network configuration . 3.5. Deploying Red Hat Ceph Storage 3.5.1. Node pre-deployment steps Before installing the Red Hat Ceph Storage cluster, perform the following steps to fulfill all the requirements needed. 
Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool: subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0 Enable access for all the nodes in the Ceph cluster for the following repositories: rhel9-for-x86_64-baseos-rpms rhel9-for-x86_64-appstream-rpms subscription-manager repos --disable="*" --enable="rhel9-for-x86_64-baseos-rpms" --enable="rhel9-for-x86_64-appstream-rpms" Update the operating system RPMs to the latest version and reboot if needed: dnf update -y reboot Select a node from the cluster to be your bootstrap node. ceph1 is our bootstrap node in this example going forward. Only on the bootstrap node ceph1 , enable the ansible-2.9-for-rhel-9-x86_64-rpms and rhceph-6-tools-for-rhel-9-x86_64-rpms repositories: subscription-manager repos --enable="ansible-2.9-for-rhel-9-x86_64-rpms" --enable="rhceph-6-tools-for-rhel-9-x86_64-rpms" Configure the hostname using the bare/short hostname in all the hosts. hostnamectl set-hostname <short_name> Verify the hostname configuration for deploying Red Hat Ceph Storage with cephadm. USD hostname Example output: Modify /etc/hosts file and add the fqdn entry to the 127.0.0.1 IP by setting the DOMAIN variable with our DNS domain name. Check the long hostname with the fqdn using the hostname -f option. USD hostname -f Example output: Note To know more about why these changes are required, see Fully Qualified Domain Names vs Bare Host Names . Run the following steps on the bootstrap node. In our example, the bootstrap node is ceph1 . Install the cephadm-ansible RPM package: USD sudo dnf install -y cephadm-ansible Important To run the ansible playbooks, you must have ssh passwordless access to all the nodes that are configured to the Red Hat Ceph Storage cluster. Ensure that the configured user (for example, deployment-user ) has root privileges to invoke the sudo command without needing a password. To use a custom key, configure the selected user (for example, deployment-user ) ssh config file to specify the id/key that will be used for connecting to the nodes via ssh: cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF Build the ansible inventory cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF Note Here, the Hosts ( Ceph1 and Ceph4 ) belonging to two different data centers are configured as part of the [admin] group on the inventory file and are tagged as _admin by cephadm . Each of these admin nodes receive the admin ceph keyring during the bootstrap process so that when one data center is down, we can check using the other available admin node. Verify that ansible can access all nodes using the ping module before running the pre-flight playbook. USD ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b Example output: Navigate to the /usr/share/cephadm-ansible directory. Run ansible-playbook with relative file paths. USD ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" The preflight playbook Ansible playbook configures the RHCS dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible . For additional information, see Running the preflight playbook 3.5.2. 
Cluster bootstrapping and service deployment with cephadm utility The cephadm utility installs and starts a single Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node where the cephadm bootstrap command is run. In this guide we are going to bootstrap the cluster and deploy all the needed Red Hat Ceph Storage services in one step using a cluster specification yaml file. If you find issues during the deployment, it may be easier to troubleshoot the errors by dividing the deployment into two steps: Bootstrap Service deployment Note For additional information on the bootstrapping process, see Bootstrapping a new storage cluster . Procedure Create json file to authenticate against the container registry using a json file as follows: USD cat <<EOF > /root/registry.json { "url":"registry.redhat.io", "username":"User", "password":"Pass" } EOF Create a cluster-spec.yaml that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run following table 3.1. cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: "mon" --- service_type: mds service_id: cephfs placement: label: "mds" --- service_type: mgr service_name: mgr placement: label: "mgr" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: "osd" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: "rgw" spec: rgw_frontend_port: 8080 EOF Retrieve the IP for the NIC with the Red Hat Ceph Storage public network configured from the bootstrap node. After substituting 10.0.40.0 with the subnet that you have defined in your ceph public network, execute the following command. USD ip a | grep 10.0.40 Example output: Run the cephadm bootstrap command as the root user on the node that will be the initial Monitor node in the cluster. The IP_ADDRESS option is the node's IP address that you are using to run the cephadm bootstrap command. Note If you have configured a different user instead of root for passwordless SSH access, then use the --ssh-user= flag with the cepadm bootstrap command. If you are using non default/id_rsa ssh key names, then use --ssh-private-key and --ssh-public-key options with cephadm command. USD cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json Important If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line. 
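For example, a bootstrap invocation for hosts that use FQDNs might look like the following. This is a sketch that reuses the environment-specific values shown above; substitute your own user, IP address, and file paths:
cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json --allow-fqdn-hostname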
Once the bootstrap finishes, you will see the following output from the cephadm bootstrap command: You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/ Verify the status of Red Hat Ceph Storage cluster deployment using the Ceph CLI client from ceph1: USD ceph -s Example output: Note It may take several minutes for all the services to start. It is normal to get a global recovery event while you do not have any OSDs configured. You can use ceph orch ps and ceph orch ls to further check the status of the services. Verify that all the nodes are part of the cephadm cluster. USD ceph orch host ls Example output: Note You can run Ceph commands directly from the host because ceph1 was configured in the cephadm-ansible inventory as part of the [admin] group. The Ceph admin keys were copied to the host during the cephadm bootstrap process. Check the current placement of the Ceph monitor services on the datacenters. USD ceph orch ps | grep mon | awk '{print USD1 " " USD2}' Example output: Check the current placement of the Ceph manager services on the datacenters. Example output: Check the ceph osd crush map layout to ensure that each host has one OSD configured and its status is UP . Also, double-check that each node is under the right datacenter bucket as specified in Table 3.1. USD ceph osd tree Example output: Create and enable a new RBD block pool. Note The number 32 at the end of the command is the number of PGs assigned to this pool. The number of PGs can vary depending on several factors like the number of OSDs in the cluster, expected % used of the pool, etc. You can use the following calculator to determine the number of PGs needed: Ceph Placement Groups (PGs) per Pool Calculator . Verify that the RBD pool has been created. Example output: Verify that the MDS services are active and that one service is located on each datacenter. Example output: Create the CephFS volume. USD ceph fs volume create cephfs Note The ceph fs volume create command also creates the needed data and meta CephFS pools. For more information, see Configuring and Mounting Ceph File Systems . Check the Ceph status to verify how the MDS daemons have been deployed. Ensure that the state is active where ceph6 is the primary MDS for this filesystem and ceph3 is the secondary MDS. USD ceph fs status Example output: Verify that RGW services are active. USD ceph orch ps | grep rgw Example output: 3.5.3. Configuring Red Hat Ceph Storage stretch mode Once the Red Hat Ceph Storage cluster is fully deployed using cephadm , use the following procedure to configure the stretch cluster mode. The new stretch mode is designed to handle the 2-site case. Procedure Check the current election strategy being used by the monitors with the ceph mon dump command. By default in a Ceph cluster, the election strategy is set to classic. ceph mon dump | grep election_strategy Example output: Change the monitor election to connectivity. ceph mon set election_strategy connectivity Run the ceph mon dump command again to verify the election_strategy value. USD ceph mon dump | grep election_strategy Example output: To know more about the different election strategies, see Configuring monitor election strategy . 
Set the location for all the Ceph monitors:

ceph mon set_location ceph1 datacenter=DC1
ceph mon set_location ceph2 datacenter=DC1
ceph mon set_location ceph4 datacenter=DC2
ceph mon set_location ceph5 datacenter=DC2
ceph mon set_location ceph7 datacenter=DC3

Verify that each monitor has its appropriate location.

$ ceph mon dump

Example output:
To create a CRUSH rule that makes use of this OSD CRUSH topology, first install the ceph-base RPM package, which provides the crushtool command:

$ dnf -y install ceph-base

To know more about CRUSH rulesets, see Ceph CRUSH ruleset .
Get the compiled CRUSH map from the cluster:

$ ceph osd getcrushmap > /etc/ceph/crushmap.bin

Decompile the CRUSH map into a text file so that you can edit it:

$ crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt

Add the following rule at the end of the CRUSH map by editing the text file /etc/ceph/crushmap.txt (a sketch of the complete rule, assembled from the field descriptions below, is shown at the end of this section).

$ vim /etc/ceph/crushmap.txt

This example is applicable for active applications in both OpenShift Container Platform clusters.
Note
The rule id has to be unique. In this example, there is already one CRUSH rule with id 0, hence id 1 is used. If your deployment has more rules created, then use the next free id.
The CRUSH rule declared contains the following information:
Rule name
Description: A unique name for identifying the rule.
Value: stretch_rule
id
Description: A unique whole number for identifying the rule.
Value: 1
type
Description: Describes whether the rule is for replicated or erasure-coded storage.
Value: replicated
min_size
Description: If a pool makes fewer replicas than this number, CRUSH will not select this rule.
Value: 1
max_size
Description: If a pool makes more replicas than this number, CRUSH will not select this rule.
Value: 10
step take default
Description: Takes the root bucket called default , and begins iterating down the tree.
step choose firstn 0 type datacenter
Description: Selects the datacenter bucket, and goes into its subtrees.
step chooseleaf firstn 2 type host
Description: Selects the number of buckets of the given type. In this case, it is two different hosts located in the datacenter it entered at the previous level.
step emit
Description: Outputs the current value and empties the stack. Typically used at the end of a rule, but may also be used to pick from different trees in the same rule.
Compile the new CRUSH map from the file /etc/ceph/crushmap.txt and convert it to a binary file called /etc/ceph/crushmap2.bin :

$ crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin

Inject the new CRUSH map back into the cluster:

$ ceph osd setcrushmap -i /etc/ceph/crushmap2.bin

Example output:
Note
The number 17 is a counter and it will increase (18, 19, and so on) depending on the changes you make to the CRUSH map.
Verify that the stretch rule you created is now available for use.

ceph osd crush rule ls

Example output:
Enable the stretch cluster mode.

$ ceph mon enable_stretch_mode ceph7 stretch_rule datacenter

In this example, ceph7 is the arbiter node, stretch_rule is the CRUSH rule created in the previous step, and datacenter is the dividing bucket.
Verify that all pools are using the stretch_rule CRUSH rule created in the Ceph cluster:

$ for pool in $(rados lspools);do echo -n "Pool: ${pool}; ";ceph osd pool get ${pool} crush_rule;done

Example output:
This indicates that a working Red Hat Ceph Storage stretched cluster with arbiter mode is now available.
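For reference, the complete stretch_rule assembled from the field descriptions above looks similar to the following. This is a sketch: adjust the id if 1 is already in use in your decompiled map. It can be appended to the text file before compiling, for example with a heredoc instead of an editor:

cat <<EOF >> /etc/ceph/crushmap.txt
rule stretch_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        # start at the default root and descend into every datacenter bucket
        step take default
        step choose firstn 0 type datacenter
        # pick two different hosts inside each selected datacenter
        step chooseleaf firstn 2 type host
        step emit
}
EOF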
3.6. Installing OpenShift Data Foundation on managed clusters
To configure storage replication between the two OpenShift Container Platform clusters, the OpenShift Data Foundation operator must be installed first on each managed cluster.
Prerequisites
Ensure that you have met the hardware requirements for OpenShift Data Foundation external deployments. For a detailed description of the hardware requirements, see External mode requirements .
Procedure
Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters.
After installing the operator, create a StorageSystem using the option Full deployment type and Connect with external storage platform where your Backing storage type is Red Hat Ceph Storage .
For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode .
Use the following flags with the ceph-external-cluster-details-exporter.py script. At a minimum, you must use these three flags:
--rbd-data-pool-name
With the name of the RBD pool that was created during RHCS deployment for OpenShift Container Platform. For example, the pool can be called rbdpool .
--rgw-endpoint
Provide the endpoint in the format <ip_address>:<port> . It is the RGW IP of the RGW daemon running on the same site as the OpenShift Container Platform cluster that you are configuring.
--run-as-user
With a different client name for each site.
The following flags are optional if default values were used during the RHCS deployment:
--cephfs-filesystem-name
With the name of the CephFS filesystem created during RHCS deployment for OpenShift Container Platform, the default filesystem name is cephfs .
--cephfs-data-pool-name
With the name of the CephFS data pool created during RHCS deployment for OpenShift Container Platform, the default pool is called cephfs.data .
--cephfs-metadata-pool-name
With the name of the CephFS metadata pool created during RHCS deployment for OpenShift Container Platform, the default pool is called cephfs.meta .
Run the following command on the bootstrap node, ceph1, to get the IPs for the RGW endpoints in datacenter1 and datacenter2:
Example output:
Example output:
Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the first OpenShift Container Platform managed cluster cluster1 on the bootstrap node ceph1 .
Note
Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment.
Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the second OpenShift Container Platform managed cluster cluster2 on the bootstrap node ceph1 .
Note
Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment.
Save the two files, ocp-cluster1.json and ocp-cluster2.json , that were generated on the bootstrap node (ceph1) to your local machine.
Use the contents of the file ocp-cluster1.json on the OpenShift Container Platform console on cluster1 where external OpenShift Data Foundation is being deployed.
Use the contents of the file ocp-cluster2.json on the OpenShift Container Platform console on cluster2 where external OpenShift Data Foundation is being deployed.
Review the settings and then select Create StorageSystem .
Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command (a sketch of both queries is shown after this step):
For the Multicloud Gateway (MCG):
Wait for the status result to be Ready for both queries on the Primary managed cluster and the Secondary managed cluster .
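A sketch of the two status queries referred to above, assuming the default resource names that an external-mode deployment creates in the openshift-storage namespace (ocs-external-storagecluster for the StorageCluster and noobaa for the Multicloud Object Gateway):

# StorageCluster status
oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{"\n"}'
# Multicloud Object Gateway (MCG) status
oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{"\n"}'

Both queries should return Ready before you continue.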
On the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-external-storagecluster-storagesystem -> Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. Enable read affinity for RBD and CephFS volumes to be served from the nearest datacenter. On the Primary managed cluster, label all the nodes. Execute the following commands to enable read affinity: On the Secondary managed cluster, label all the nodes: Execute the following commands to enable read affinity: 3.7. Installing OpenShift Data Foundation Multicluster Orchestrator operator OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform's OperatorHub on the Hub cluster. Procedure On the Hub cluster , navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator . Click ODF Multicluster Orchestrator tile. Keep all default settings and click Install . Ensure that the operator resources are installed in openshift-operators project and available to all namespaces. Note The ODF Multicluster Orchestrator also installs the Openshift DR Hub Operator on the RHACM hub cluster as a dependency. Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in openshift-operators namespace. Example output: 3.8. Configuring SSL access across clusters Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. Procedure Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt . Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt . Create a new ConfigMap file to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml . Note There could be more or less than three certificates for each cluster as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before. Create the ConfigMap on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: Patch default proxy resource on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: 3.9. Creating Disaster Recovery Policy on Hub cluster Openshift Disaster Recovery Policy (DRPolicy) resource specifies OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster scoped resource that users can apply to applications that require Disaster Recovery solution. The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console . Prerequisites Ensure that there is a minimum set of two managed clusters. Procedure On the OpenShift console , navigate to All Clusters -> Data Services -> Disaster recovery . On the Overview tab, click Create a disaster recovery policy or you can navigate to Policies tab and click Create DRPolicy . 
Enter Policy name . Ensure that each DRPolicy has a unique name (for example: ocp4perf1-ocp4perf2 ). Select two clusters from the list of managed clusters to which this new policy will be associated with. Replication policy is automatically set to sync based on the OpenShift clusters selected. Click Create . Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with your unique name. Example output: When a DRPolicy is created, along with it, two DRCluster resources are also created. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded . Note Editing of SchedulingInterval , ReplicationClassSelector , VolumeSnapshotClassSelector and DRClusters field values are not supported in the DRPolicy. Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster . Get the names of the DRClusters on the Hub cluster. Example output: Check S3 access to each bucket created on each managed cluster. Use the DRCluster validation command, where <drcluster_name> is replaced with your unique name. Note Editing of Region and S3ProfileName field values are non supported in DRClusters. Example output: Note Make sure to run commands for both DRClusters on the Hub cluster . Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster . Example output: You can also verify that OpenShift DR Cluster Operator is installed successfully on the OperatorHub of each managed cluster. Verify that the secret is propagated correctly on the Primary managed cluster and the Secondary managed cluster. Match the output with the s3SecretRef from the Hub cluster: 3.10. Configure DRClusters for fencing automation This configuration is required for enabling fencing prior to application failover. In order to prevent writes to the persistent volume from the cluster which is hit by a disaster, OpenShift DR instructs Red Hat Ceph Storage (RHCS) to fence the nodes of the cluster from the RHCS external storage. This section guides you on how to add the IPs or the IP Ranges for the nodes of the DRCluster. 3.10.1. Add node IP addresses to DRClusters Find the IP addresses for all of the OpenShift nodes in the managed clusters by running this command in the Primary managed cluster and the Secondary managed cluster . Example output: Once you have the IP addresses then the DRCluster resources can be modified for each managed cluster. Find the DRCluster names on the Hub Cluster. Example output: Edit each DRCluster to add your unique IP addresses after replacing <drcluster_name> with your unique name. Example output: Note There could be more than six IP addresses. Modify this DRCluster configuration also for IP addresses on the Secondary managed clusters in the peer DRCluster resource (e.g., ocp4perf2). 3.10.2. Add fencing annotations to DRClusters Add the following annotations to all the DRCluster resources. These annotations include details needed for the NetworkFence resource created later in these instructions (prior to testing application failover). Note Replace <drcluster_name> with your unique name. Example output: Make sure to add these annotations for both DRCluster resources (for example: ocp4perf1 and ocp4perf2 ). 3.11. 
Create sample application for testing disaster recovery solution OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for Subscription-based and ApplicationSet-based applications that are managed by RHACM. For more details, see Subscriptions and ApplicationSet documentation. The following sections detail how to create an application and apply a DRPolicy to an application. Subscription-based applications OpenShift users that do not have cluster-admin permissions, see the knowledge article on how to assign necessary permissions to an application user for executing disaster recovery actions. ApplicationSet-based applications OpenShift users that do not have cluster-admin permissions cannot create ApplicationSet-based applications. 3.11.1. Subscription-based applications 3.11.1.1. Creating a sample Subscription-based application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate , we need a sample application. Prerequisites When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster. Use the sample application called busybox as an example. Ensure all external routes of the application are configured using either Global Traffic Manager (GTM) or Global Server Load Balancing (GLSB) service for traffic redirection when the application fails over or is relocated. As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions that belong together, refer to a single Placement Rule to DR protect them as a group. Further create them as a single application for a logical grouping of the subscriptions for future DR actions like failover and relocate. Note If unrelated subscriptions refer to the same Placement Rule for placement actions, they are also DR protected as the DR workflow controls all subscriptions that references the Placement Rule. Procedure On the Hub cluster, navigate to Applications and click Create application . Select type as Subscription . Enter your application Name (for example, busybox ) and Namespace (for example, busybox-sample ). In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples where the Branch is release-4.17 and Path is busybox-odr-metro . Scroll down in the form until you see Deploy application resources on clusters with all specified labels . Select the global Cluster sets or the one that includes the correct managed clusters for your environment. Add a label <name> with its value set to the managed cluster name. Click Create which is at the top right hand corner. On the follow-on screen go to the Topology tab. You should see that there are all Green checkmarks on the application topology. Note To get more information, click on any of the topology elements and a window will appear on the right of the topology view. Validating the sample application deployment. Now that the busybox application has been deployed to your preferred Cluster, the deployment can be validated. Log in to your managed cluster where busybox was deployed by RHACM. Example output: 3.11.1.2. Apply Data policy to sample application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. 
If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters -> Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy -> Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. You can also use the Add application resource option to add multiple resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. Click View more details to view the status of ongoing activities with the policy in use with the application. After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.2. ApplicationSet-based applications 3.11.2.1. Creating ApplicationSet-based applications Prerequisite Ensure that the Red Hat OpenShift GitOps operator is installed on all three clusters: Hub cluster , Primary managed cluster and Secondary managed cluster . For instructions, see Installing Red Hat OpenShift GitOps Operator in web console . On the Hub cluster, ensure that both Primary and Secondary managed clusters are registered to GitOps. For registration instructions, see Registering managed clusters to GitOps . Then check if the Placement used by GitOpsCluster resource to register both managed clusters, has the tolerations to deal with cluster unavailability. You can verify if the following tolerations are added to the Placement using the command oc get placement <placement-name> -n openshift-gitops -o yaml . In case the tolerations are not added, see Configuring application placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps . Ensure that you have created the ClusterRoleBinding yaml on both the Primary and Secondary managed clusters. For instruction, see the Prerequisites chapter in RHACM documentation . Procedure On the Hub cluster, navigate to All Clusters -> Applications and click Create application . Choose the application type as Argo CD ApplicationSet - Pull model . In the General step, enter your Application set name . Select Argo server openshift-gitops and Requeue time as 180 seconds. Click . In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples Select Revision as release-4.17 Choose Path as busybox-odr-metro . Enter Remote namespace value. (example, busybox-sample) and click . Choose the Sync policy settings as per your requirement or go with the default selections, and then click . You can choose one or more options. In Label expressions, add a label <name> with its value set to the managed cluster name. Click . Review the setting details and click Submit . 3.11.2.2. 
Apply Data policy to sample ApplicationSet-based application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters -> Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy -> Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.3. Deleting sample application This section provides instructions for deleting the sample application busybox using the RHACM console. Important When deleting a DR protected application, access to both clusters that belong to the DRPolicy is required. This is to ensure that all protected API resources and resources in the respective S3 stores are cleaned up as part of removing the DR protection. If access to one of the clusters is not healthy, deleting the DRPlacementControl resource for the application, on the hub, would remain in the Deleting state. Prerequisites These instructions to delete the sample application should not be executed until the failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters. Procedure On the RHACM console, navigate to Applications . Search for the sample application to be deleted (for example, busybox ). Click the Action Menu (...) to the application you want to delete. Click Delete application . When the Delete application is selected a new screen will appear asking if the application related resources should also be deleted. Select Remove application related resources checkbox to delete the Subscription and PlacementRule. Click Delete . This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on). In addition to the resources deleted using the RHACM console, delete the DRPlacementControl if it is not auto-deleted after deleting the busybox application. Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample . For ApplicationSet applications, select the project as openshift-gitops . Click OpenShift DR Hub Operator and then click the DRPlacementControl tab. Click the Action Menu (...) to the busybox application DRPlacementControl that you want to delete. Click Delete DRPlacementControl . Click Delete . Note This process can be used to delete any application with a DRPlacementControl resource. 3.12. Subscription-based application failover between managed clusters Perform a failover when a managed cluster becomes unavailable, due to any reason. 
This failover method is application-based. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console -> Infrastructure -> Clusters -> Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . After the Failover application modal is shown, select policy and target cluster to which the associated application will failover in case of a disaster. Click the Select subscription group dropdown to verify the default selection or modify this setting. By default, the subscription group that replicates for the application resources is selected. Check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox application is now failing over to the Secondary-managed cluster . Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.13. ApplicationSet-based application failover between managed clusters Perform a failover when a managed cluster becomes unavailable, due to any reason. This failover method is application-based. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console -> Infrastructure -> Clusters -> Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. 
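A minimal sketch of this edit is shown below. It assumes that the DRCluster resource exposes a clusterFence field with values such as Fenced and Unfenced; confirm the exact field name against the DRCluster CRD installed in your environment before relying on it.

oc edit drcluster <drcluster_name>
# under spec:, add or set the fencing state, for example:
#   spec:
#     clusterFence: Fenced

Save and exit; the Caution below applies as soon as the change is reconciled.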
Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . When the Failover application modal is shown, verify the details presented are correct and check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the ongoing activities associated with the policy in use with the application. 3.14. Relocating Subscription-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console -> Infrastructure -> Clusters -> Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. 
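For the Pod health check mentioned above, any query that filters out Running and Completed Pods works; one possible sketch:

oc get pods -A | egrep -v 'Running|Completed'

An empty result (zero Pods listed) means it is safe to continue.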
Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage -> Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting. Check the status of the Relocation readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for relocation to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.15. Relocating an ApplicationSet-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console -> Infrastructure -> Clusters -> Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. 
Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage -> Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the relocation status associated with the policy in use with the application. 3.16. Disaster recovery protection for discovered applications Red Hat OpenShift Data Foundation now provides disaster recovery (DR) protection and support for workloads that are deployed in one of the managed clusters directly without using Red Hat Advanced Cluster Management (RHACM). These workloads are called discovered applications. The workloads that are deployed using RHACM are now called managed applications. When a workload is deployed directly on one of the managed clusters without using RHACM, then those workloads are called discovered applications. Though these workload details can be seen on the RHACM console, the application lifecycle (create, delete, edit) is not managed by RHACM. 3.16.1. Prerequisites for disaster recovery protection of discovered applications This section provides instructions to guide you through the prerequisites for protecting discovered applications. This includes tasks such as assigning a data policy and initiating DR actions such as failover and relocate. Ensure that all the DR configurations have been installed on the Primary managed cluster and the Secondary managed cluster. Install the OADP 1.4 operator. Note Any version before OADP 1.4 will not work for protecting discovered applications. On the Primary and Secondary managed cluster , navigate to OperatorHub and use the keyword filter to search for OADP . Click the OADP tile. Keep all default settings and click Install . Ensure that the operator resources are installed in the openshift-adp project. 
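A quick way to confirm this, shown as a sketch (exact Pod and ClusterServiceVersion names vary with the OADP version):

# run on each managed cluster
oc get csv,pods -n openshift-adp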
Note If OADP 1.4 is installed after DR configuration has been completed then the ramen-dr-cluster-operator pods on the Primary managed cluster and the Secondary managed cluster in namespace openshift-dr-system must be restarted (deleted and recreated). [Optional] Add CACertificates to ramen-hub-operator-config ConfigMap . Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. If you are using self-signed certificates, then you have already created a ConfigMap named user-ca-bundle in the openshift-config namespace and added this ConfigMap to the default Proxy cluster resource. Find the encoded value for the CACertificates. Add this base64 encoded value to the configmap ramen-hub-operator-config on the Hub cluster. Example below shows where to add CACertificates. Verify that there are DR secrets created in the OADP operator default namespace openshift-adp on the Primary managed cluster and the Secondary managed cluster . The DR secrets that were created when the first DRPolicy was created, will be similar to the secrets below. The DR secret name is preceded with the letter v . Note There will be one DR created secret for each managed cluster in the openshift-adp namespace. Verify if the Data Protection Application (DPA) is already installed on each managed cluster in the OADP namespace openshift-adp . If not already created then follow the step to create this resource. Create the DPA by copying the following YAML definition content to dpa.yaml . Create the DPA resource. Verify that the OADP resources are created and are in Running state. 3.16.2. Creating a sample discovered application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate for discovered applications, you need a sample application that is installed without using the RHACM create application capability. Procedure Log in to the Primary managed cluster and clone the sample application repository. Verify that you are on the main branch. The correct directory should be used when creating the sample application based on your scenario, metro or regional. Note Only applications using CephRBD or block volumes are supported for discovered applications. Create a project named busybox-discovered on both the Primary and Secondary managed clusters . Create the busybox application on the Primary managed cluster . This sample application example is for Metro-DR using a block (Ceph RBD) volume. Note OpenShift Data Foundation Disaster Recovery solution now extends protection to discovered applications that span across multiple namespaces. Verify that busybox is running in the correct project on the Primary managed cluster . 3.16.3. Enrolling a sample discovered application for disaster recovery protection This section guides you on how to apply an existing DR Policy to a discovered application from the Protected applications tab. Prerequisites Ensure that Disaster Recovery has been configured and that at least one DR Policy has been created. Procedure On RHACM console, navigate to Disaster recovery -> Protected applications tab. Click Enroll application to start configuring existing applications for DR protection. 
Select ACM discovered applications . In the Namespace page, choose the DR cluster which is the name of the Primary managed cluster where busybox is installed. Select namespace where the application is installed. For example, busybox-discovered . Note If you have workload spread across multiple namespaces then you can select all of those namespaces to DR protect. Choose a unique Name , for example busybox-rbd , for the discovered application and click . In the Configuration page, select either Resource label or Recipe . Resource label is used to protect your resources where you can set which resources will be included in the kubernetes-object backup and what volume's persistent data will be replicated. If you selected Resource label , provide label expressions and PVC label selector. Choose the label appname=busybox for both the kubernetes-objects and for the PVC(s) . If you selected Recipe , then from the Recipe list select the name of the recipe. Important The recipe resource must be created in the application namespace on both managed clusters before enrolling an application for disaster recovery. Click . In the Replication page, select an existing DR Policy and the kubernetes-objects backup interval . Note It is recommended to choose the same duration for the PVC data replication and kubernetes-object backup interval (i.e., 5 minutes). Click . Review the configuration and click Save . Use the Back button to go back to the screen to correct any issues. Verify that the Application volumes (PVCs) and the Kubernetes-objects backup have a Healthy status before proceeding to DR Failover and Relocate testing. You can view the status of your Discovered applications on the Protected applications tab. To see the status of the DRPC, run the following command on the Hub cluster: The discovered applications store resources such as DRPlacementControl (DRPC) and Placement on the Hub cluster in a new namespace called openshift-dr-ops . The DRPC name can be identified by the unique Name configured in prior steps (i.e., busybox-rbd ). To see the status of the VolumeReplicationGroup (VRG) for discovered applications, run the following command on the managed cluster where the busybox application was manually installed. The VRG resource is stored in the namespace openshift-dr-ops after a DR Policy is assigned to the discovered application. The VRG name can be identified by the unique Name configured in prior steps (i.e., busybox-rbd ). 3.16.4. Discovered application failover and relocate A protected Discovered application can Failover or Relocate to its peer cluster similar to managed applications . However, there are some additional steps for discovered applications since RHACM does not manage the lifecycle of the application as it does for Managed applications. This section guides you through the Failover and Relocate process for a protected discovered application. Important Never initiate a Failover or Relocate of an application when one or both resource types are in a Warning or Critical status. 3.16.4.1. Failover disaster recovery protected discovered application This section guides you on how to failover a discovered application which is disaster recovery protected. Prerequisites Ensure that the application namespace is created in both managed clusters (for example, busybox-discovered ). Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. 
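The edit is the same clusterFence change sketched earlier for Subscription-based failover. To confirm the result afterwards from the Hub cluster, one possible check is a sketch that simply searches the resource for fencing-related fields rather than assuming an exact status path:

oc get drcluster <drcluster_name> -o yaml | grep -i fence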
Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output In the RHACM console, navigate to Disaster Recovery -> Protected applications tab. At the end of the application row, click on the Actions menu and choose to initiate Failover . In the Failover application modal window, review the status of the application and the target cluster. Click Initiate . Wait for the Failover process to complete. Verify that the busybox application is running on the Secondary managed cluster . Check the progression status of Failover until the result is WaitOnUserToCleanup . The DRPC name can be identified by the unique Name configured in prior steps (for example, busybox-rbd ). Remove the busybox application from the Primary managed cluster to complete the Failover process. Navigate to the Protected applications tab. You will see a message to remove the application. Navigate to the cloned repository for busybox and run the following commands on the Primary managed cluster where you failed over from. Use the same directory that was used to create the application (for example, odr-metro-rbd ). After deleting the application, navigate to the Protected applications tab and verify that the busybox resources are both in Healthy status. 3.16.4.2. Relocate disaster recovery protected discovered application This section guides you on how to relocate a discovered application which is disaster recovery protected. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage -> Data Foundation . Verify that the Unfenced cluster is in a healthy state. 
Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Login to your Ceph cluster and verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. In the RHACM console, navigate to Disaster Recovery -> Protected applications tab. At the end of the application row, click on the Actions menu and choose to initiate Relocate . In the Relocate application modal window, review the status of the application and the target cluster. Click Initiate . Check the progression status of Relocate until the result is WaitOnUserToCleanup . The DRPC name can be identified by the unique Name configured in prior steps (for example, busybox-rbd ). Remove the busybox application from the Secondary managed cluster before Relocate to the Primary managed cluster is completed. Navigate to the cloned repository for busybox and run the following commands on the Secondary managed cluster where you relocated from. Use the same directory that was used to create the application (for example, odr-metro-rbd ). After deleting the application, navigate to the Protected applications tab and verify that the busybox resources are both in Healthy status. Verify that the busybox application is running on the Primary managed cluster . 3.16.5. Disable disaster recovery for protected applications This section guides you to disable disaster recovery resources when you want to delete the protected applications or when the application no longer needs to be protected. Procedure Login to the Hub cluster. List the DRPlacementControl (DRPC) resources. Each DRPC resource was created when the application was assigned a DR policy. Find the DRPC that has a name that includes the unique identifier that you chose when assigning a DR policy (for example, busybox-rbd ) and delete the DRPC. List the Placement resources. Each Placement resource was created when the application was assigned a DR policy. Find the Placement that has a name that includes the unique identifier that you chose when assigning a DR policy (for example, busybox-rbd-placement-1 ) and delete the Placement . 3.17. Recovering to a replacement cluster with Metro-DR When there is a failure with the primary cluster, you get the options to either repair, wait for the recovery of the existing cluster, or replace the cluster entirely if the cluster is irredeemable. This solution guides you when replacing a failed primary cluster with a new cluster and enables failback (relocate) to this new cluster. In these instructions, we are assuming that a RHACM managed cluster must be replaced after the applications have been installed and protected. For purposes of this section, the RHACM managed cluster is the replacement cluster , while the cluster that is not replaced is the surviving cluster and the new cluster is the recovery cluster . Note Replacement cluster recovery for Discovered applications is currently not supported. Only Managed applications are supported. Prerequisite Ensure that the Metro-DR environment has been configured with applications installed using Red Hat Advance Cluster Management (RHACM). Ensure that the applications are assigned a Data policy which protects them against cluster failure. 
Procedure Perform the following steps on the Hub cluster : Fence the replacement cluster by using the CLI terminal to edit the DRCluster resource, where <drcluster_name> is the replacement cluster name. Using the RHACM console, navigate to Applications and failover all protected applications from the failed cluster to the surviving cluster. Verify and ensure that all protected applications are now running on the surviving cluster. Note The PROGRESSION state for each application DRPlacementControl will show as Cleaning Up . This is expected if the replacement cluster is offline or down. Unfence the replacement cluster. Using the CLI terminal, edit the DRCluster resource, where <drcluster_name> is the replacement cluster name. Delete the DRCluster for the replacement cluster. Note Use --wait=false since the DRCluster will not be deleted until a later step. Disable disaster recovery on the Hub cluster for each protected application on the surviving cluster. For each application, edit the Placement and ensure that the surviving cluster is selected. Note For Subscription-based applications the associated Placement can be found in the same namespace on the hub cluster similar to the managed clusters. For ApplicationSets-based applications the associated Placement can be found in the openshift-gitops namespace on the hub cluster. Verify that the s3Profile is removed for the replacement cluster by running the following command on the surviving cluster for each protected application's VolumeReplicationGroup. After the protected application Placement resources are all configured to use the surviving cluster and replacement cluster s3Profile(s) removed from protected applications, all DRPlacementControl resources must be deleted from the Hub cluster . Note For Subscription-based applications the associated DRPlacementControl can be found in the same namespace as the managed clusters on the hub cluster. For ApplicationSets-based applications the associated DRPlacementControl can be found in the openshift-gitops namespace on the hub cluster. Verify that all DRPlacementControl resources are deleted before proceeding to the step. This command is a query across all namespaces. There should be no resources found. The last step is to edit each applications Placement and remove the annotation cluster.open-cluster-management.io/experimental-scheduling-disable: "true" . Repeat the process detailed in the last step and the sub-steps for every protected application on the surviving cluster. Disabling DR for protected applications is now completed. On the Hub cluster, run the following script to remove all disaster recovery configurations from the surviving cluster and the hub cluster . Note This script used the command oc delete project openshift-operators to remove the Disaster Recovery (DR) operators in this namespace on the hub cluster. If there are other non-DR operators in this namespace, you must install them again from OperatorHub. After the namespace openshift-operators is automatically created again, add the monitoring label back for collecting the disaster recovery metrics. On the surviving cluster, ensure that the object bucket created during the DR installation is deleted. Delete the object bucket if it was not removed by script. The name of the object bucket used for DR starts with odrbucket . On the RHACM console, navigate to Infrastructure -> Clusters view . Detach the replacement cluster. Create a new OpenShift cluster (recovery cluster) and import the new cluster into the RHACM console. 
For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Install OpenShift Data Foundation operator on the recovery cluster and connect it to the same external Ceph storage system as the surviving cluster. For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode . Note Ensure that the OpenShift Data Foundation version is 4.15 (or greater) and the same version of OpenShift Data Foundation is on the surviving cluster. On the hub cluster, install the ODF Multicluster Orchestrator operator from OperatorHub. For instructions, see chapter on Installing OpenShift Data Foundation Multicluster Orchestrator operator . Using the RHACM console, navigate to Data Services -> Data policies . Select Create DRPolicy and name your policy. Select the recovery cluster and the surviving cluster . Create the policy. For instructions see chapter on Creating Disaster Recovery Policy on Hub cluster . Proceed to the step only after the status of DRPolicy changes to Validated . Apply the DRPolicy to the applications on the surviving cluster that were originally protected before the replacement cluster failed. Relocate the newly protected applications on the surviving cluster back to the new recovery (primary) cluster. Using the RHACM console, navigate to the Applications menu to perform the relocation. 3.18. Hub recovery using Red Hat Advanced Cluster Management [Technology preview] When your setup has active and passive Red Hat Advanced Cluster Management for Kubernetes (RHACM) hub clusters, and in case where the active hub is down, you can use the passive hub to failover or relocate the disaster recovery protected workloads. Important Hub recovery for Metro-DR is a Technology Preview feature and is subject to Technology Preview support limitations. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 3.18.1. Configuring passive hub cluster To perform hub recovery in case the active hub is down or unreachable, follow the procedure in this section to configure the passive hub cluster and then failover or relocate the disaster recovery protected workloads. Procedure Ensure that RHACM operator and MultiClusterHub is installed on the passive hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Before hub recovery, configure backup and restore. See Backup and restore topics of RHACM Business continuity guide. Install the multicluster orchestrator (MCO) operator along with Red Hat OpenShift GitOps operator on the passive RHACM hub prior to the restore. For instructions to restore your RHACM hub, see Installing OpenShift Data Foundation Multicluster Orchestrator operator . Ensure that .spec.cleanupBeforeRestore is set to None for the Restore.cluster.open-cluster-management.io resource. For details, see Restoring passive resources while checking for backups chapter of RHACM documentation. 
If SSL access across clusters was configured manually during setup, then re-configure SSL access across clusters. For instructions, see Configuring SSL access across clusters chapter. On the passive hub, add a label to the openshift-operators namespace to enable basic monitoring of the VolumeSynchronizationDelay alert using this command. For alert details, see Disaster recovery alerts chapter. 3.18.2. Switching to passive hub cluster Use this procedure when the active hub is down or unreachable. Procedure During the restore procedure, to avoid eviction of resources when ManifestWorks are not regenerated correctly, you can enlarge the AppliedManifestWork eviction grace period. On the passive hub cluster, check for an existing global KlusterletConfig . If the global KlusterletConfig exists, then edit it and set the value for the appliedManifestWorkEvictionGracePeriod parameter to a larger value. For example, 24 hours or more. If the global KlusterletConfig does not exist, then create the KlusterletConfig using the following YAML: The configuration will be propagated to all the managed clusters automatically. Restore the backups on the passive hub cluster. For information, see Restoring a hub cluster from backup. Important Recovering a failed hub to its passive instance will only restore applications and their DR protected state to the last scheduled backup. Any application that was DR protected after the last scheduled backup would need to be protected again on the new hub. Verify that the restore is complete. Verify that the Primary and Secondary managed clusters are successfully imported into the RHACM console and that they are accessible. If any of the managed clusters are down or unreachable then they will not be successfully imported. Wait until DRPolicy validation succeeds. Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with a unique name. Example output: Refresh the RHACM console to make the DR monitoring dashboard tab accessible if it was enabled on the Active hub cluster. Once all components are recovered, edit the global KlusterletConfig on the new hub and remove the parameter appliedManifestWorkEvictionGracePeriod and its value. If only the active hub cluster is down, restore the hub by performing hub recovery, and restoring the backups on the passive hub. If the managed clusters are still accessible, no further action is required. If the primary managed cluster is down, along with the active hub cluster, you need to fail over the workloads from the primary managed cluster to the secondary managed cluster. For failover instructions, based on your workload type, see Subscription-based applications or ApplicationSet-based applications . Verify that the failover is successful. If the Primary managed cluster is also down, then the PROGRESSION status for the workload would be in Cleaning Up phase until the down Primary managed cluster is back online and successfully imported into the RHACM console. On the passive hub cluster, run the following command to check the PROGRESSION status. Chapter 4. Regional-DR solution for OpenShift Data Foundation This section of the guide provides details of the Regional Disaster Recovery (Regional-DR) steps and commands necessary to be able to failover an application from one OpenShift Container Platform cluster to another and then failback the same application to the original primary cluster.
In this case the OpenShift Container Platform clusters will be created or imported using Red Hat Advanced Cluster Management (RHACM). The Regional-DR solution is built on asynchronous data replication and hence could incur potential data loss, but it provides protection against a broad set of failures. This is a general overview of the Regional-DR steps required to configure and execute OpenShift Disaster Recovery (ODR) capabilities using OpenShift Data Foundation and RHACM across two distinct OpenShift Container Platform clusters separated by distance. In addition to these two clusters called managed clusters, a third OpenShift Container Platform cluster is required that will be the Red Hat Advanced Cluster Management (RHACM) hub cluster. Important You can now easily set up Regional disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . 4.1. Components of Regional-DR solution Regional-DR is composed of Red Hat Advanced Cluster Management for Kubernetes and OpenShift Data Foundation components to provide application and data mobility across Red Hat OpenShift Container Platform clusters. Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment. RHACM is split into two parts: RHACM Hub: components that run on the multi-cluster control plane. Managed clusters: components that run on the clusters that are managed. For more information about this product, see RHACM documentation and the RHACM "Manage Applications" documentation . OpenShift Data Foundation OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack. Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications. The OpenShift Data Foundation stack is now enhanced with the following abilities for disaster recovery: Enables RBD block pools for mirroring across OpenShift Data Foundation instances (clusters) Mirrors specific images within an RBD block pool Provides csi-addons to manage per Persistent Volume Claim (PVC) mirroring OpenShift DR OpenShift DR is a set of orchestrators to configure and manage stateful applications across a set of peer OpenShift clusters which are managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application's state on Persistent Volumes. These include: Protecting an application and its state relationship across OpenShift clusters Failing over an application and its state to a peer cluster Relocating an application and its state to the previously deployed cluster OpenShift DR is split into three components: ODF Multicluster Orchestrator : Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships OpenShift DR Hub Operator : Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications.
OpenShift DR Cluster Operator : Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application. 4.2. Regional-DR deployment workflow This section provides an overview of the steps required to configure and deploy Regional-DR capabilities using the latest version of Red Hat OpenShift Data Foundation across two distinct OpenShift Container Platform clusters. In addition to two managed clusters, a third OpenShift Container Platform cluster will be required to deploy Red Hat Advanced Cluster Management (RHACM). To configure your infrastructure, perform the following steps in the order given: Ensure that the requirements across the three OpenShift Container Platform clusters that are part of the DR solution (Hub, Primary, and Secondary) are met. See Requirements for enabling Regional-DR . Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Creating OpenShift Data Foundation cluster on managed clusters . Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster . Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters . Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster . Note There can be more than a single policy. Testing your disaster recovery solution with: Subscription-based application: Create Subscription-based applications. See Creating sample application . Test failover and relocate operations using the sample subscription-based application between managed clusters. See Subscription-based application failover and relocating subscription-based application . ApplicationSet-based application: Create sample applications. See Creating ApplicationSet-based applications . Test failover and relocate operations using the sample application between managed clusters. See ApplicationSet-based application failover and relocating ApplicationSet-based application . Discovered applications Ensure all requirements mentioned in Prerequisites are addressed. See Prerequisites for disaster recovery protection of discovered applications Create a sample discovered application. See Creating a sample discovered application Enroll the discovered application. See Enrolling a sample discovered application for disaster recovery protection Test failover and relocate. See Discovered application failover and relocate 4.3. Requirements for enabling Regional-DR The prerequisites to installing a disaster recovery solution supported by Red Hat OpenShift Data Foundation are as follows: You must have three OpenShift clusters that have network reachability between them: Hub cluster where Red Hat Advanced Cluster Management (RHACM) for Kubernetes operator is installed. Primary managed cluster where OpenShift Data Foundation is running. Secondary managed cluster where OpenShift Data Foundation is running. Note For configuring hub recovery setup, you need a fourth cluster which acts as the passive hub. The primary managed cluster (Site-1) can be co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2).
Alternatively, the active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. For more information, see Configuring passive hub cluster for hub recovery . Ensure that the RHACM operator and MultiClusterHub are installed on the Hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Important Ensure that application traffic routing and redirection are configured appropriately. On the Hub cluster , navigate to All Clusters -> Infrastructure -> Clusters . Import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Connect the private OpenShift cluster and service networks using the RHACM Submariner add-ons. Verify that the two clusters have non-overlapping service and cluster private networks. Otherwise, ensure that Globalnet is enabled during the Submariner add-ons installation. Run the following command for each of the managed clusters to determine if Globalnet needs to be enabled. The following example is for non-overlapping cluster and service networks so Globalnet would not be enabled. Example output for Primary cluster: Example output for Secondary cluster: Additionally, if Submariner and OpenShift Data Foundation are already installed on the managed clusters, use the OpenShift Data Foundation command line interface (CLI) tool to get additional information about the clusters. This information can determine the need for enabling Globalnet during Submariner installation based on the service and cluster private networks of the clusters. Download the OpenShift Data Foundation CLI tool from the customer portal . Run the following command on one of the two managed clusters, where PeerManagedClusterName (ClusterID) is the name of the peer OpenShift Data Foundation cluster: If Submariner is not installed with Globalnet on clusters with overlapping services, the following output shows: Note This requires Submariner to be uninstalled and then reinstalled with Globalnet enabled. If Submariner is installed with Globalnet on clusters with overlapping services, the following output shows: If Submariner is installed without Globalnet on clusters with non-overlapping services, the following output shows: If Submariner is installed with Globalnet on clusters with non-overlapping services, the following output shows: For more information, see Submariner documentation . 4.4. Creating an OpenShift Data Foundation cluster on managed clusters In order to configure storage replication between the two OpenShift Container Platform clusters, create an OpenShift Data Foundation storage system after you install the OpenShift Data Foundation operator. Note Refer to OpenShift Data Foundation deployment guides and instructions that are specific to your infrastructure (AWS, VMware, BM, Azure, etc.). Procedure Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters.
For information about the OpenShift Data Foundation deployment, refer to your infrastructure specific deployment guides (for example, AWS, VMware, Bare metal, Azure). Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command: For the Multicloud Gateway (MCG): If the status result is Ready for both queries on the Primary managed cluster and the Secondary managed cluster , then continue with the next step. In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources and verify that the Status of StorageCluster is Ready and has a green tick mark next to it. [Optional] If Globalnet was enabled when Submariner was installed, then edit the StorageCluster after the OpenShift Data Foundation install finishes. For Globalnet networks, manually edit the StorageCluster YAML to add the clusterID and set enabled to true . Replace <clustername> with your RHACM imported or newly created managed cluster name. Edit the StorageCluster on both the Primary managed cluster and the Secondary managed cluster. Warning Do not make this change in the StorageCluster unless you enabled Globalnet when Submariner was installed. After the above changes are made, wait for the OSD pods to restart and OSD services to be created. Wait for all MONs to fail over. Ensure that the MON and OSD services are exported. Ensure that the cluster is in a Ready state and cluster health has a green tick indicating Health ok . Verify using step 3. 4.5. Installing OpenShift Data Foundation Multicluster Orchestrator operator OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform's OperatorHub on the Hub cluster. Procedure On the Hub cluster , navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator . Click the ODF Multicluster Orchestrator tile. Keep all default settings and click Install . Ensure that the operator resources are installed in the openshift-operators project and available to all namespaces. Note The ODF Multicluster Orchestrator also installs the OpenShift DR Hub Operator on the RHACM hub cluster as a dependency. Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in the openshift-operators namespace. Example output: 4.6. Configuring SSL access across clusters Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. Procedure Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt . Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt . Create a new ConfigMap file to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml . Note There could be more or fewer than three certificates for each cluster as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before.
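The ingress certificate extraction and the cm-clusters-crt.yaml skeleton can look roughly like the following. This is a hedged sketch: the jsonpath expression and the ConfigMap name and key shown here are common defaults that should be verified against your clusters, and the PEM blocks pasted from primary.crt and secondary.crt must keep their indentation.

# Run against the Primary and Secondary managed clusters respectively:
$ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > primary.crt
$ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > secondary.crt

# cm-clusters-crt.yaml (skeleton; paste the certificate contents under ca-bundle.crt):
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <contents of primary.crt>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <contents of secondary.crt>
    -----END CERTIFICATE-----

# The next steps in the procedure create this ConfigMap on both managed clusters and the Hub cluster, and then patch the default Proxy resource, for example:
$ oc patch proxy cluster --type=merge --patch='{"spec":{"trustedCA":{"name":"user-ca-bundle"}}}'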
Create the ConfigMap on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: Patch the default proxy resource on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: 4.7. Creating Disaster Recovery Policy on Hub cluster The OpenShift Disaster Recovery Policy (DRPolicy) resource specifies OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster scoped resource that users can apply to applications that require a Disaster Recovery solution. The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console . Prerequisites Ensure that there is a minimum set of two managed clusters. Procedure On the OpenShift console , navigate to All Clusters -> Data Services -> Disaster recovery . On the Overview tab, click Create a disaster recovery policy or you can navigate to the Policies tab and click Create DRPolicy . Enter Policy name . Ensure that each DRPolicy has a unique name (for example: ocp4bos1-ocp4bos2-5m ). Select two clusters from the list of managed clusters with which this new policy will be associated. Replication policy is automatically set to Asynchronous based on the OpenShift clusters selected and a Sync schedule option will become available. Set Sync schedule . Important For every desired replication interval a new DRPolicy must be created with a unique name (such as: ocp4bos1-ocp4bos2-10m ). The same clusters can be selected but the Sync schedule can be configured with a different replication interval in minutes/hours/days. The minimum is one minute. Optional : Expand Advanced settings and check the Enable disaster recovery support for restored and cloned PersistentVolumeClaims (For Data Foundation only) checkbox. See Disaster recovery protection for cloned and restored RBD volumes for more details. Note This option should only be used with discovered applications. Click Create . Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with your unique name. Example output: When a DRPolicy is created, along with it, two DRCluster resources are also created. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded . Note Editing of SchedulingInterval , ReplicationClassSelector , VolumeSnapshotClassSelector and DRClusters field values is not supported in the DRPolicy. Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster . Get the names of the DRClusters on the Hub cluster. Example output: Check S3 access to each bucket created on each managed cluster. Use the DRCluster validation command, where <drcluster_name> is replaced with your unique name. Note Editing of Region and S3ProfileName field values is not supported in DRClusters. Example output: Note Make sure to run commands for both DRClusters on the Hub cluster . Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster . Example output: You can also verify that OpenShift DR Cluster Operator is installed successfully on the OperatorHub of each managed cluster. Note On the initial run, the VolSync operator is installed automatically.
VolSync is used to set up volume replication between two clusters to protect CephFS-based PVCs. The replication feature is enabled by default. Verify the status of the OpenShift Data Foundation mirroring daemon health on the Primary managed cluster and the Secondary managed cluster . Example output: Caution It could take up to 10 minutes for the daemon_health and health to go from Warning to OK . If the status does not become OK eventually, then use the RHACM console to verify that the Submariner connection between managed clusters is still in a healthy state. Do not proceed until all values are OK . 4.8. Disaster recovery protection for cloned and restored RBD volumes Cloned and restored RBD volumes are copy-on-write clones of the corresponding data sources. Such RBD volumes are still linked to the parent volume to optimize space consumption and Ceph cluster performance. In order to be able to mirror RBD images backing such volumes, they need to be separated from the parent RBD image. Selecting the advanced option in Step 7 of Creating Disaster Recovery Policy on Hub cluster instructs the CephCSI driver to perform the necessary operations on the cloned or restored RBD volumes to prepare them for Disaster Recovery protection. The necessary operation has the following consequences: Space consumption may increase up to the total capacity of all cloned and restored RBD volumes being protected. It is recommended to make sure there is enough space available in the cluster to avoid the cluster getting full. Ceph cluster resource usage increases. The time taken for the operation to complete depends on the number of RBD volumes, the size of each volume, Ceph cluster resource availability, and client IO going on the RBD volume. Therefore, the amount of time required for completion of the operation cannot be determined. 4.9. Create sample application for testing disaster recovery solution The OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for Subscription-based and ApplicationSet-based applications that are managed by RHACM. For more details, see Subscriptions and ApplicationSet documentation. The following sections detail how to create an application and apply a DRPolicy to an application. Subscription-based applications OpenShift users that do not have cluster-admin permissions should see the knowledge article on how to assign the necessary permissions to an application user for executing disaster recovery actions. ApplicationSet-based applications OpenShift users that do not have cluster-admin permissions cannot create ApplicationSet-based applications. 4.9.1. Subscription-based applications 4.9.1.1. Creating a sample Subscription-based application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate , we need a sample application. Prerequisites When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster. Use the sample application called busybox as an example. Ensure that all external routes of the application are configured using either Global Traffic Manager (GTM) or Global Server Load Balancing (GSLB) service for traffic redirection when the application fails over or is relocated. As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions that belong together so that they refer to a single Placement Rule and are DR protected as a group.
Further, create them as a single application for a logical grouping of the subscriptions for future DR actions like failover and relocate. Note If unrelated subscriptions refer to the same Placement Rule for placement actions, they are also DR protected as the DR workflow controls all subscriptions that reference the Placement Rule. Procedure On the Hub cluster, navigate to Applications and click Create application . Select type as Subscription . Enter your application Name (for example, busybox ) and Namespace (for example, busybox-sample ). In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the GitHub Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples Select Branch as release-4.17 . Choose one of the following Path options: busybox-odr to use RBD Regional-DR. busybox-odr-cephfs to use CephFS Regional-DR. Scroll down in the form until you see Deploy application resources on clusters with all specified labels . Select the global Cluster sets or the one that includes the correct managed clusters for your environment. Add a label <name> with its value set to the managed cluster name. Click Create which is at the top right-hand corner. On the follow-on screen, go to the Topology tab. You should see all green checkmarks on the application topology. Note To get more information, click on any of the topology elements and a window will appear on the right of the topology view. Validating the sample application deployment. Now that the busybox application has been deployed to your preferred Cluster, the deployment can be validated. Log in to your managed cluster where busybox was deployed by RHACM. Example output: 4.9.1.2. Apply Data policy to sample application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters -> Applications . Click the Actions menu at the end of the application row to view the list of available actions. Click Manage data policy -> Assign data policy . Select Policy and click . Select an Application resource and then use the PVC label selector to select the PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. You can also use the Add application resource option to add multiple resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. Click View more details to view the status of ongoing activities with the policy in use with the application. Optional: Verify RADOS block device (RBD) volumereplication and volumereplicationgroup on the primary cluster. Example output: Example output: Optional: Verify CephFS volsync replication source has been set up successfully in the primary cluster and VolSync ReplicationDestination has been set up in the failover cluster.
Example output: Example output: 4.9.2. ApplicationSet-based applications 4.9.2.1. Creating ApplicationSet-based applications Prerequisites Ensure that the Red Hat OpenShift GitOps operator is installed on all three clusters: Hub cluster , Primary managed cluster and Secondary managed cluster . For instructions, see Installing Red Hat OpenShift GitOps Operator in web console . On the Hub cluster, ensure that both Primary and Secondary managed clusters are registered to GitOps. For registration instructions, see Registering managed clusters to GitOps . Then check if the Placement used by the GitOpsCluster resource to register both managed clusters has the tolerations to deal with cluster unavailability. You can verify if the following tolerations are added to the Placement using the command oc get placement <placement-name> -n openshift-gitops -o yaml . In case the tolerations are not added, see Configuring application placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps . Ensure that you have created the ClusterRoleBinding YAML on both the Primary and Secondary managed clusters. For instructions, see the Prerequisites chapter in RHACM documentation . Procedure On the Hub cluster, navigate to All Clusters -> Applications and click Create application . Choose the application type as Argo CD ApplicationSet - Pull model . In the General step, enter your Application set name . Select Argo server openshift-gitops and Requeue time as 180 seconds. Click . In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the GitHub Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples Select Revision as release-4.17 . Choose one of the following Path options: busybox-odr to use RBD Regional-DR. busybox-odr-cephfs to use CephFS Regional-DR. Enter the Remote namespace value (for example, busybox-sample ) and click . Choose the Sync policy settings as per your requirement or go with the default selections, and then click . You can choose one or more options. In Label expressions, add a label <name> with its value set to the managed cluster name. Click . Review the setting details and click Submit . 4.9.2.2. Apply Data policy to sample ApplicationSet-based application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters -> Applications . Click the Actions menu at the end of the application row to view the list of available actions. Click Manage data policy -> Assign data policy . Select Policy and click . Select an Application resource and then use the PVC label selector to select the PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status.
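In addition to the console view, the data policy assignment can be checked from the CLI. This is a hedged sketch; the drpc short name and the volumereplicationgroup resource are assumptions based on the OpenShift DR operators, so adjust the commands if your versions differ.

# On the Hub cluster: list the DRPlacementControl resources created by the policy assignment
$ oc get drpc -A
# On the Primary managed cluster: confirm that a VolumeReplicationGroup exists for the application namespace
$ oc get volumereplicationgroups -A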
Optional: Verify RADOS block device (RBD) volumereplication and volumereplicationgroup on the primary cluster. Example output: Example output: Optional: Verify CephFS volsync replication source has been set up successfully in the primary cluster and VolSync ReplicationDestination has been set up in the failover cluster. Example output: Example output: 4.9.3. Deleting sample application This section provides instructions for deleting the sample application busybox using the RHACM console. Important When deleting a DR protected application, access to both clusters that belong to the DRPolicy is required. This is to ensure that all protected API resources and resources in the respective S3 stores are cleaned up as part of removing the DR protection. If access to one of the clusters is not healthy, the DRPlacementControl resource for the application on the hub would remain in the Deleting state when deleted. Prerequisites These instructions to delete the sample application should not be executed until the failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters. Procedure On the RHACM console, navigate to Applications . Search for the sample application to be deleted (for example, busybox ). Click the Action Menu (...) next to the application you want to delete. Click Delete application . When Delete application is selected, a new screen will appear asking if the application related resources should also be deleted. Select the Remove application related resources checkbox to delete the Subscription and PlacementRule. Click Delete . This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on). In addition to the resources deleted using the RHACM console, delete the DRPlacementControl if it is not auto-deleted after deleting the busybox application. Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample . For ApplicationSet applications, select the project as openshift-gitops . Click OpenShift DR Hub Operator and then click the DRPlacementControl tab. Click the Action Menu (...) next to the busybox application DRPlacementControl that you want to delete. Click Delete DRPlacementControl . Click Delete . Note This process can be used to delete any application with a DRPlacementControl resource. 4.10. Subscription-based application failover between managed clusters Failover is a process that transitions an application from a primary cluster to a secondary cluster in the event of a primary cluster failure. While failover provides the ability for the application to run on the secondary cluster with minimal interruption, making an uninformed failover decision can have adverse consequences, such as complete data loss in the event of unnoticed replication failure from the primary to the secondary cluster. If a significant amount of time has gone by since the last successful replication, it's best to wait until the failed primary is recovered. LastGroupSyncTime is a critical metric that reflects the time since the last successful replication occurred for all PVCs associated with an application. In essence, it measures the synchronization health between the primary and secondary clusters. So, prior to initiating a failover from one cluster to another, check for this metric and only initiate the failover if the LastGroupSyncTime is within a reasonable time in the past.
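One way to check the LastGroupSyncTime discussed above before initiating a failover is sketched below. This is a hedged example: the status field name is taken from the DRPlacementControl status and the command is run on the Hub cluster; verify the field against your version of the OpenShift DR Hub Operator.

$ oc get drpc -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.lastGroupSyncTime}{"\n"}{end}'
# Compare the reported times (UTC) with the current time; only proceed if they are within your acceptable data loss window.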
Note During the course of failover the Ceph-RBD mirror deployment on the failover cluster is scaled down to ensure a clean failover for volumes that are backed by Ceph-RBD as the storage provisioner. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console -> Infrastructure -> Clusters -> Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Run the following command on the Hub Cluster to check if lastGroupSyncTime is within an acceptable data loss window, when compared to current time. Example output: Procedure On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . After the Failover application modal is shown, select policy and target cluster to which the associated application will failover in case of a disaster. Click the Select subscription group dropdown to verify the default selection or modify this setting. By default, the subscription group that replicates for the application resources is selected. Check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Important If there are data inconsistencies caused by synchronization delays, a warning message appears stating Inconsistent data on target cluster . This alerts to the possibility of data loss if the failover is initiated. The message is no longer displayed when data synchronization is complete. Click Initiate . The busybox application is now failing over to the Secondary-managed cluster . Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. Verify that you can see one or more policy names and the ongoing activities (Last sync time and Activity status) associated with the policy in use with the application. 4.11. ApplicationSet-based application failover between managed clusters Failover is a process that transitions an application from a primary cluster to a secondary cluster in the event of a primary cluster failure. While failover provides the ability for the application to run on the secondary cluster with minimal interruption, making an uninformed failover decision can have adverse consequences, such as complete data loss in the event of unnoticed replication failure from primary to secondary cluster. If a significant amount of time has gone by since the last successful replication, it's best to wait until the failed primary is recovered. LastGroupSyncTime is a critical metric that reflects the time since the last successful replication occurred for all PVCs associated with an application. In essence, it measures the synchronization health between the primary and secondary clusters. 
So, prior to initiating a failover from one cluster to another, check for this metric and only initiate the failover if the LastGroupSyncTime is within a reasonable time in the past. Note During the course of failover the Ceph-RBD mirror deployment on the failover cluster is scaled down to ensure a clean failover for volumes that are backed by Ceph-RBD as the storage provisioner. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console -> Infrastructure -> Clusters -> Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Run the following command on the Hub Cluster to check if lastGroupSyncTime is within an acceptable data loss window, when compared to current time. Example output: Procedure On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . When the Failover application modal is shown, verify the details presented are correct and check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Important If there are data inconsistencies caused by synchronization delays, a warning message appears stating Inconsistent data on target cluster . This alerts to the possibility of data loss if the failover is initiated. The message is no longer displayed when data synchronization is complete. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the ongoing activities associated with the policy in use with the application. 4.12. Relocating Subscription-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console -> Infrastructure -> Clusters -> Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Perform relocate when lastGroupSyncTime is within the replication interval (for example, 5 minutes) when compared to current time. This is recommended to minimize the Recovery Time Objective (RTO) for any single application. Run this command on the Hub Cluster: Example output: Compare the output time (UTC) to current time to validate that all lastGroupSyncTime values are within their application replication interval. 
If not, wait to Relocate until this is true for all lastGroupSyncTime values. Procedure On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting. Check the status of the Relocation readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for relocation to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Important If there are data inconsistencies caused by synchronization delays, a warning message appears stating Inconsistent data on target cluster . This alerts to the possibility of data loss if the relocate is initiated. The message is no longer displayed when data synchronization is complete. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. Verify that you can see one or more policy names and the ongoing activities (Last sync time and Activity status) associated with the policy in use with the application. 4.13. Relocating an ApplicationSet-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console -> Infrastructure -> Clusters -> Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Perform relocate when lastGroupSyncTime is within the replication interval (for example, 5 minutes) when compared to current time. This is recommended to minimize the Recovery Time Objective (RTO) for any single application. Run this command on the Hub Cluster: Example output: Compare the output time (UTC) to current time to validate that all lastGroupSyncTime values are within their application replication interval. If not, wait to Relocate until this is true for all lastGroupSyncTime values. Procedure On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. Important If there are data inconsistencies caused by synchronization delays, a warning message appears stating Inconsistent data on target cluster . This alerts to the possibility of data loss if the relocate is initiated. 
The message is no longer displayed when data synchronization is complete. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the relocation status associated with the policy in use with the application. 4.14. Disaster recovery protection for discovered applications Red Hat OpenShift Data Foundation now provides disaster recovery (DR) protection and support for workloads that are deployed in one of the managed clusters directly without using Red Hat Advanced Cluster Management (RHACM). These workloads are called discovered applications. Workloads deployed using RHACM are termed managed applications, while those deployed directly on one of the managed clusters without using RHACM are called discovered applications. Although RHACM displays the details of both types of workloads, it does not manage the lifecycle (create, delete, edit) of discovered applications. 4.14.1. Prerequisites for disaster recovery protection of discovered applications This section provides instructions to guide you through the prerequisites for protecting discovered applications. This includes tasks such as assigning a data policy and initiating DR actions such as failover and relocate. Ensure that all the DR configurations have been installed on the Primary managed cluster and the Secondary managed cluster. Install the OADP 1.4 operator. Note Any version before OADP 1.4 will not work for protecting discovered applications. On the Primary and Secondary managed cluster , navigate to OperatorHub and use the keyword filter to search for OADP . Click the OADP tile. Keep all default settings and click Install . Ensure that the operator resources are installed in the openshift-adp project. Note If OADP 1.4 is installed after DR configuration has been completed then the ramen-dr-cluster-operator pods on the Primary managed cluster and the Secondary managed cluster in namespace openshift-dr-system must be restarted (deleted and recreated). [Optional] Add CACertificates to ramen-hub-operator-config ConfigMap . Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. If you are using self-signed certificates, then you have already created a ConfigMap named user-ca-bundle in the openshift-config namespace and added this ConfigMap to the default Proxy cluster resource. Find the encoded value for the CACertificates. Add this base64 encoded value to the configmap ramen-hub-operator-config on the Hub cluster. Example below shows where to add CACertificates. Verify that there are DR secrets created in the OADP operator default namespace openshift-adp on the Primary managed cluster and the Secondary managed cluster . The DR secrets that were created when the first DRPolicy was created, will be similar to the secrets below. 
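A hedged way to list these secrets is shown here; the generated names will differ in every environment, so treat this only as a sketch of the check, not as literal output.

# Run on the Primary and Secondary managed clusters:
$ oc get secrets -n openshift-adp
# Expect one generated DR secret per managed cluster; see the naming note that follows.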
The DR secret name is preceded with the letter v . Note There will be one DR created secret for each managed cluster in the openshift-adp namespace. Verify if the Data Protection Application (DPA) is already installed on each managed cluster in the OADP namespace openshift-adp . If not already created then follow the step to create this resource. Create the DPA by copying the following YAML definition content to dpa.yaml . Create the DPA resource. Verify that the OADP resources are created and are in Running state. 4.14.2. Creating a sample discovered application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate for discovered applications, you need a sample application that is installed without using the RHACM create application capability. Procedure Log in to the Primary managed cluster and clone the sample application repository. Verify that you are on the main branch. The correct directory should be used when creating the sample application based on your scenario, metro or regional. Note Only applications using CephRBD or block volumes are supported for discovered applications. Create a project named busybox-discovered on both the Primary and Secondary managed clusters . Create the busybox application on the Primary managed cluster . This sample application example is for Regional-DR using a block (Ceph RBD) volume. Note OpenShift Data Foundation Disaster Recovery solution now extends protection to discovered applications that span across multiple namespaces. Verify that busybox is running in the correct project on the Primary managed cluster . 4.14.3. Enrolling a sample discovered application for disaster recovery protection This section guides you on how to apply an existing DR Policy to a discovered application from the Protected applications tab. Prerequisites Ensure that Disaster Recovery has been configured and that at least one DR Policy has been created. Procedure On RHACM console, navigate to Disaster recovery -> Protected applications tab. Click Enroll application to start configuring existing applications for DR protection. Select ACM discovered applications . In the Namespace page, choose the DR cluster which is the name of the Primary managed cluster where busybox is installed. Select namespace where the application is installed. For example, busybox-discovered . Note If you have workload spread across multiple namespaces then you can select all of those namespaces to DR protect. Choose a unique Name , for example busybox-rbd , for the discovered application and click . In the Configuration page, select either Resource label or Recipe . Resource label is used to protect your resources where you can set which resources will be included in the kubernetes-object backup and what volume's persistent data will be replicated. If you selected Resource label , provide label expressions and PVC label selector. Choose the label appname=busybox for both the kubernetes-objects and for the PVC(s) . If you selected Recipe , then from the Recipe list select the name of the recipe. Important The recipe resource must be created in the application namespace on both managed clusters before enrolling an application for disaster recovery. Click . In the Replication page, select an existing DR Policy and the kubernetes-objects backup interval . Note It is recommended to choose the same duration for the PVC data replication and kubernetes-object backup interval (i.e., 5 minutes). Click . Review the configuration and click Save . 
Use the Back button to go back to the screen to correct any issues. Verify that the Application volumes (PVCs) and the Kubernetes-objects backup have a Healthy status before proceeding to DR Failover and Relocate testing. You can view the status of your Discovered applications on the Protected applications tab. To see the status of the DRPC, run the following command on the Hub cluster: The discovered applications store resources such as DRPlacementControl (DRPC) and Placement on the Hub cluster in a new namespace called openshift-dr-ops . The DRPC name can be identified by the unique Name configured in prior steps (i.e., busybox-rbd ). To see the status of the VolumeReplicationGroup (VRG) for discovered applications, run the following command on the managed cluster where the busybox application was manually installed. The VRG resource is stored in the namespace openshift-dr-ops after a DR Policy is assigned to the discovered application. The VRG name can be identified by the unique Name configured in prior steps (i.e., busybox-rbd ). 4.14.4. Discovered application failover and relocate A protected Discovered application can Failover or Relocate to its peer cluster similar to managed applications . However, there are some additional steps for discovered applications since RHACM does not manage the lifecycle of the application as it does for Managed applications. This section guides you through the Failover and Relocate process for a protected discovered application. Important Never initiate a Failover or Relocate of an application when one or both resource types are in a Warning or Critical status. 4.14.4.1. Failover disaster recovery protected discovered application This section guides you on how to failover a discovered application which is disaster recovery protected. Prerequisites Ensure that the application namespace is created in both managed clusters (for example, busybox-discovered ). Procedure In the RHACM console, navigate to Disaster Recovery -> Protected applications tab. At the end of the application row, click on the Actions menu and choose to initiate Failover . In the Failover application modal window, review the status of the application and the target cluster. Important If there are data inconsistencies caused by synchronization delays, a warning message appears stating Inconsistent data on target cluster . This alerts to the possibility of data loss if the failover is initiated. The message is no longer displayed when data synchronization is complete. Click Initiate . Wait for the Failover process to complete. Verify that the busybox application is running on the Secondary managed cluster . Check the progression status of Failover until the result is WaitOnUserToCleanup . The DRPC name can be identified by the unique Name configured in prior steps (for example, busybox-rbd ). Remove the busybox application from the Primary managed cluster to complete the Failover process. Navigate to the Protected applications tab. You will see a message to remove the application. Navigate to the cloned repository for busybox and run the following commands on the Primary managed cluster where you failed over from. Use the same directory that was used to create the application (for example, odr-regional-rbd ). After deleting the application, navigate to the Protected applications tab and verify that the busybox resources are both in Healthy status. 4.14.4.2. 
Relocate disaster recovery protected discovered application This section guides you on how to relocate a discovered application which is disaster recovery protected. Procedure In the RHACM console, navigate to Disaster Recovery -> Protected applications tab. At the end of the application row, click on the Actions menu and choose to initiate Relocate . In the Relocate application modal window, review the status of the application and the target cluster. Important If there are data inconsistencies caused by synchronization delays, a warning message appears stating Inconsistent data on target cluster . This alerts to the possibility of data loss if the relocate is initiated. The message is no longer displayed when data synchronization is complete. Click Initiate . Check the progression status of Relocate until the result is WaitOnUserToCleanup . The DRPC name can be identified by the unique Name configured in prior steps (for example, busybox-rbd ). Remove the busybox application from the Secondary managed cluster before Relocate to the Primary managed cluster is completed. Navigate to the cloned repository for busybox and run the following commands on the Secondary managed cluster where you relocated from. Use the same directory that was used to create the application (for example, odr-regional-rbd ). After deleting the application, navigate to the Protected applications tab and verify that the busybox resources are both in Healthy status. Verify that the busybox application is running on the Primary managed cluster . 4.14.5. Disable disaster recovery for protected applications This section guides you to disable disaster recovery resources when you want to delete the protected applications or when the application no longer needs to be protected. Procedure Login to the Hub cluster. List the DRPlacementControl (DRPC) resources. Each DRPC resource was created when the application was assigned a DR policy. Find the DRPC that has a name that includes the unique identifier that you chose when assigning a DR policy (for example, busybox-rbd ) and delete the DRPC. List the Placement resources. Each Placement resource was created when the application was assigned a DR policy. Find the Placement that has a name that includes the unique identifier that you chose when assigning a DR policy (for example, busybox-rbd-placement-1 ) and delete the Placement . 4.15. Recovering to a replacement cluster with Regional-DR When there is a failure with the primary cluster, you get the options to either repair, wait for the recovery of the existing cluster, or replace the cluster entirely if the cluster is irredeemable. This solution guides you when replacing a failed primary cluster with a new cluster and enables failback (relocate) to this new cluster. In these instructions, we are assuming that a RHACM managed cluster must be replaced after the applications have been installed and protected. For purposes of this section, the RHACM managed cluster is the replacement cluster , while the cluster that is not replaced is the surviving cluster and the new cluster is the recovery cluster . Note Replacement cluster recovery for Discovered applications is currently not supported. Only Managed applications are supported. Prerequisite Ensure that the Regional-DR environment has been configured with applications installed using Red Hat Advance Cluster Management (RHACM). Ensure that the applications are assigned a Data policy which protects them against cluster failure. 
Procedure On the Hub cluster, navigate to Applications and failover all protected applications on the failed replacement cluster to the surviving cluster. Validate that all protected applications are running on the surviving cluster before moving to the step. Note The PROGRESSION state for each application DRPlacementControl shows as Cleaning Up . This is to be expected if the replacement cluster is offline or down. From the Hub cluster, delete the DRCluster for the replacement cluster . Note Use --wait=false since the DRCluster will not be deleted until a later step. Disable disaster recovery for each protected application on the surviving cluster. Perform all the sub-steps on the hub cluster. For each application, edit the Placement and ensure that the surviving cluster is selected. Note For Subscription-based applications the associated Placement can be found in the same namespace on the hub cluster similar to the managed clusters. For ApplicationSets-based applications the associated Placement can be found in the openshift-gitops namespace on the hub cluster. Verify that the s3Profile is removed for the replacement cluster by running the following command on the surviving cluster for each protected application's VolumeReplicationGroup. Disable DR for the managed applications in the RHACM console. On the Hub cluster, navigate to All Clusters -> Applications . In the Overview tab, at the end of the protected application row from the action menu, select Manage disaster recovery . Click Remove disaster recovery . Click Confirm remove . Warning Your application will lose disaster recovery protection, preventing volume synchronization (replication) between clusters. Note The application continues to be visible in the Applications Overview menu but the Data policy is removed. Edit each applications Placement and remove the annotation cluster.open-cluster-management.io/experimental-scheduling-disable: "true" . Repeat the process detailed in the last step and the sub-steps for every protected application on the surviving cluster. Disabling DR for protected applications is now completed. on the hub cluster , run the following script to remove all disaster recovery configurations from the surviving cluster and the hub cluster . Note This script uses the command oc delete project openshift-operators to remove the Disaster Recovery (DR) operators in this namespace on the hub cluster. If there are other non-DR operators in this namespace, you must install them again from OperatorHub . After the namespace openshift-operators is automatically created again, add the monitoring label back for collecting the disaster recovery metrics. On the surviving cluster , ensure that the object bucket created during the DR installation is deleted. Delete the object bucket if it was not removed by script. The name of the object bucket used for DR starts with odrbucket . Uninstall Submariner for only the replacement cluster (failed cluster) using the RHACM console. Navigate to Infrastructure -> Clusters -> Clustersets -> Submariner add-ons view and uninstall Submariner for only the replacement cluster. Note The uninstall process of Submariner for the replacement cluster (failed cluster) will stay GREEN and not complete until the replacement cluster has been detached from the RHACM console. Navigate back to Clusters view and detach replacement cluster. Create new OpenShift cluster (recovery cluster) and import into Infrastructure -> Clusters view . Add the new recovery cluster to the Clusterset used by Submariner. 
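Before installing the Submariner add-ons, you can verify from the Hub cluster that the recovery cluster has joined the cluster set. This is a sketch; cluster.open-cluster-management.io/clusterset is the standard RHACM cluster set label, and the output should show the recovery cluster with the expected cluster set value.

# On the Hub cluster: list managed clusters with their cluster set membership
oc get managedclusters -L cluster.open-cluster-management.io/clusterset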
Install Submariner add-ons only for the new recovery cluster . Note If GlobalNet is used for the surviving cluster make sure to enable GlobalNet for the recovery cluster as well. Install OpenShift Data Foundation on the recovery cluster . The OpenShift Data Foundation version should be OpenShift Data Foundation 4.17 (or greater) and the same version of ODF as on the surviving cluster. Note Make sure to follow the optional instructions in the documentation to modify the OpenShift Data Foundation storage cluster on the recovery cluster if GlobalNet has been enabled when installing Submariner. On the Hub cluster , install the ODF Multicluster Orchestrator operator from OperatorHub. For instructions, see chapter on Installing OpenShift Data Foundation Multicluster Orchestrator operator . Using the RHACM console, navigate to Data Services -> Disaster recovery -> Policies tab. Select Create DRPolicy and name your policy. Select the recovery cluster and the surviving cluster . Create the policy. For instructions see chapter on Creating Disaster Recovery Policy on Hub cluster . Proceed to the step only after the status of DRPolicy changes to Validated . Verify that cephblockpool IDs remain unchanged. Run the following command on the recovery cluster . The result is the sample output. Run the following command on the surviving cluster . The result is the sample output. Check the RBDPoolIDMapping in the yaml for both the clusters. If RBDPoolIDMapping does not match, then edit the rook-ceph-csi-mapping-config config map of recovery cluster to add the additional or missing RBDPoolIDMapping directly as shown in the following examples. Note After editing the configmap, restart rook-ceph-operator pod in the namespace openshift-storage on the surviving cluster by deleting the pod. Apply the DRPolicy to the applications on the surviving cluster that were originally protected before the replacement cluster failed. Relocate the newly protected applications on the surviving cluster back to the new recovery cluster . Using the RHACM console, navigate to the Applications menu to perform the relocation. Note If any issues are encountered while following this process, then reach out to Red Hat Customer Support . 4.16. Viewing Recovery Point Objective values for disaster recovery enabled applications Recovery Point Objective (RPO) value is the most recent sync time of persistent data from the cluster where the application is currently active to its peer. This sync time helps determine duration of data lost during failover. Note This RPO value is applicable only for Regional-DR during failover. Relocation ensures there is no data loss during the operation, as all peer clusters are available. You can view the Recovery Point Objective (RPO) value of all the protected volumes for their workload on the Hub cluster. Procedure On the Hub cluster, navigate to Applications -> Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. A Data Policies modal page appears with the number of disaster recovery policies applied to each application along with failover and relocation status. On the Data Policies modal page, click the View more details link. A detailed Data Policies modal page is displayed that shows the policy names and the ongoing activities (Last sync, Activity status) associated with the policy that is applied to the application. 
The Last sync time reported in the modal page, represents the most recent sync time of all volumes that are DR protected for the application. 4.17. Hub recovery using Red Hat Advanced Cluster Management When your setup has active and passive Red Hat Advanced Cluster Management for Kubernetes (RHACM) hub clusters, and in case where the active hub is down, you can use the passive hub to failover or relocate the disaster recovery protected workloads. 4.17.1. Configuring passive hub cluster To perform hub recovery in case the active hub is down or unreachable, follow the procedure in this section to configure the passive hub cluster and then failover or relocate the disaster recovery protected workloads. Procedure Ensure that RHACM operator and MultiClusterHub is installed on the passive hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Before hub recovery, configure backup and restore. See Backup and restore topics of RHACM Business continuity guide. Install the multicluster orchestrator (MCO) operator along with Red Hat OpenShift GitOps operator on the passive RHACM hub prior to the restore. For instructions to restore your RHACM hub, see Installing OpenShift Data Foundation Multicluster Orchestrator operator . Ensure that .spec.cleanupBeforeRestore is set to None for the Restore.cluster.open-cluster-management.io resource. For details, see Restoring passive resources while checking for backups chapter of RHACM documentation. If SSL access across clusters was configured manually during setup, then re-configure SSL access across clusters. For instructions, see Configuring SSL access across clusters chapter. On the passive hub, add a label to openshift-operators namespace to enable basic monitoring of VolumeSyncronizationDelay alert using this command. For alert details, see Disaster recovery alerts chapter. 4.17.2. Switching to passive hub cluster Use this procedure when the active hub is down or unreachable. Procedure During the restore procedure, to avoid eviction of resources when ManifestWorks are not regenerated correctly, you can enlarge the AppliedManifestWork eviction grace period. On the passive hub cluster, check for existing global KlusterletConfig . If global KlusterletConfig exists then edit and set the value for appliedManifestWorkEvictionGracePeriod parameter to a larger value. For example, 24 hours or more. If global KlusterletConfig does not exist, then create the Klusterletconfig using the following yaml: The configuration will be propagated to all the managed clusters automatically. Restore the backups on the passive hub cluster. For information, see Restoring a hub cluster from backup. Important Recovering a failed hub to its passive instance will only restore applications and their DR protected state to its last scheduled backup. Any application that was DR protected after the last scheduled backup would need to be protected again on the new hub. Verify that the restore is complete. Verify that the Primary and Secondary managed clusters are successfully imported into the RHACM console and they are accessible. If any of the managed clusters are down or unreachable then they will not be successfully imported. Wait until DRPolicy validation succeeds before performing any DR operation. 
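One way to watch for this from the command line is to print the DRPolicy status conditions and confirm that the policy reports as validated, as sketched below; <drpolicy_name> is a placeholder for the name of your policy, and the same check applies to the DRPolicy verification described in the next step.

# On the passive hub cluster: inspect the DRPolicy status conditions
oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions}{"\n"}'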
Note Submariner is automatically installed once the managed clusters are imported on the passive hub. Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with a unique name. Example output: Refresh the RHACM console to make the DR monitoring dashboard tab accessible if it was enabled on the Active hub cluster. Once all components are recovered, edit the global KlusterletConfig on the new hub and remove the parameter appliedManifestWorkEvictionGracePeriod and its value. If only the active hub cluster is down, restore the hub by performing hub recovery, and restoring the backups on the passive hub. If the managed clusters are still accessible, no further action is required. If the primary managed cluster is down, along with the active hub cluster, you need to fail over the workloads from the primary managed cluster to the secondary managed cluster. For failover instructions, based on your workload type, see Subscription-based applications or ApplicationSet-based applications . Verify that the failover is successful. If the Primary managed cluster is also down, then the PROGRESSION status for the workload would be in the Cleaning Up phase until the down Primary managed cluster is back online and successfully imported into the RHACM console. On the passive hub cluster, run the following command to check the PROGRESSION status. Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation Red Hat OpenShift Data Foundation deployment can be stretched between two different geographical locations to provide the storage infrastructure with disaster recovery capabilities. When faced with a disaster, such as one of the two locations is partially or totally not available, OpenShift Data Foundation deployed on the OpenShift Container Platform deployment must be able to survive. This solution is available only for metropolitan spanned data centers with specific latency requirements between the servers of the infrastructure. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes follow the latency requirements specified for etcd, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites(Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. The following diagram shows the simplest deployment for a stretched cluster: OpenShift nodes and OpenShift Data Foundation daemons In the diagram the OpenShift Data Foundation monitor pod deployed in the Arbiter zone has a built-in tolerance for the master nodes. The diagram shows the master nodes in each Data Zone which are required for a highly available OpenShift Container Platform control plane. Also, it is important that the OpenShift Container Platform nodes in one of the zones have network connectivity with the OpenShift Container Platform nodes in the other two zones. Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. 5.1. Requirements for enabling stretch cluster Ensure you have addressed OpenShift Container Platform requirements for deployments spanning multiple sites. 
For more information, see knowledgebase article on cluster deployments spanning multiple sites . Ensure that you have at least three OpenShift Container Platform master nodes in three different zones. One master node in each of the three zones. Ensure that you have at least four OpenShift Container Platform worker nodes evenly distributed across the two Data Zones. For stretch clusters on bare metal, use the SSD drive as the root drive for OpenShift Container Platform master nodes. Ensure that each node is pre-labeled with its zone label. For more information, see the Applying topology zone labels to OpenShift Container Platform nodes section. The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms between zones. Contact Red Hat Customer Support if you are planning to deploy with higher latencies. Note Flexible scaling and Arbiter cannot both be enabled at the same time because they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. In an Arbiter cluster, however, you need to add at least one node in each of the two data zones. 5.2. Applying topology zone labels to OpenShift Container Platform nodes During a site outage, the zone that has the arbiter function makes use of the arbiter label. These labels are arbitrary and must be unique for the three locations. For example, you can label the nodes as follows: To apply the labels to the node: <NODENAME> Is the name of the node <LABEL> Is the topology zone label To validate the labels using the example labels for the three zones: <LABEL> Is the topology zone label Alternatively, you can run a single command to see all the nodes with their zones. The stretch cluster topology zone labels are now applied to the appropriate OpenShift Container Platform nodes to define the three locations. Next step Install the Local Storage Operator from the OpenShift Container Platform web console . 5.3. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.4. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least four worker nodes evenly distributed across two data centers in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see Planning your deployment .
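To confirm both the node count per data center and the zone labels applied earlier, you can list the nodes together with the standard topology zone label:

# List all nodes with their topology zone label
oc get nodes -L topology.kubernetes.io/zone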
Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in command-line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to search for the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you selected Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. steps Create an OpenShift Data Foundation cluster . 5.5. Creating OpenShift Data Foundation cluster Prerequisites Ensure that you have met all the requirements in Requirements for enabling stretch cluster section. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the Create a new StorageClass using the local storage devices option. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on selected nodes. Important If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. 
For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select SSD or NVMe to build a supported configuration. You can select HDDs for unsupported test installations. Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Select Enable arbiter checkbox if you want to use the stretch clusters. This option is available only when all the prerequisites for arbiter are fulfilled and the selected nodes are populated. For more information, see Arbiter stretch cluster requirements in Requirements for enabling stretch cluster . Select the arbiter zone from the dropdown list. Choose a performance profile for Configure performance . You can also configure the performance profile after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. 
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Network is set to Default (OVN) if you are using a single network. You can switch to Custom (Multus) if you are using multiple network interfaces and then choose any one of the following: Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface, and leave the Cluster Network Interface blank. Click . In the Data Protection page, click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. For arbiter mode of deployment: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for the arbiter key in the spec section and ensure enable is set to true . To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . 5.6. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 5.6.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
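If you prefer the command line, an equivalent check is to list the pods in the openshift-storage namespace and confirm that they are in Running or Completed state:

# CLI alternative to the console view of the OpenShift Data Foundation pods
oc get pods -n openshift-storage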
For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 5.1, "Pods corresponding to OpenShift Data Foundation cluster" . Click the Running and Completed tabs to verify that the following pods are in Running and Completed state: Table 5.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (5 pods are distributed across 3 zones, 2 per data-center zones and 1 in arbiter zone) MGR rook-ceph-mgr-* (2 pods on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods are distributed across 2 data-center zones) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (2 pods are distributed across 2 data-center zones) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node and 1 pod in arbiter zone) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 5.6.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 5.6.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 5.6.4. 
Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 5.7. Install Zone Aware Sample Application Deploy a zone aware sample application to validate whether an OpenShift Data Foundation, stretch cluster setup is configured correctly. Important With latency between the data zones, you can expect to see performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). The rate of or amount of performance degradation depends on the latency between the zones and on the application behavior using the storage (such as heavy write traffic). Ensure that you test the critical applications with stretch cluster configuration to ensure sufficient application performance for the required service levels. A ReadWriteMany (RWX) Persistent Volume Claim (PVC) is created using the ocs-storagecluster-cephfs storage class. Multiple pods use the newly created RWX PVC at the same time. The application used is called File Uploader. Demonstration on how an application is spread across topology zones so that it is still available in the event of a site outage: Note This demonstration is possible since this application shares the same RWX volume for storing files. It works for persistent data access as well because Red Hat OpenShift Data Foundation is configured as a stretched cluster with zone awareness and high availability. Create a new project. Deploy the example PHP application called file-uploader. Example Output: View the build log and wait until the application is deployed. Example Output: The command prompt returns out of the tail mode after you see Push successful . Note The new-app command deploys the application directly from the git repository and does not use the OpenShift template, hence the OpenShift route resource is not created by default. You need to create the route manually. 5.7.1. Scaling the application after installation Procedure Scale the application to four replicas and expose its services to make the application zone aware and available. You should have four file-uploader pods in a few minutes. Repeat the above command until there are 4 file-uploader pods in the Running status. Create a PVC and attach it into an application. This command: Creates a PVC. Updates the application deployment to include a volume definition. Updates the application deployment to attach a volume mount into the specified mount-path. Creates a new deployment with the four application pods. Check the result of adding the volume. Example Output: Notice the ACCESS MODE is set to RWX. All the four file-uploader pods are using the same RWX volume. Without this access mode, OpenShift does not attempt to attach multiple pods to the same Persistent Volume (PV) reliably. If you attempt to scale up the deployments that are using ReadWriteOnce (RWO) PV, the pods may get colocated on the same node. 5.7.2. Modify Deployment to be Zone Aware Currently, the file-uploader Deployment is not zone aware and can schedule all the pods in the same zone. In this case, if there is a site outage then the application is unavailable. For more information, see Controlling pod placement by using pod topology spread constraints . 
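To illustrate what such a placement rule looks like, the following topologySpreadConstraints stanza spreads pods evenly across topology zones. It is a sketch only; the app: file-uploader label is an assumption about how the sample application is labeled, and the exact lines you add in the next step are taken from the deployment output shown there.

# Illustrative only: spread the file-uploader pods evenly across zones
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: file-uploader   # assumed pod label; match the labels on your deployment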
Add the pod placement rule in the application deployment configuration to make the application zone aware. Run the following command, and review the output: Example Output: Edit the deployment to use the topology zone labels. Add the following new lines between the Start and End (shown in the output in the previous step): Example output: Scale down the deployment to zero pods and then back to four pods. This is needed because the deployment changed in terms of pod placement. Scaling down to zero pods Example output: Scaling up to four pods Example output: Verify that the four pods are spread across the four nodes in datacenter1 and datacenter2 zones. Example output: Search for the zone labels used. Example output: Use your browser to upload new files with the file-uploader web application. Find the route that is created. Example Output: Point your browser to the web application using the route from the previous step. The web application lists all the uploaded files and offers the ability to upload new ones as well as to download the existing data. Right now, there is nothing. Select an arbitrary file from your local machine and upload it to the application. Click Choose file to select an arbitrary file. Click Upload . Figure 5.1. A simple PHP-based file upload tool Click List uploaded files to see the list of all currently uploaded files. Note The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware. 5.8. Recovering OpenShift Data Foundation stretch cluster Given that the stretch cluster disaster recovery solution provides resiliency in the face of a complete or partial site outage, it is important to understand the different methods of recovery for applications and their storage. How the application is architected determines how soon it becomes available again on the active zone, and there are different methods of recovery for applications and their storage depending on the site outage. The different methods of recovery are as follows: Recovering zone-aware HA applications with RWX storage . Recovering HA applications with RWX storage . Recovering applications with RWO storage . Recovering StatefulSet pods . 5.8.1. Understanding zone failure For the purpose of this section, zone failure is considered as a failure where all OpenShift Container Platform master and worker nodes in a zone are no longer communicating with the resources in the second data zone (for example, powered down nodes). If communication between the data zones is still partially working (intermittently up or down), the cluster, storage, and network admins should disconnect the communication path between the data zones for recovery to succeed. Important When you install the sample application, power off the OpenShift Container Platform nodes (at least the nodes with OpenShift Data Foundation devices) to test the failure of a data zone and to validate that your file-uploader application is available and that you can upload new files. 5.8.2. Recovering zone-aware HA applications with RWX storage Applications that are deployed with topologyKey: topology.kubernetes.io/zone have one or more replicas scheduled in each data zone and use shared storage, that is, a ReadWriteMany (RWX) CephFS volume. The replicas in the failed zone terminate themselves after a few minutes, and new pods are rolled out but remain stuck in Pending state until the zones are recovered.
An example of this type of application is detailed in the Install Zone Aware Sample Application section. Important During zone recovery if application pods go into CrashLoopBackOff (CLBO) state with permission denied error while mounting the CephFS volume, then restart the nodes where the pods are scheduled. Wait for some time and then check if the pods are running again. 5.8.3. Recovering HA applications with RWX storage Applications that are using topologyKey: kubernetes.io/hostname or no topology configuration have no protection against all of the application replicas being in the same zone. Note This can happen even with podAntiAffinity and topologyKey: kubernetes.io/hostname in the Pod spec because this anti-affinity rule is host-based and not zone-based. If this happens and all replicas are located in the zone that fails, the application using ReadWriteMany (RWX) storage takes 6-8 minutes to recover on the active zone. This pause is for the OpenShift Container Platform nodes in the failed zone to become NotReady (60 seconds) and then for the default pod eviction timeout to expire (300 seconds). 5.8.4. Recovering applications with RWO storage Applications that use ReadWriteOnce (RWO) storage have a known behavior described in this Kubernetes issue . Because of this issue, if there is a data zone failure, any application pods in that zone mounting RWO volumes (for example, cephrbd based volumes) are stuck with Terminating status after 6-8 minutes and are not re-created on the active zone without manual intervention. Check the OpenShift Container Platform nodes with a status of NotReady . There may be an issue that prevents the nodes from communicating with the OpenShift control plane. However, the nodes may still be performing I/O operations against Persistent Volumes (PVs). If two pods are concurrently writing to the same RWO volume, there is a risk of data corruption. Ensure that processes on the NotReady node are either terminated or blocked until they are terminated. Example solutions: Use an out of band management system to power off a node, with confirmation, to ensure process termination. Withdraw a network route that is used by nodes at a failed site to communicate with storage. Note Before restoring service to the failed zone or nodes, confirm that all the pods with PVs have terminated successfully. To get the Terminating pods to recreate on the active zone, you can either force delete the pod or delete the finalizer on the associated PV. Once one of these two actions are completed, the application pod should recreate on the active zone and successfully mount its RWO storage. Force deleting the pod Force deletions do not wait for confirmation from the kubelet that the pod has been terminated. <PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Warning OpenShift Data Foundation does not support taints relating to non-graceful node shutdown for automated pod eviction and volume detachment operations. For information, see Non-graceful node shutdown handling . It is mandatory to ensure that the node is shutdown, or network route to the node is withdrawn, prior to force deleting pods. 5.8.5. Recovering StatefulSet pods Pods that are part of a StatefulSet have a similar issue as pods mounting ReadWriteOnce (RWO) volumes. More information is referenced in the Kubernetes resource StatefulSet considerations . 
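For reference, a force deletion takes the following general form, where <PODNAME> and <NAMESPACE> are the placeholders described above. Only run this after confirming that the node hosting the pod is powered off or its network route to storage has been withdrawn.

# Force delete a pod that is stuck in Terminating state
oc delete pod <PODNAME> -n <NAMESPACE> --grace-period=0 --force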
To get the pods part of a StatefulSet to re-create on the active zone after 6-8 minutes you need to force delete the pod with the same requirements (that is, OpenShift Container Platform node powered off or communication disconnected) as pods with RWO volumes. Chapter 6. Disabling disaster recovery for a disaster recovery enabled application This section guides you to disable disaster recovery (DR) for an application deployed using Red Hat Advanced Cluster Management (RHACM). Disabling DR managed applications . Disabling DR discovered applications . 6.1. Disabling DR managed applications On the Hub cluster, navigate to All Clusters -> Applications . In the Overview tab, at the end of the protected application row from the action menu, select Manage disaster recovery . Click Remove disaster recovery . Click Confirm remove . Warning Your application will lose disaster recovery protection, preventing volume synchronization (replication) between clusters. Note The application continues to be visible in the Applications Overview menu but the Data policy is removed. 6.2. Disabling DR discovered applications In the RHACM console, navigate to All Clusters -> Data Services -> Protected applications tab. At the end of the application row, click on the Actions menu and choose Remove disaster recovery . Click Remove in the prompt. Warning Your application will lose disaster recovery protection, preventing volume synchronization (replication) between clusters. Note The application is no longer in the Protected applications tab once the DR is removed. Chapter 7. Monitoring disaster recovery health 7.1. Enable monitoring for disaster recovery Use this procedure to enable basic monitoring for your disaster recovery setup. Procedure On the Hub cluster, open a terminal window Add the following label to openshift-operator namespace. Note You must always add this label for Regional-DR solution. 7.2. Enabling disaster recovery dashboard on Hub cluster This section guides you to enable the disaster recovery dashboard for advanced monitoring on the Hub cluster. For Regional-DR, the dashboard shows monitoring status cards for operator health, cluster health, metrics, alerts and application count. For Metro-DR, you can configure the dashboard to only monitor the ramen setup health and application count. Prerequisites Ensure that you have already installed the following OpenShift Container Platform version 4.17 and have administrator privileges. ODF Multicluster Orchestrator with the console plugin enabled. Red Hat Advanced Cluster Management for Kubernetes 2.11 (RHACM) from Operator Hub. For instructions on how to install, see Installing RHACM . Ensure you have enabled observability on RHACM. See Enabling observability guidelines . Procedure On the Hub cluster, open a terminal window and perform the steps. Create the configmap file named observability-metrics-custom-allowlist.yaml . You can use the following YAML to list the disaster recovery metrics on Hub cluster. For details, see Adding custom metrics . To know more about ramen metrics, see Disaster recovery metrics . In the open-cluster-management-observability namespace, run the following command: After observability-metrics-custom-allowlist yaml is created, RHACM starts collecting the listed OpenShift Data Foundation metrics from all the managed clusters. To exclude a specific managed cluster from collecting the observability data, add the following cluster label to the clusters: observability: disabled . 7.3. 
Viewing health status of disaster recovery replication relationships Prerequisites Ensure that you have enabled the disaster recovery dashboard for monitoring. For instructions, see chapter Enabling disaster recovery dashboard on Hub cluster . Procedure On the Hub cluster, ensure the All Clusters option is selected. Refresh the console to make the DR monitoring dashboard tab accessible. Navigate to Data Services and click Data policies . On the Overview tab, you can view the health status of the operators, clusters and applications. A green tick indicates that the operators are running and available. Click the Disaster recovery tab to view a list of DR policy details and connected applications. 7.4. Disaster recovery metrics These are the ramen metrics that are scraped by Prometheus. ramen_last_sync_timestamp_seconds ramen_policy_schedule_interval_seconds ramen_last_sync_duration_seconds ramen_last_sync_data_bytes ramen_workload_protection_status Query these metrics on the Hub cluster where Red Hat Advanced Cluster Management for Kubernetes (RHACM operator) is installed. 7.4.1. Last synchronization timestamp in seconds This gives the time, in seconds, of the most recent successful synchronization of all PVCs per application. Metric name ramen_last_sync_timestamp_seconds Metrics type Gauge Labels ObjType : Type of the object, here it is DRPC ObjName : Name of the object, here it is DRPC-Name ObjNamespace : DRPC namespace Policyname : Name of the DRPolicy SchedulingInterval : Scheduling interval value from DRPolicy Metric value Value is set as Unix seconds which is obtained from lastGroupSyncTime from DRPC status. 7.4.2. Policy schedule interval in seconds This gives the scheduling interval in seconds from DRPolicy. Metric name ramen_policy_schedule_interval_seconds Metrics type Gauge Labels Policyname : Name of the DRPolicy Metric value This is set to a scheduling interval in seconds which is taken from DRPolicy. 7.4.3. Last synchronization duration in seconds This represents the longest time taken to sync from the most recent successful synchronization of all PVCs per application. Metric name ramen_last_sync_duration_seconds Metrics type Gauge Labels obj_type : Type of the object, here it is DRPC obj_name : Name of the object, here it is DRPC-Name obj_namespace : DRPC namespace scheduling_interval : Scheduling interval value from DRPolicy Metric value The value is taken from lastGroupSyncDuration from DRPC status. 7.4.4. Total bytes transferred from most recent synchronization This value represents the total bytes transferred from the most recent successful synchronization of all PVCs per application. Metric name ramen_last_sync_data_bytes Metrics type Gauge Labels obj_type : Type of the object, here it is DRPC obj_name : Name of the object, here it is DRPC-Name obj_namespace : DRPC namespace scheduling_interval : Scheduling interval value from DRPolicy Metric value The value is taken from lastGroupSyncBytes from DRPC status. 7.4.5. Workload protection status This value provides the application protection status per application that is DR protected. Metric name ramen_workload_protection_status Metrics type Gauge Labels ObjType : Type of the object, here it is DRPC ObjName : Name of the object, here it is DRPC-Name ObjNamespace : DRPC namespace Metric value The value is either a "1" or a "0", where "1" indicates application DR protection is healthy and a "0" indicates application protection is degraded and the application is potentially unprotected. 7.5.
Disaster recovery alerts This section provides a list of all supported alerts associated with Red Hat OpenShift Data Foundation within a disaster recovery environment. Recording rules Record: ramen_sync_duration_seconds Expression Purpose The time interval between the volume group's last sync time and the time now in seconds. Record: ramen_rpo_difference Expression Purpose The difference between the expected sync delay and the actual sync delay taken by the volume replication group. Record: count_persistentvolumeclaim_total Expression Purpose Sum of all PVC from the managed cluster. Alerts Alert: VolumeSynchronizationDelay Impact Critical Purpose Actual sync delay taken by the volume replication group is thrice the expected sync delay. YAML Alert: VolumeSynchronizationDelay Impact Warning Purpose Actual sync delay taken by the volume replication group is twice the expected sync delay. YAML Alert: WorkloadUnprotected Impact Warning Purpose Application protection status is degraded for more than 10 minutes YAML Chapter 8. Troubleshooting disaster recovery 8.1. Troubleshooting Metro-DR 8.1.1. A statefulset application stuck after failover Problem While relocating to a preferred cluster, DRPlacementControl is stuck reporting PROGRESSION as "MovingToSecondary". Previously, before Kubernetes v1.23, the Kubernetes control plane never cleaned up the PVCs created for StatefulSets. This activity was left to the cluster administrator or a software operator managing the StatefulSets. Due to this, the PVCs of the StatefulSets were left untouched when their Pods were deleted. This prevents Ramen from relocating an application to its preferred cluster. Resolution If the workload uses StatefulSets, and relocation is stuck with PROGRESSION as "MovingToSecondary", then run: For each bounded PVC for that namespace that belongs to the StatefulSet, run Once all PVCs are deleted, Volume Replication Group (VRG) transitions to secondary, and then gets deleted. Run the following command After a few seconds to a few minutes, the PROGRESSION reports "Completed" and relocation is complete. Result The workload is relocated to the preferred cluster BZ reference: [ 2118270 ] 8.1.2. DR policies protect all applications in the same namespace Problem While only a single application is selected to be used by a DR policy, all applications in the same namespace will be protected. This results in PVCs, that match the DRPlacementControl spec.pvcSelector across multiple workloads or if the selector is missing across all workloads, replication management to potentially manage each PVC multiple times and cause data corruption or invalid operations based on individual DRPlacementControl actions. Resolution Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. BZ reference: [ 2128860 ] 8.1.3. During failback of an application stuck in Relocating state Problem This issue might occur after performing failover and failback of an application (all nodes or clusters are up). When performing failback, application is stuck in the Relocating state with a message of Waiting for PV restore to complete. 
Resolution Use S3 client or equivalent to clean up the duplicate PV objects from the s3 store. Keep only the one that has a timestamp closer to the failover or relocate time. BZ reference: [ 2120201 ] 8.1.4. Relocate or failback might be stuck in Initiating state Problem When a primary cluster is down and comes back online while the secondary goes down, relocate or failback might be stuck in the Initiating state. Resolution To avoid this situation, cut off all access from the old active hub to the managed clusters. Alternatively, you can scale down the ApplicationSet controller on the old active hub cluster either before moving workloads or when they are in the clean-up phase. On the old active hub, scale down the two deployments using the following commands: BZ reference: [ 2243804 ] 8.2. Troubleshooting Regional-DR 8.2.1. rbd-mirror daemon health is in warning state Problem There appears to be numerous cases where WARNING gets reported if mirror service ::get_mirror_service_status calls Ceph monitor to get service status for rbd-mirror . Following a network disconnection, rbd-mirror daemon health is in the warning state while the connectivity between both the managed clusters is fine. Resolution Run the following command in the toolbox and look for leader:false If you see the following in the output: leader: false It indicates that there is a daemon startup issue and the most likely root cause could be due to problems reliably connecting to the secondary cluster. Workaround: Move the rbd-mirror pod to a different node by simply deleting the pod and verify that it has been rescheduled on another node. leader: true or no output Contact Red Hat Support . BZ reference: [ 2118627 ] 8.2.2. volsync-rsync-src pod is in error state as it is unable to resolve the destination hostname Problem VolSync source pod is unable to resolve the hostname of the VolSync destination pod. The log of the VolSync Pod consistently shows an error message over an extended period of time similar to the following log snippet. Example output Resolution Restart submariner-lighthouse-agent on both nodes. 8.2.3. Cleanup and data sync for ApplicationSet workloads remain stuck after older primary managed cluster is recovered post failover Problem ApplicationSet based workload deployments to managed clusters are not garbage collected in cases when the hub cluster fails. It is recovered to a standby hub cluster, while the workload has been failed over to a surviving managed cluster. The cluster that the workload was failed over from, rejoins the new recovered standby hub. ApplicationSets that are DR protected, with a regional DRPolicy, hence starts firing the VolumeSynchronizationDelay alert. Further such DR protected workloads cannot be failed over to the peer cluster or relocated to the peer cluster as data is out of sync between the two clusters. Resolution The workaround requires that openshift-gitops operators can own the workload resources that are orphaned on the managed cluster that rejoined the hub post a failover of the workload was performed from the new recovered hub. To achieve this the following steps can be taken: Determine the Placement that is in use by the ArgoCD ApplicationSet resource on the hub cluster in the openshift-gitops namespace. 
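One way to do this, sketched here with <applicationset_name> as a placeholder, is to print the ApplicationSet definition on the hub cluster and read the generator's label selector described in the next step:

# On the hub cluster: inspect the ApplicationSet to find the referenced Placement
oc get applicationset <applicationset_name> -n openshift-gitops -o yaml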
Inspect the placement label value for the ApplicationSet in this field: spec.generators.clusterDecisionResource.labelSelector.matchLabels This would be the name of the Placement resource <placement-name> Ensure that there exists a PlacemenDecision for the ApplicationSet referenced Placement . This results in a single PlacementDecision that places the workload in the currently desired failover cluster. Create a new PlacementDecision for the ApplicationSet pointing to the cluster where it should be cleaned up. For example: Update the newly created PlacementDecision with a status subresource . Watch and ensure that the Application resource for the ApplicationSet has been placed on the desired cluster In the output, check if the SYNC STATUS shows as Synced and the HEALTH STATUS shows as Healthy . Delete the PlacementDecision that was created in step (3), such that ArgoCD can garbage collect the workload resources on the <managedcluster-name-to-clean-up> ApplicationSets that are DR protected, with a regional DRPolicy, stops firing the VolumeSynchronizationDelay alert. BZ reference: [ 2268594 ] 8.3. Troubleshooting 2-site stretch cluster with Arbiter 8.3.1. Recovering workload pods stuck in ContainerCreating state post zone recovery Problem After performing complete zone failure and recovery, the workload pods are sometimes stuck in ContainerCreating state with the any of the below errors: MountDevice failed to create newCsiDriverClient: driver name openshift-storage.rbd.csi.ceph.com not found in the list of registered CSI drivers MountDevice failed for volume <volume_name> : rpc error: code = Aborted desc = an operation with the given Volume ID <volume_id> already exists MountVolume.SetUp failed for volume <volume_name> : rpc error: code = Internal desc = staging path <path> for volume <volume_id> is not a mountpoint Resolution If the workload pods are stuck with any of the above mentioned errors, perform the following workarounds: For ceph-fs workload stuck in ContainerCreating : Restart the nodes where the stuck pods are scheduled Delete these stuck pods Verify that the new pods are running For ceph-rbd workload stuck in ContainerCreating that do not self recover after sometime Restart csi-rbd plugin pods in the nodes where the stuck pods are scheduled Verify that the new pods are running
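A sketch of how you might restart the csi-rbdplugin pod on an affected node is shown below. The app=csi-rbdplugin label and the openshift-storage namespace are assumptions based on a default OpenShift Data Foundation deployment, and <node_name> is a placeholder; the DaemonSet recreates the deleted pod automatically.

# Delete (and thereby restart) the csi-rbdplugin pod on the affected node
oc delete pod -n openshift-storage -l app=csi-rbdplugin --field-selector spec.nodeName=<node_name>

# Confirm that the replacement pod is Running
oc get pods -n openshift-storage -l app=csi-rbdplugin -o wide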
[ "subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0", "subscription-manager repos --disable=\"*\" --enable=\"rhel9-for-x86_64-baseos-rpms\" --enable=\"rhel9-for-x86_64-appstream-rpms\"", "dnf update -y reboot", "subscription-manager repos --enable=\"ansible-2.9-for-rhel-9-x86_64-rpms\" --enable=\"rhceph-6-tools-for-rhel-9-x86_64-rpms\"", "hostnamectl set-hostname <short_name>", "hostname", "ceph1", "DOMAIN=\"example.domain.com\" cat <<EOF >/etc/hosts 127.0.0.1 USD(hostname).USD{DOMAIN} USD(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 USD(hostname).USD{DOMAIN} USD(hostname) localhost6 localhost6.localdomain6 EOF", "hostname -f", "ceph1.example.domain.com", "sudo dnf install -y cephadm-ansible", "cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF", "cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF", "ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b", "ceph6 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph4 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph3 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph2 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph5 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph1 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph7 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" }", "ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "cat <<EOF > /root/registry.json { \"url\":\"registry.redhat.io\", \"username\":\"User\", \"password\":\"Pass\" } EOF", "cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_type: mds service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: 
rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080 EOF", "ip a | grep 10.0.40", "10.0.40.78", "cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json", "You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/", "ceph -s", "cluster: id: 3a801754-e01f-11ec-b7ab-005056838602 health: HEALTH_OK services: mon: 5 daemons, quorum ceph1,ceph2,ceph4,ceph5,ceph7 (age 4m) mgr: ceph1.khuuot(active, since 5m), standbys: ceph4.zotfsp osd: 12 osds: 12 up (since 3m), 12 in (since 4m) rgw: 2 daemons active (2 hosts, 1 zones) data: pools: 5 pools, 107 pgs objects: 191 objects, 5.3 KiB usage: 105 MiB used, 600 GiB / 600 GiB avail 105 active+clean", "ceph orch host ls", "HOST ADDR LABELS STATUS ceph1 10.0.40.78 _admin osd mon mgr ceph2 10.0.40.35 osd mon ceph3 10.0.40.24 osd mds rgw ceph4 10.0.40.185 osd mon mgr ceph5 10.0.40.88 osd mon ceph6 10.0.40.66 osd mds rgw ceph7 10.0.40.221 mon", "ceph orch ps | grep mon | awk '{print USD1 \" \" USD2}'", "mon.ceph1 ceph1 mon.ceph2 ceph2 mon.ceph4 ceph4 mon.ceph5 ceph5 mon.ceph7 ceph7", "ceph orch ps | grep mgr | awk '{print USD1 \" \" USD2}'", "mgr.ceph2.ycgwyz ceph2 mgr.ceph5.kremtt ceph5", "ceph osd tree", "ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.87900 root default -16 0.43950 datacenter DC1 -11 0.14650 host ceph1 2 ssd 0.14650 osd.2 up 1.00000 1.00000 -3 0.14650 host ceph2 3 ssd 0.14650 osd.3 up 1.00000 1.00000 -13 0.14650 host ceph3 4 ssd 0.14650 osd.4 up 1.00000 1.00000 -17 0.43950 datacenter DC2 -5 0.14650 host ceph4 0 ssd 0.14650 osd.0 up 1.00000 1.00000 -9 0.14650 host ceph5 1 ssd 0.14650 osd.1 up 1.00000 1.00000 -7 0.14650 host ceph6 5 ssd 0.14650 osd.5 up 1.00000 1.00000", "ceph osd pool create 32 32 ceph osd pool application enable rbdpool rbd", "ceph osd lspools | grep rbdpool", "3 rbdpool", "ceph orch ps | grep mds", "mds.cephfs.ceph3.cjpbqo ceph3 running (17m) 117s ago 17m 16.1M - 16.2.9 mds.cephfs.ceph6.lqmgqt ceph6 running (17m) 117s ago 17m 16.1M - 16.2.9", "ceph fs volume create cephfs", "ceph fs status", "cephfs - 0 clients ====== RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs.ceph6.ggjywj Reqs: 0 /s 10 13 12 0 POOL TYPE USED AVAIL cephfs.cephfs.meta metadata 96.0k 284G cephfs.cephfs.data data 0 284G STANDBY MDS cephfs.ceph3.ogcqkl", "ceph orch ps | grep rgw", "rgw.objectgw.ceph3.kkmxgb ceph3 *:8080 running (7m) 3m ago 7m 52.7M - 16.2.9 rgw.objectgw.ceph6.xmnpah ceph6 *:8080 running (7m) 3m ago 7m 53.3M - 16.2.9", "ceph mon dump | grep election_strategy", "dumped monmap epoch 9 election_strategy: 1", "ceph mon set election_strategy connectivity", "ceph mon dump | grep election_strategy", "dumped monmap epoch 10 election_strategy: 3", "ceph mon set_location ceph1 datacenter=DC1 ceph mon set_location ceph2 datacenter=DC1 ceph mon set_location ceph4 datacenter=DC2 ceph mon set_location ceph5 datacenter=DC2 ceph mon set_location ceph7 datacenter=DC3", "ceph mon dump", "epoch 17 fsid dd77f050-9afe-11ec-a56c-029f8148ea14 last_changed 2022-03-04T07:17:26.913330+0000 created 2022-03-03T14:33:22.957190+0000 min_mon_release 16 (pacific) election_strategy: 3 0: [v2:10.0.143.78:3300/0,v1:10.0.143.78:6789/0] mon.ceph1; crush_location 
{datacenter=DC1} 1: [v2:10.0.155.185:3300/0,v1:10.0.155.185:6789/0] mon.ceph4; crush_location {datacenter=DC2} 2: [v2:10.0.139.88:3300/0,v1:10.0.139.88:6789/0] mon.ceph5; crush_location {datacenter=DC2} 3: [v2:10.0.150.221:3300/0,v1:10.0.150.221:6789/0] mon.ceph7; crush_location {datacenter=DC3} 4: [v2:10.0.155.35:3300/0,v1:10.0.155.35:6789/0] mon.ceph2; crush_location {datacenter=DC1}", "dnf -y install ceph-base", "ceph osd getcrushmap > /etc/ceph/crushmap.bin", "crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt", "vim /etc/ceph/crushmap.txt", "rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit } end crush map", "crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin", "ceph osd setcrushmap -i /etc/ceph/crushmap2.bin", "17", "ceph osd crush rule ls", "replicated_rule stretch_rule", "ceph mon enable_stretch_mode ceph7 stretch_rule datacenter", "for pool in USD(rados lspools);do echo -n \"Pool: USD{pool}; \";ceph osd pool get USD{pool} crush_rule;done", "Pool: device_health_metrics; crush_rule: stretch_rule Pool: cephfs.cephfs.meta; crush_rule: stretch_rule Pool: cephfs.cephfs.data; crush_rule: stretch_rule Pool: .rgw.root; crush_rule: stretch_rule Pool: default.rgw.log; crush_rule: stretch_rule Pool: default.rgw.control; crush_rule: stretch_rule Pool: default.rgw.meta; crush_rule: stretch_rule Pool: rbdpool; crush_rule: stretch_rule", "ceph orch ps | grep rgw.objectgw", "rgw.objectgw.ceph3.mecpzm ceph3 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp rgw.objectgw.ceph6.mecpzm ceph6 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp", "host ceph3.example.com host ceph6.example.com", "ceph3.example.com has address 10.0.40.24 ceph6.example.com has address 10.0.40.66", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --<rgw-endpoint> XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster1 > ocp-cluster1.json", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster2 > ocp-cluster2.json", "oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{\"\\n\"}'", "oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{\"\\n\"}'", "oc label nodes --all metro-dr.openshift-storage.topology.io/datacenter=DC1", "oc patch storageclusters.ocs.openshift.io -n openshift-storage ocs-external-storagecluster -p '{\"spec\":{\"csi\":{\"readAffinity\":{\"enabled\":true,\"crushLocationLabels\":[\"metro-dr.openshift-storage.topology.io/datacenter\"]}}}}' --type=merge", "oc delete po -n openshift-storage -l 'app in (csi-cephfsplugin,csi-rbdplugin)'", "oc label nodes --all metro-dr.openshift-storage.topology.io/datacenter=DC2", "oc patch storageclusters.ocs.openshift.io -n openshift-storage ocs-external-storagecluster -p '{\"spec\":{\"csi\":{\"readAffinity\":{\"enabled\":true,\"crushLocationLabels\":[\"metro-dr.openshift-storage.topology.io/datacenter\"]}}}}' --type=merge", "oc delete po -n openshift-storage -l 'app in (csi-cephfsplugin,csi-rbdplugin)'", "oc get pods -n openshift-operators", "NAME READY STATUS RESTARTS AGE 
odf-multicluster-console-6845b795b9-blxrn 1/1 Running 0 4d20h odfmo-controller-manager-f9d9dfb59-jbrsd 1/1 Running 0 4d20h ramen-hub-operator-6fb887f885-fss4w 2/2 Running 0 4d20h", "oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > primary.crt", "oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > secondary.crt", "apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- <copy contents of cert1 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 primary.crt here> -----END CERTIFICATE---- -----BEGIN CERTIFICATE----- <copy contents of cert1 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 from secondary.crt here> -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config", "oc create -f cm-clusters-crt.yaml", "configmap/user-ca-bundle created", "oc patch proxy cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"user-ca-bundle\"}}}'", "proxy.config.openshift.io/cluster patched", "oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'", "Succeeded", "oc get drclusters", "NAME AGE ocp4perf1 4m42s ocp4perf2 4m42s", "oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{\"\\n\"}'", "Succeeded", "oc get csv,pod -n openshift-dr-system", "NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.15.0 Openshift DR Cluster Operator 4.15.0 Succeeded clusterserviceversion.operators.coreos.com/volsync-product.v0.8.0 VolSync 0.8.0 Succeeded NAME READY STATUS RESTARTS AGE pod/ramen-dr-cluster-operator-6467cf5d4c-cc8kz 2/2 Running 0 3d12h", "get secrets -n openshift-dr-system | grep Opaque", "get cm -n openshift-operators ramen-hub-operator-config -oyaml", "oc get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type==\"ExternalIP\")].address}{\"\\n\"}{end}'", "10.70.56.118 10.70.56.193 10.70.56.154 10.70.56.242 10.70.56.136 10.70.56.99", "oc get drcluster", "NAME AGE ocp4perf1 5m35s ocp4perf2 5m35s", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] 
spec: s3ProfileName: s3profile-<drcluster_name>-ocs-external-storagecluster ## Add this section cidrs: - <IP_Address1>/32 - <IP_Address2>/32 - <IP_Address3>/32 - <IP_Address4>/32 - <IP_Address5>/32 - <IP_Address6>/32 [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: ## Add this section annotations: drcluster.ramendr.openshift.io/storage-clusterid: openshift-storage drcluster.ramendr.openshift.io/storage-driver: openshift-storage.rbd.csi.ceph.com drcluster.ramendr.openshift.io/storage-secret-name: rook-csi-rbd-provisioner drcluster.ramendr.openshift.io/storage-secret-namespace: openshift-storage [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get pods,pvc -n busybox-sample", "NAME READY STATUS RESTARTS AGE pod/busybox-67bf494b9-zl5tr 1/1 Running 0 77s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-c732e5fe-daaf-4c4d-99dd-462e04c18412 5Gi RWO ocs-storagecluster-ceph-rbd 77s", "tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Fenced", "ceph osd blocklist ls", "cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Fenced", "ceph osd blocklist ls", "cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "get pods -A | egrep -v 'Running|Completed'", "NAMESPACE NAME READY STATUS RESTARTS AGE", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Unfenced", "ceph osd blocklist ls", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] 
[...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "get pods -A | egrep -v 'Running|Completed'", "NAMESPACE NAME READY STATUS RESTARTS AGE", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Unfenced", "ceph osd blocklist ls", "oc get configmap user-ca-bundle -n openshift-config -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" |base64 -w 0", "oc edit configmap ramen-hub-operator-config -n openshift-operators", "[...] ramenOpsNamespace: openshift-dr-ops s3StoreProfiles: - s3Bucket: odrbucket-36bceb61c09c s3CompatibleEndpoint: https://s3-openshift-storage.apps.hyper3.vmw.ibmfusion.eu s3ProfileName: s3profile-hyper3-ocs-storagecluster s3Region: noobaa s3SecretRef: name: 60f2ea6069e168346d5ad0e0b5faa59bb74946f caCertificates: {input base64 encoded value here} - s3Bucket: odrbucket-36bceb61c09c s3CompatibleEndpoint: https://s3-openshift-storage.apps.hyper4.vmw.ibmfusion.eu s3ProfileName: s3profile-hyper4-ocs-storagecluster s3Region: noobaa s3SecretRef: name: cc237eba032ad5c422fb939684eb633822d7900 caCertificates: {input base64 encoded value here}", "oc get secrets -n openshift-adp NAME TYPE DATA AGE v60f2ea6069e168346d5ad0e0b5faa59bb74946f Opaque 1 3d20h vcc237eba032ad5c422fb939684eb633822d7900 Opaque 1 3d20h [...]", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: labels: app.kubernetes.io/component: velero name: velero namespace: openshift-adp spec: backupImages: false configuration: nodeAgent: enable: false uploaderType: restic velero: defaultPlugins: - openshift - aws noDefaultBackupLocation: true", "oc create -f dpa.yaml -n openshift-adp", "dataprotectionapplication.oadp.openshift.io/velero created", "oc get pods,dpa -n openshift-adp NAME READY STATUS RESTARTS AGE pod/openshift-adp-controller-manager-7b64b74fcd-msjbs 1/1 Running 0 5m30s pod/velero-694b5b8f5c-b4kwg 1/1 Running 0 3m31s NAME AGE dataprotectionapplication.oadp.openshift.io/velero 3m31s", "git clone https://github.com/red-hat-storage/ocm-ramen-samples.git", "cd ~/ocm-ramen-samples git branch * main", "ls workloads/deployment | egrep -v 'cephfs|k8s|base' odr-metro-rbd odr-regional-rbd", "oc new-project busybox-discovered", "oc apply -k workloads/deployment/odr-metro-rbd -n busybox-discovered persistentvolumeclaim/busybox-pvc created deployment.apps/busybox created", "oc get pods,pvc,deployment -n busybox-discovered", "NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 18s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 18s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/busybox 1/1 1 1 18", "oc get drpc {drpc_name} -o wide -n openshift-dr-ops", "oc get vrg {vrg_name} -n openshift-dr-ops", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] 
[...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Fenced", "ceph osd blocklist ls", "cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000", "oc get pods,pvc -n busybox-discovered NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 2m46s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 2m57s", "oc get drpc {drpc_name} -n openshift-dr-ops -o jsonpath='{.status.progression}{\"\\n\"}' WaitOnUserToCleanUp", "cd ~/ocm-ramen-samples/ git branch * main oc delete -k workloads/deployment/odr-metro-rbd -n busybox-discovered persistentvolumeclaim \"busybox-pvc\" deleted deployment.apps \"busybox\" deleted", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "get pods -A | egrep -v 'Running|Completed'", "NAMESPACE NAME READY STATUS RESTARTS AGE", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Unfenced", "ceph osd blocklist ls", "oc get drpc {drpc_name} -n openshift-dr-ops -o jsonpath='{.status.progression}{\"\\n\"}' WaitOnUserToCleanUp", "cd ~/ocm-ramen-samples/ git branch * main oc delete -k workloads/deployment/odr-metro-rbd -n busybox-discovered persistentvolumeclaim \"busybox-pvc\" deleted deployment.apps \"busybox\" deleted", "oc get pods,pvc -n busybox-discovered NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 2m46s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 2m57s", "oc get drpc -n openshift-dr-ops", "oc delete {drpc_name} -n openshift-dr-ops", "oc get placements -n openshift-dr-ops", "oc delete placements {placement_name} -n openshift-dr-ops", "edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add or modify this line clusterFence: Fenced cidrs: [...] [...]", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Modify this line clusterFence: Unfenced cidrs: [...] [...]", "oc delete drcluster <drcluster_name> --wait=false", "oc edit placement <placement_name> -n <namespace>", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: annotations: cluster.open-cluster-management.io/experimental-scheduling-disable: \"true\" [...] 
spec: clusterSets: - submariner predicates: - requiredClusterSelector: claimSelector: {} labelSelector: matchExpressions: - key: name operator: In values: - cluster1 <-- Modify to be surviving cluster name [...]", "oc get vrg -n <application_namespace> -o jsonpath='{.items[0].spec.s3Profiles}' | jq", "oc delete drpc <drpc_name> -n <namespace>", "oc get drpc -A", "oc edit placement <placement_name> -n <namespace>", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: annotations: ## Remove this annotation cluster.open-cluster-management.io/experimental-scheduling-disable: \"true\" [...]", "#!/bin/bash secrets=USD(oc get secrets -n openshift-operators | grep Opaque | cut -d\" \" -f1) echo USDsecrets for secret in USDsecrets do oc patch -n openshift-operators secret/USDsecret -p '{\"metadata\":{\"finalizers\":null}}' --type=merge done mirrorpeers=USD(oc get mirrorpeer -o name) echo USDmirrorpeers for mp in USDmirrorpeers do oc patch USDmp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDmp done drpolicies=USD(oc get drpolicy -o name) echo USDdrpolicies for drp in USDdrpolicies do oc patch USDdrp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDdrp done drclusters=USD(oc get drcluster -o name) echo USDdrclusters for drp in USDdrclusters do oc patch USDdrp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDdrp done delete project openshift-operators managedclusters=USD(oc get managedclusters -o name | cut -d\"/\" -f2) echo USDmanagedclusters for mc in USDmanagedclusters do secrets=USD(oc get secrets -n USDmc | grep multicluster.odf.openshift.io/secret-type | cut -d\" \" -f1) echo USDsecrets for secret in USDsecrets do set -x oc patch -n USDmc secret/USDsecret -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete -n USDmc secret/USDsecret done done delete clusterrolebinding spoke-clusterrole-bindings", "oc label namespace openshift-operators openshift.io/cluster-monitoring='true'", "oc get obc -n openshift-storage", "oc label namespace openshift-operators openshift.io/cluster-monitoring='true'", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: appliedManifestWorkEvictionGracePeriod: \"24h\"", "oc -n <restore-namespace> wait restore <restore-name> --for=jsonpath='{.status.phase}'=Finished --timeout=120s", "oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'", "Succeeded", "oc get drpc -o wide -A", "oc get networks.config.openshift.io cluster -o json | jq .spec", "{ \"clusterNetwork\": [ { \"cidr\": \"10.5.0.0/16\", \"hostPrefix\": 23 } ], \"externalIP\": { \"policy\": {} }, \"networkType\": \"OVNKubernetes\", \"serviceNetwork\": [ \"10.15.0.0/16\" ] }", "{ \"clusterNetwork\": [ { \"cidr\": \"10.6.0.0/16\", \"hostPrefix\": 23 } ], \"externalIP\": { \"policy\": {} }, \"networkType\": \"OVNKubernetes\", \"serviceNetwork\": [ \"10.16.0.0/16\" ] }", "odf get dr-prereq <PeerManagedClusterName(ClusterID)>", "Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is not enabled.", "Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled.", "Info: Submariner is installed. Info: Globalnet is not required. Info: Globalnet is not enabled.", "Info: Submariner is installed. Info: Globalnet is not required. 
Info: Globalnet is enabled.", "oc get storagecluster -n openshift-storage ocs-storagecluster -o jsonpath='{.status.phase}{\"\\n\"}'", "oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{\"\\n\"}'", "oc edit storagecluster -o yaml -n openshift-storage", "spec: network: multiClusterService: clusterID: <clustername> enabled: true", "oc get serviceexport -n openshift-storage", "NAME AGE rook-ceph-mon-d 4d14h rook-ceph-mon-e 4d14h rook-ceph-mon-f 4d14h rook-ceph-osd-0 4d14h rook-ceph-osd-1 4d14h rook-ceph-osd-2 4d14h", "oc get pods -n openshift-operators", "NAME READY STATUS RESTARTS AGE odf-multicluster-console-6845b795b9-blxrn 1/1 Running 0 4d20h odfmo-controller-manager-f9d9dfb59-jbrsd 1/1 Running 0 4d20h ramen-hub-operator-6fb887f885-fss4w 2/2 Running 0 4d20h", "oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > primary.crt", "oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > secondary.crt", "apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- <copy contents of cert1 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 primary.crt here> -----END CERTIFICATE---- -----BEGIN CERTIFICATE----- <copy contents of cert1 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 from secondary.crt here> -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config", "oc create -f cm-clusters-crt.yaml", "configmap/user-ca-bundle created", "oc patch proxy cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"user-ca-bundle\"}}}'", "proxy.config.openshift.io/cluster patched", "oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'", "Succeeded", "oc get drclusters", "NAME AGE ocp4bos1 4m42s ocp4bos2 4m42s", "oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{\"\\n\"}'", "Succeeded", "oc get csv,pod -n openshift-dr-system", "NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.15.0 Openshift DR Cluster Operator 4.15.0 Succeeded clusterserviceversion.operators.coreos.com/volsync-product.v0.8.0 VolSync 0.8.0 Succeeded NAME READY STATUS RESTARTS AGE pod/ramen-dr-cluster-operator-6467cf5d4c-cc8kz 2/2 Running 0 3d12h", "oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{\"\\n\"}'", "{\"daemon_health\":\"OK\",\"health\":\"OK\",\"image_health\":\"OK\",\"states\":{}}", "oc get pods,pvc -n busybox-sample", "NAME READY STATUS RESTARTS AGE pod/busybox-67bf494b9-zl5tr 1/1 Running 0 77s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-c732e5fe-daaf-4c4d-99dd-462e04c18412 5Gi RWO ocs-storagecluster-ceph-rbd 77s", "oc get volumereplications.replication.storage.openshift.io -A", "NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE busybox-pvc 2d16h rbd-volumereplicationclass-1625360775 busybox-pvc primary Primary", "oc get volumereplicationgroups.ramendr.openshift.io -A", "NAME DESIREDSTATE CURRENTSTATE busybox-drpc primary Primary", "oc get replicationsource -n busybox-sample", "NAME SOURCE 
LAST SYNC DURATION NEXT SYNC busybox-pvc busybox-pvc 2022-12-20T08:46:07Z 1m7.794661104s 2022-12-20T08:50:00Z", "oc get replicationdestination -n busybox-sample", "NAME LAST SYNC DURATION NEXT SYNC busybox-pvc 2022-12-20T08:46:32Z 4m39.52261108s", "tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists", "oc get volumereplications.replication.storage.openshift.io -A", "NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE busybox-pvc 2d16h rbd-volumereplicationclass-1625360775 busybox-pvc primary Primary", "oc get volumereplicationgroups.ramendr.openshift.io -A", "NAME DESIREDSTATE CURRENTSTATE busybox-drpc primary Primary", "oc get replicationsource -n busybox-sample", "NAME SOURCE LAST SYNC DURATION NEXT SYNC busybox-pvc busybox-pvc 2022-12-20T08:46:07Z 1m7.794661104s 2022-12-20T08:50:00Z", "oc get replicationdestination -n busybox-sample", "NAME LAST SYNC DURATION NEXT SYNC busybox-pvc 2022-12-20T08:46:32Z 4m39.52261108s", "oc get drpc -o yaml -A | grep lastGroupSyncTime", "[...] lastGroupSyncTime: \"2023-07-10T12:40:10Z\"", "oc get drpc -o yaml -A | grep lastGroupSyncTime", "[...] lastGroupSyncTime: \"2023-07-10T12:40:10Z\"", "oc get drpc -o yaml -A | grep lastGroupSyncTime", "[...] lastGroupSyncTime: \"2023-07-10T12:40:10Z\"", "oc get drpc -o yaml -A | grep lastGroupSyncTime", "[...] lastGroupSyncTime: \"2023-07-10T12:40:10Z\"", "oc get configmap user-ca-bundle -n openshift-config -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" |base64 -w 0", "oc edit configmap ramen-hub-operator-config -n openshift-operators", "[...] ramenOpsNamespace: openshift-dr-ops s3StoreProfiles: - s3Bucket: odrbucket-36bceb61c09c s3CompatibleEndpoint: https://s3-openshift-storage.apps.hyper3.vmw.ibmfusion.eu s3ProfileName: s3profile-hyper3-ocs-storagecluster s3Region: noobaa s3SecretRef: name: 60f2ea6069e168346d5ad0e0b5faa59bb74946f caCertificates: {input base64 encoded value here} - s3Bucket: odrbucket-36bceb61c09c s3CompatibleEndpoint: https://s3-openshift-storage.apps.hyper4.vmw.ibmfusion.eu s3ProfileName: s3profile-hyper4-ocs-storagecluster s3Region: noobaa s3SecretRef: name: cc237eba032ad5c422fb939684eb633822d7900 caCertificates: {input base64 encoded value here}", "oc get secrets -n openshift-adp NAME TYPE DATA AGE v60f2ea6069e168346d5ad0e0b5faa59bb74946f Opaque 1 3d20h vcc237eba032ad5c422fb939684eb633822d7900 Opaque 1 3d20h [...]", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: labels: app.kubernetes.io/component: velero name: velero namespace: openshift-adp spec: backupImages: false configuration: nodeAgent: enable: false uploaderType: restic velero: defaultPlugins: - openshift - aws noDefaultBackupLocation: true", "oc create -f dpa.yaml -n openshift-adp", "dataprotectionapplication.oadp.openshift.io/velero created", "oc get pods,dpa -n openshift-adp NAME READY STATUS RESTARTS AGE pod/openshift-adp-controller-manager-7b64b74fcd-msjbs 1/1 Running 0 5m30s pod/velero-694b5b8f5c-b4kwg 1/1 Running 0 3m31s NAME AGE dataprotectionapplication.oadp.openshift.io/velero 3m31s", "git clone https://github.com/red-hat-storage/ocm-ramen-samples.git", "cd ~/ocm-ramen-samples git branch * main", "ls workloads/deployment | egrep -v 'cephfs|k8s|base' odr-metro-rbd odr-regional-rbd", "oc new-project busybox-discovered", "oc apply -k workloads/deployment/odr-regional-rbd -n busybox-discovered persistentvolumeclaim/busybox-pvc created deployment.apps/busybox created", "oc get 
pods,pvc,deployment -n busybox-discovered", "NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 18s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 18s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/busybox 1/1 1 1 18", "oc get drpc {drpc_name} -o wide -n openshift-dr-ops", "oc get vrg {vrg_name} -n openshift-dr-ops", "oc get pods,pvc,volumereplication -n busybox-discovered NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 2m46s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 2m57s NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE volumereplication.replication.storage.openshift.io/busybox-pvc 2m45s rbd-volumereplicationclass-1625360775 busybox-pvc primary Primary", "oc get drpc {drpc_name} -n openshift-dr-ops -o jsonpath='{.status.progression}{\"\\n\"}' WaitOnUserToCleanUp", "cd ~/ocm-ramen-samples/ git branch * main oc delete -k workloads/deployment/odr-regional-rbd -n busybox-discovered persistentvolumeclaim \"busybox-pvc\" deleted deployment.apps \"busybox\" deleted", "oc get drpc {drpc_name} -n openshift-dr-ops -o jsonpath='{.status.progression}{\"\\n\"}' WaitOnUserToCleanUp", "cd ~/ocm-ramen-samples/ git branch * main oc delete -k workloads/deployment/odr-regional-rbd -n busybox-discovered persistentvolumeclaim \"busybox-pvc\" deleted deployment.apps \"busybox\" deleted", "oc get pods,pvc -n busybox-discovered NAME READY STATUS RESTARTS AGE pod/busybox-796fccbb95-qmxjf 1/1 Running 0 2m46s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-b20e4129-902d-47c7-b962-040ad64130c4 1Gi RWO ocs-storagecluster-ceph-rbd <unset> 2m57s NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE volumereplication.replication.storage.openshift.io/busybox-pvc 2m45s rbd-volumereplicationclass-1625360775 busybox-pvc primary Primary", "oc get drpc -n openshift-dr-ops", "oc delete {drpc_name} -n openshift-dr-ops", "oc get placements -n openshift-dr-ops", "oc delete placements {placement_name} -n openshift-dr-ops", "oc delete drcluster <drcluster_name> --wait=false", "oc edit placement <placement_name> -n <namespace>", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: annotations: cluster.open-cluster-management.io/experimental-scheduling-disable: \"true\" [...] 
spec: clusterSets: - submariner predicates: - requiredClusterSelector: claimSelector: {} labelSelector: matchExpressions: - key: name operator: In values: - cluster1 <-- Modify to be surviving cluster name [...]", "oc get vrg -n <application_namespace> -o jsonpath='{.items[0].spec.s3Profiles}' | jq", "oc edit placement {placement_name} -n {namespace}", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: annotations: ## Remove this annotation cluster.open-cluster-management.io/experimental-scheduling-disable: \"true\" [...]", "#!/bin/bash secrets=USD(oc get secrets -n openshift-operators | grep Opaque | cut -d\" \" -f1) echo USDsecrets for secret in USDsecrets do oc patch -n openshift-operators secret/USDsecret -p '{\"metadata\":{\"finalizers\":null}}' --type=merge done mirrorpeers=USD(oc get mirrorpeer -o name) echo USDmirrorpeers for mp in USDmirrorpeers do oc patch USDmp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDmp done drpolicies=USD(oc get drpolicy -o name) echo USDdrpolicies for drp in USDdrpolicies do oc patch USDdrp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDdrp done drclusters=USD(oc get drcluster -o name) echo USDdrclusters for drp in USDdrclusters do oc patch USDdrp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDdrp done delete project openshift-operators managedclusters=USD(oc get managedclusters -o name | cut -d\"/\" -f2) echo USDmanagedclusters for mc in USDmanagedclusters do secrets=USD(oc get secrets -n USDmc | grep multicluster.odf.openshift.io/secret-type | cut -d\" \" -f1) echo USDsecrets for secret in USDsecrets do set -x oc patch -n USDmc secret/USDsecret -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete -n USDmc secret/USDsecret done done delete clusterrolebinding spoke-clusterrole-bindings", "oc label namespace openshift-operators openshift.io/cluster-monitoring='true'", "oc get obc -n openshift-storage", "oc get cm -n openshift-storage rook-ceph-csi-mapping-config -o yaml", "apiVersion: v1 data: csi-mapping-config-json: '[{\"ClusterIDMapping\":{\"openshift-storage\":\"openshift-storage\"},\"RBDPoolIDMapping\":[{\"1\":\"1\"}]}]' kind: ConfigMap [...]", "oc get cm -n openshift-storage rook-ceph-csi-mapping-config -o yaml", "apiVersion: v1 data: csi-mapping-config-json: '[{\"ClusterIDMapping\":{\"openshift-storage\":\"openshift-storage\"},\"RBDPoolIDMapping\":[{\"3\":\"1\"}]}]' kind: ConfigMap [...]", "csi-mapping-config-json: '[{\"ClusterIDMapping\":{\"openshift-storage\":\"openshift-storage\"},\"RBDPoolIDMapping\":[{\"1\":\"1\"}]},{\"ClusterIDMapping\":{\"openshift-storage\":\"openshift-storage\"},\"RBDPoolIDMapping\":[{\"1\":\"3\"}]}]'", "oc label namespace openshift-operators openshift.io/cluster-monitoring='true'", "apiVersion: config.open-cluster-management.io/v1alpha1 kind: KlusterletConfig metadata: name: global spec: appliedManifestWorkEvictionGracePeriod: \"24h\"", "oc -n <restore-namespace> wait restore <restore-name> --for=jsonpath='{.status.phase}'=Finished --timeout=120s", "oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'", "Succeeded", "oc get drpc -o wide -A", "topology.kubernetes.io/zone=arbiter for Master0 topology.kubernetes.io/zone=datacenter1 for Master1, Worker1, Worker2 topology.kubernetes.io/zone=datacenter2 for Master2, Worker3, Worker4", "oc label node <NODENAME> topology.kubernetes.io/zone= <LABEL>", "oc get nodes -l topology.kubernetes.io/zone= <LABEL> -o name", "oc get nodes -L 
topology.kubernetes.io/zone", "oc annotate namespace openshift-storage openshift.io/node-selector=", "spec: arbiter: enable: true [..] nodeTopologies: arbiterLocation: arbiter #arbiter zone storageDeviceSets: - config: {} count: 1 [..] replica: 4 status: conditions: [..] failureDomain: zone", "oc new-project my-shared-storage", "oc new-app openshift/php:latest~https://github.com/mashetty330/openshift-php-upload-demo --name=file-uploader", "Found image 4f2dcc0 (9 days old) in image stream \"openshift/php\" under tag \"7.2-ubi8\" for \"openshift/php:7.2- ubi8\" Apache 2.4 with PHP 7.2 ----------------------- PHP 7.2 available as container is a base platform for building and running various PHP 7.2 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts. Tags: builder, php, php72, php-72 * A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be cr eated * The resulting image will be pushed to image stream tag \"file-uploader:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources imagestream.image.openshift.io \"file-uploader\" created buildconfig.build.openshift.io \"file-uploader\" created deployment.apps \"file-uploader\" created service \"file-uploader\" created --> Success Build scheduled, use 'oc logs -f buildconfig/file-uploader' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose service/file-uploader' Run 'oc status' to view your app.", "oc logs -f bc/file-uploader -n my-shared-storage", "Cloning \"https://github.com/christianh814/openshift-php-upload-demo\" [...] 
Generating dockerfile with builder image image-registry.openshift-image-regis try.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c 0e05b593844b41d5494ea STEP 1: FROM image-registry.openshift-image-registry.svc:5000/openshift/php@s ha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea STEP 2: LABEL \"io.openshift.build.commit.author\"=\"Christian Hernandez <christ [email protected]>\" \"io.openshift.build.commit.date\"=\"Sun Oct 1 1 7:15:09 2017 -0700\" \"io.openshift.build.commit.id\"=\"288eda3dff43b02f7f7 b6b6b6f93396ffdf34cb2\" \"io.openshift.build.commit.ref\"=\"master\" \" io.openshift.build.commit.message\"=\"trying to modularize\" \"io.openshift .build.source-location\"=\"https://github.com/christianh814/openshift-php-uploa d-demo\" \"io.openshift.build.image\"=\"image-registry.openshift-image-regi stry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610 c0e05b593844b41d5494ea\" STEP 3: ENV OPENSHIFT_BUILD_NAME=\"file-uploader-1\" OPENSHIFT_BUILD_NAMESP ACE=\"my-shared-storage\" OPENSHIFT_BUILD_SOURCE=\"https://github.com/christ ianh814/openshift-php-upload-demo\" OPENSHIFT_BUILD_COMMIT=\"288eda3dff43b0 2f7f7b6b6b6f93396ffdf34cb2\" STEP 4: USER root STEP 5: COPY upload/src /tmp/src STEP 6: RUN chown -R 1001:0 /tmp/src STEP 7: USER 1001 STEP 8: RUN /usr/libexec/s2i/assemble ---> Installing application source => sourcing 20-copy-config.sh ---> 17:24:39 Processing additional arbitrary httpd configuration provide d by s2i => sourcing 00-documentroot.conf => sourcing 50-mpm-tuning.conf => sourcing 40-ssl-certs.sh STEP 9: CMD /usr/libexec/s2i/run STEP 10: COMMIT temp.builder.openshift.io/my-shared-storage/file-uploader-1:3 b83e447 Getting image source signatures [...]", "oc expose svc/file-uploader -n my-shared-storage", "oc scale --replicas=4 deploy/file-uploader -n my-shared-storage", "oc get pods -o wide -n my-shared-storage", "oc set volume deploy/file-uploader --add --name=my-shared-storage -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs --mount-path=/opt/app-root/src/uploaded -n my-shared-storage", "oc get pvc -n my-shared-storage", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-shared-storage Bound pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7 10Gi RWX ocs-storagecluster-cephfs 52s", "oc get deployment file-uploader -o yaml -n my-shared-storage | less", "[...] spec: progressDeadlineSeconds: 600 replicas: 4 revisionHistoryLimit: 10 selector: matchLabels: deployment: file-uploader strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: deployment: file-uploader spec: # <-- Start inserted lines after here containers: # <-- End inserted lines before here - image: image-registry.openshift-image-registry.svc:5000/my-shared-storage/file-uploader@sha256:a458ea62f990e431ad7d5f84c89e2fa27bdebdd5e29c5418c70c56eb81f0a26b imagePullPolicy: IfNotPresent name: file-uploader [...]", "oc edit deployment file-uploader -n my-shared-storage", "[...] 
spec: topologySpreadConstraints: - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway nodeSelector: node-role.kubernetes.io/worker: \"\" containers: [...]", "deployment.apps/file-uploader edited", "oc scale deployment file-uploader --replicas=0 -n my-shared-storage", "deployment.apps/file-uploader scaled", "oc scale deployment file-uploader --replicas=4 -n my-shared-storage", "deployment.apps/file-uploader scaled", "oc get pods -o wide -n my-shared-storage | egrep '^file-uploader'| grep -v build | awk '{print USD7}' | sort | uniq -c", "1 perf1-mz8bt-worker-d2hdm 1 perf1-mz8bt-worker-k68rv 1 perf1-mz8bt-worker-ntkp8 1 perf1-mz8bt-worker-qpwsr", "oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master", "perf1-mz8bt-worker-d2hdm Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-k68rv Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-ntkp8 Ready worker 35d v1.20.0+5fbfd19 datacenter2 perf1-mz8bt-worker-qpwsr Ready worker 35d v1.20.0+5fbfd19 datacenter2", "oc get route file-uploader -n my-shared-storage -o jsonpath --template=\"http://{.spec.host}{'\\n'}\"", "http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com", "oc delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>", "oc label namespace openshift-operators openshift.io/cluster-monitoring='true'", "kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist namespace: open-cluster-management-observability data: metrics_list.yaml: | names: - ceph_rbd_mirror_snapshot_sync_bytes - ceph_rbd_mirror_snapshot_snapshots matches: - __name__=\"csv_succeeded\",exported_namespace=\"openshift-dr-system\",name=~\"odr-cluster-operator.*\" - __name__=\"kube_running_pod_ready\",namespace=~\"volsync-system|openshift-operators\",pod=~\"volsync.*\"", "oc apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml", "sum by (obj_name, obj_namespace, obj_type, job, policyname)(time() - (ramen_last_sync_timestamp_seconds > 0))", "ramen_sync_duration_seconds{job=\"ramen-hub-operator-metrics-service\"} / on(policyname, job) group_left() (ramen_policy_schedule_interval_seconds{job=\"ramen-hub-operator-metrics-service\"})", "count(kube_persistentvolumeclaim_info)", "alert: VolumeSynchronizationDelay expr: ramen_rpo_difference >= 3 for: 5s labels: severity: critical annotations: description: \"The syncing of volumes is exceeding three times the scheduled snapshot interval, or the volumes have been recently protected. (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }})\" alert_type: \"DisasterRecovery\"", "alert: VolumeSynchronizationDelay expr: ramen_rpo_difference > 2 and ramen_rpo_difference < 3 for: 5s labels: severity: warning annotations: description: \"The syncing of volumes is exceeding two times the scheduled snapshot interval, or the volumes have been recently protected. 
(DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }})\" alert_type: \"DisasterRecovery\"", "alert: WorkloadUnprotected expr: ramen_workload_protection_status == 0 for: 10m labels: severity: warning annotations: description: \"Workload is not protected for disaster recovery (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }}).\" alert_type: \"DisasterRecovery\"", "oc get pvc -n <namespace>", "oc delete pvc <pvcname> -n namespace", "oc get drpc -n <namespace> -o wide", "oc scale deploy -n openshift-gitops-operator openshift-gitops-operator-controller-manager --replicas=0 oc scale statefulset -n openshift-gitops openshift-gitops-application-controller --replicas=0", "rbd mirror pool status --verbose ocs-storagecluster-cephblockpool | grep 'leader:'", "oc logs -n busybox-workloads-3-2 volsync-rsync-src-dd-io-pvc-1-p25rz", "VolSync rsync container version: ACM-0.6.0-ce9a280 Syncing data to volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local:22 ssh: Could not resolve hostname volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local: Name or service not known", "oc delete pod -l app=submariner-lighthouse-agent -n submariner-operator", "oc get placementdecision -n openshift-gitops --selector cluster.open-cluster-management.io/placement=<placement-name>", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: PlacementDecision metadata: labels: cluster.open-cluster-management.io/decision-group-index: \"1\" # Typically one higher than the same value in the esisting PlacementDecision determined at step (2) cluster.open-cluster-management.io/decision-group-name: \"\" cluster.open-cluster-management.io/placement: cephfs-appset-busybox10-placement name: <placemen-name>-decision-<n> # <n> should be one higher than the existing PlacementDecision as determined in step (2) namespace: openshift-gitops", "decision-status.yaml: status: decisions: - clusterName: <managedcluster-name-to-clean-up> # This would be the cluster from where the workload was failed over, NOT the current workload cluster reason: FailoverCleanup", "oc patch placementdecision -n openshift-gitops <placemen-name>-decision-<n> --patch-file=decision-status.yaml --subresource=status --type=merge", "oc get application -n openshift-gitops <applicationset-name>-<managedcluster-name-to-clean-up>", "oc delete placementdecision -n openshift-gitops <placemen-name>-decision-<n>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/index
Preface
Preface Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance, you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port .
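A minimal sketch of that port-first sequence with the OpenStack CLI is shown below; all names in angle brackets are placeholders and the exact options can vary by release.
# Create the port on the target network.
openstack port create --network <network-name> <port-name>
# Apply the RBAC-shared security group to the port.
openstack port set --security-group <shared-security-group> <port-name>
# Boot the instance with the pre-created port attached.
openstack server create --flavor <flavor> --image <image> --nic port-id=<port-id> <instance-name>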
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/pr01
Storage
Storage Red Hat OpenShift Service on AWS 4 Configuring storage for Red Hat OpenShift Service on AWS clusters Red Hat OpenShift Documentation Team
[ "apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: \"2Gi\" 1 limits: ephemeral-storage: \"4Gi\" 2 volumeMounts: - name: ephemeral mountPath: \"/tmp\" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: \"2Gi\" limits: ephemeral-storage: \"4Gi\" volumeMounts: - name: ephemeral mountPath: \"/tmp\" volumes: - name: ephemeral emptyDir: {}", "df -h /var/lib", "Filesystem Size Used Avail Use% Mounted on /dev/disk/by-partuuid/4cd1448a-01 69G 32G 34G 49% /", "oc delete pv <pv-name>", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:", "oc get pv <pv-name> -o jsonpath='{.spec.claimRef.name}'", "oc get pv <pv-name> -o json | jq '.status.lastPhaseTransitionTime' 1", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3", "apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi", "apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3", "securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1", "cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF", 
"cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF", "cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF", "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF", "oc new-app mysql-persistent", "--> Deploying template \"openshift/mysql-persistent\" to project default", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s", "spec: driverConfig: driverType: '' logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal storageClassState: Unmanaged 1", "patch clustercsidriver USDDRIVERNAME --type=merge -p \"{\\\"spec\\\":{\\\"storageClassState\\\":\\\"USD{STATE}\\\"}}\" 1", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:DescribeAccessPoints\", \"elasticfilesystem:DescribeFileSystems\", \"elasticfilesystem:DescribeMountTargets\", \"ec2:DescribeAvailabilityZones\", \"elasticfilesystem:TagResource\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:CreateAccessPoint\" ], \"Resource\": \"*\", \"Condition\": { \"StringLike\": { \"aws:RequestTag/efs.csi.aws.com/cluster\": \"true\" } } }, { \"Effect\": \"Allow\", \"Action\": \"elasticfilesystem:DeleteAccessPoint\", \"Resource\": \"*\", \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/efs.csi.aws.com/cluster\": \"true\" } } } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::<your_aws_account_ID>:oidc-provider/<openshift_oidc_provider>\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<openshift_oidc_provider>:sub\": [ 2 \"system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator\", \"system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa\" ] } } } ] }", "aws sts get-caller-identity --query Account --output text", "rosa describe cluster -c USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}') -o yaml | awk '/oidc_endpoint_url/ {print USD2}' | cut -d '/' -f 3,4", "ROLE_ARN=USD(aws iam create-role --role-name \"<your_cluster_name>-aws-efs-csi-operator\" --assume-role-policy-document file://<your_trust_file_name>.json --query \"Role.Arn\" --output text); echo USDROLE_ARN", "POLICY_ARN=USD(aws iam create-policy --policy-name \"<your_cluster_name>-aws-efs-csi\" --policy-document file://<your_policy_file_name>.json --query 
'Policy.Arn' --output text); echo USDPOLICY_ARN", "aws iam attach-role-policy --role-name \"<your_cluster_name>-aws-efs-csi-operator\" --policy-arn USDPOLICY_ARN", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi", "apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3", "spec: driverConfig: driverType: AWS aws: efsVolumeMetrics: state: RecursiveWalk recursiveWalk: refreshPeriodMinutes: 100 fsRateLimit: 10", "oc edit clustercsidriver efs.csi.aws.com", "spec: driverConfig: driverType: AWS aws: efsVolumeMetrics: state: RecursiveWalk recursiveWalk: refreshPeriodMinutes: 100 fsRateLimit: 10", "oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created", "oc get clustercsidriver efs.csi.aws.com -o yaml", "oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition", "kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: \"/mnt/storage\" name: data command: [ \"sleep\", \"1000000\" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: \"gp2-csi\" resources: requests: storage: 1Gi", "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: 
storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/storage/index
Part VI. Monitoring Performance
Part VI. Monitoring Performance Developers profile programs to focus attention on the areas of the program that have the largest impact on performance. The types of data collected include what section of the program consumes the most processor time, and where memory is allocated. Profiling collects data from the actual program execution. Thus, the quality of the data collected is influenced by the actual tasks being performed by the program. The tasks performed during profiling should be representative of actual use; this ensures that problems arising from realistic use of the program are addressed during development. Red Hat Enterprise Linux includes a number of different tools ( Valgrind , OProfile , perf , and SystemTap ) to collect profiling data. Each tool is suitable for performing specific types of profile runs, as described in the following sections.
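To make the distinction between these tools concrete, the following is a minimal sketch of two typical profile runs; the program name ./myapp is a placeholder for your own binary, not something defined in this guide:
valgrind --tool=memcheck ./myapp     # instruments the run and reports memory errors and leaks
perf record -g ./myapp               # samples CPU cycles and call stacks while the program runs
perf report                          # summarizes which functions consumed the most processor time
Because both tools observe an actual execution, the results reflect only the work that ./myapp performed during the run, which is why the profiled task should be representative of real use.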
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/monitoring-performance
1.2.2. Creating a New Repository
1.2.2. Creating a New Repository A Subversion repository is a central place to store files and directories that are under revision control, as well as additional data such as a complete history of changes or information about who made those changes and when. A typical Subversion repository stores multiple projects in separate subdirectories. When publicly accessible, it allows several developers to create a working copy of any of the subdirectories, make changes, and share these changes with others by committing them back to the repository. Initializing an Empty Repository To create a new, empty Subversion repository in a directory of your choice, run the following command: svnadmin create path Note that path is an absolute or relative path to the directory in which you want to store the repository (for example, /var/svn/ ). If the directory does not exist, svnadmin create creates it for you. Example 1.4. Initializing a new Subversion repository To create an empty Subversion repository in the ~/svn/ directory, type: Importing Data to a Repository To put an existing project under revision control, run the following command: svn import local_path svn_repository / remote_path [ -m " commit message " ] Note that local_path is an absolute or relative path to the directory in which you keep the project (use . for the current working directory), svn_repository is a URL of the Subversion repository, and remote_path is the target directory in the Subversion repository (for example, project/trunk ). Example 1.5. Importing a project to a Subversion repository Imagine that the directory with your project has the following contents: Also imagine that you have an empty Subversion repository in ~/svn/ (in this example, /home/john/svn/ ). To import the project under project/trunk in this repository, type:
[ "~]USD svnadmin create svn", "~]USD ls myproject AUTHORS doc INSTALL LICENSE Makefile README src TODO", "~]USD svn import myproject file:///home/john/svn/project/trunk -m \"Initial import.\" Adding project/AUTHORS Adding project/doc Adding project/doc/index.html Adding project/INSTALL Adding project/src" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/sect-revision_control_systems-svn-new
Chapter 21. Recording and analyzing performance profiles with perf
Chapter 21. Recording and analyzing performance profiles with perf The perf tool allows you to record performance data and analyze it at a later time. Prerequisites You have the perf user space tool installed as described in Installing perf . 21.1. The purpose of perf record The perf record command samples performance data and stores it in a file, perf.data , which can be read and visualized with other perf commands. perf.data is generated in the current directory and can be accessed at a later time, possibly on a different machine. If you do not specify a command for perf record to sample, it records until you manually stop the process by pressing Ctrl+C . You can attach perf record to specific processes by passing the -p option followed by one or more process IDs. You can run perf record without root access; however, doing so only samples performance data in the user space. In the default mode, perf record uses CPU cycles as the sampling event and operates in per-thread mode with inherit mode enabled. 21.2. Recording a performance profile without root access You can use perf record without root access to sample and record performance data in the user-space only. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Sample and record the performance data: Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C . Additional resources perf-record(1) man page on your system 21.3. Recording a performance profile with root access You can use perf record with root access to sample and record performance data in both the user-space and the kernel-space simultaneously. Prerequisites You have the perf user space tool installed as described in Installing perf . You have root access. Procedure Sample and record the performance data: Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C . Additional resources perf-record(1) man page on your system 21.4. Recording a performance profile in per-CPU mode You can use perf record in per-CPU mode to sample and record performance data in both the user-space and the kernel-space simultaneously across all threads on a monitored CPU. By default, per-CPU mode monitors all online CPUs. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Sample and record the performance data: Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C . Additional resources perf-record(1) man page on your system 21.5. Capturing call graph data with perf record You can configure the perf record tool so that it records which function is calling other functions in the performance profile. This helps to identify a bottleneck if several processes are calling the same function. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Sample and record performance data with the --call-graph option: Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C . Replace method with one of the following unwinding methods: fp Uses the frame pointer method.
Depending on compiler optimization, such as with binaries built with the GCC option --fomit-frame-pointer , this may not be able to unwind the stack. dwarf Uses DWARF Call Frame Information to unwind the stack. lbr Uses the last branch record hardware on Intel processors. Additional resources perf-record(1) man page on your system 21.6. Analyzing perf.data with perf report You can use perf report to display and analyze a perf.data file. Prerequisites You have the perf user space tool installed as described in Installing perf . There is a perf.data file in the current directory. If the perf.data file was created with root access, you need to run perf report with root access too. Procedure Display the contents of the perf.data file for further analysis: This command displays output similar to the following: Additional resources perf-report(1) man page on your system 21.7. Interpretation of perf report output The table displayed by running the perf report command sorts the data into several columns: The 'Overhead' column Indicates what percentage of overall samples were collected in that particular function. The 'Command' column Tells you which process the samples were collected from. The 'Shared Object' column Displays the name of the ELF image where the samples come from (the name [kernel.kallsyms] is used when the samples come from the kernel). The 'Symbol' column Displays the function name or symbol. In default mode, the functions are sorted in descending order with those with the highest overhead displayed first. 21.8. Generating a perf.data file that is readable on a different device You can use the perf tool to record performance data into a perf.data file to be analyzed on a different device. Prerequisites You have the perf user space tool installed as described in Installing perf . The kernel debuginfo package is installed. For more information, see Getting debuginfo packages for an application or library using GDB. Procedure Capture performance data you are interested in investigating further: This example would generate a perf.data over the entire system for a period of seconds seconds as dictated by the use of the sleep command. It would also capture call graph data using the frame pointer method. Generate an archive file containing debug symbols of the recorded data: Verification Verify that the archive file has been generated in your current active directory: The output will display every file in your current directory that begins with perf.data . The archive file will be named either: or Additional resources Recording and analyzing performance profiles with perf Capturing call graph data with perf record 21.9. Analyzing a perf.data file that was created on a different device You can use the perf tool to analyze a perf.data file that was generated on a different device. Prerequisites You have the perf user space tool installed as described in Installing perf . A perf.data file and associated archive file generated on a different device are present on the current device being used. Procedure Copy both the perf.data file and the archive file into your current active directory. Extract the archive file into ~/.debug : Note The archive file might also be named perf.data.tar.gz . Open the perf.data file for further analysis: 21.10. Why perf displays some function names as raw function addresses For kernel functions, perf uses the information from the /proc/kallsyms file to map the samples to their respective function names or symbols. 
For functions executed in the user space, however, you might see raw function addresses because the binary is stripped. The debuginfo package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information turned on (the -g option in GCC) to display the function names or symbols in such a situation. Note It is not necessary to re-run the perf record command after installing the debuginfo associated with an executable. Simply re-run the perf report command. Additional Resources Enabling debugging with debugging information 21.11. Enabling debug and source repositories A standard installation of Red Hat Enterprise Linux does not enable the debug and source repositories. These repositories contain information needed to debug the system components and measure their performance. Procedure Enable the source and debug information package channels: The USD(uname -i) part is automatically replaced with a matching value for architecture of your system: Architecture name Value 64-bit Intel and AMD x86_64 64-bit ARM aarch64 IBM POWER ppc64le 64-bit IBM Z s390x 21.12. Getting debuginfo packages for an application or library using GDB Debugging information is required to debug code. For code that is installed from a package, the GNU Debugger (GDB) automatically recognizes missing debug information, resolves the package name and provides concrete advice on how to get the package. Prerequisites The application or library you want to debug must be installed on the system. GDB and the debuginfo-install tool must be installed on the system. For details, see Setting up to debug applications . Repositories providing debuginfo and debugsource packages must be configured and enabled on the system. For details, see Enabling debug and source repositories . Procedure Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run. Exit GDB: type q and confirm with Enter . Run the command suggested by GDB to install the required debuginfo packages: The dnf package management tool provides a summary of the changes, asks for confirmation and once you confirm, downloads and installs all the necessary files. In case GDB is not able to suggest the debuginfo package, follow the procedure described in Getting debuginfo packages for an application or library manually . Additional resources How can I download or install debuginfo packages for RHEL systems? (Red Hat Knowledgebase)
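As a summary of the workflow described in this chapter, the following sketch records a profile on one machine and prepares it for analysis on another; the program name ./myapp is a placeholder, and the archive may end in .tar.gz or .tar.bz2 depending on your system:
perf record --call-graph fp ./myapp      # record samples and call-graph data into perf.data
perf archive                             # bundle the debug symbols referenced by perf.data
# copy perf.data and the perf.data.tar.* archive to the analysis machine, then:
mkdir -p ~/.debug
tar xf perf.data.tar.bz2 -C ~/.debug
perf report                              # browse the samples sorted by overhead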
[ "perf record command", "perf record command", "perf record -a command", "perf record --call-graph method command", "perf report", "Samples: 2K of event 'cycles', Event count (approx.): 235462960 Overhead Command Shared Object Symbol 2.36% kswapd0 [kernel.kallsyms] [k] page_vma_mapped_walk 2.13% sssd_kcm libc-2.28.so [.] memset_avx2_erms 2.13% perf [kernel.kallsyms] [k] smp_call_function_single 1.53% gnome-shell libc-2.28.so [.] strcmp_avx2 1.17% gnome-shell libglib-2.0.so.0.5600.4 [.] g_hash_table_lookup 0.93% Xorg libc-2.28.so [.] memmove_avx_unaligned_erms 0.89% gnome-shell libgobject-2.0.so.0.5600.4 [.] g_object_unref 0.87% kswapd0 [kernel.kallsyms] [k] page_referenced_one 0.86% gnome-shell libc-2.28.so [.] memmove_avx_unaligned_erms 0.83% Xorg [kernel.kallsyms] [k] alloc_vmap_area 0.63% gnome-shell libglib-2.0.so.0.5600.4 [.] g_slice_alloc 0.53% gnome-shell libgirepository-1.0.so.1.0.0 [.] g_base_info_unref 0.53% gnome-shell ld-2.28.so [.] _dl_find_dso_for_object 0.49% kswapd0 [kernel.kallsyms] [k] vma_interval_tree_iter_next 0.48% gnome-shell libpthread-2.28.so [.] pthread_getspecific 0.47% gnome-shell libgirepository-1.0.so.1.0.0 [.] 0x0000000000013b1d 0.45% gnome-shell libglib-2.0.so.0.5600.4 [.] g_slice_free1 0.45% gnome-shell libgobject-2.0.so.0.5600.4 [.] g_type_check_instance_is_fundamentally_a 0.44% gnome-shell libc-2.28.so [.] malloc 0.41% swapper [kernel.kallsyms] [k] apic_timer_interrupt 0.40% gnome-shell ld-2.28.so [.] _dl_lookup_symbol_x 0.39% kswapd0 [kernel.kallsyms] [k] raw_callee_save___pv_queued_spin_unlock", "perf record -a --call-graph fp sleep seconds", "perf archive", "ls perf.data*", "perf.data.tar.gz", "perf.data.tar.bz2", "mkdir -p ~/.debug tar xf perf.data.tar.bz2 -C ~/.debug", "perf report", "subscription-manager repos --enable rhel-8-for-USD(uname -i)-baseos-debug-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-baseos-source-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-appstream-debug-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-appstream-source-rpms", "gdb -q /bin/ls Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64 (gdb)", "(gdb) q", "dnf debuginfo-install coreutils-8.30-6.el8.x86_64" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/recording-and-analyzing-performance-profiles-with-perf_monitoring-and-managing-system-status-and-performance
Chapter 14. Managing GNOME Shell extensions by using the command line
Chapter 14. Managing GNOME Shell extensions by using the command line The gnome-extensions utility is a command-line tool that allows you to manage GNOME Shell extensions from the terminal. It provides various commands to list, install, enable, disable, remove, and get information about extensions. Each GNOME Shell extension has a UUID (Universally Unique Identifier). You can find the UUID of an extension on its GNOME Shell Extensions website page. Procedure To list the installed GNOME Shell extensions, use: To install a GNOME Shell extension, use: To enable a GNOME Shell extension, use: To show information about a GNOME Shell extension, use: To disable a GNOME Shell extension, use: To remove a GNOME Shell extension, use: Replace <UUID> with the unique identifier assigned to the GNOME Shell extension you want to manage. Additional resources The gnome-extensions --help page.
[ "gnome-extensions list", "gnome-extensions install <UUID>", "gnome-extensions enable <UUID>", "gnome-extensions info <UUID>", "gnome-extensions disable <UUID>", "gnome-extensions uninstall <UUID>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/managing-gnome-shell-extensions-via-command-line_customizing-the-gnome-desktop-environment
Chapter 5. Postinstallation configuration
Chapter 5. Postinstallation configuration 5.1. Postinstallation configuration The following procedures are typically performed after OpenShift Virtualization is installed. You can configure the components that are relevant for your environment: Node placement rules for OpenShift Virtualization Operators, workloads, and controllers Network configuration : Installing the Kubernetes NMState and SR-IOV Operators Configuring a Linux bridge network for external access to virtual machines (VMs) Configuring a dedicated secondary network for live migration Configuring an SR-IOV network Enabling the creation of load balancer services by using the OpenShift Container Platform web console Storage configuration : Defining a default storage class for the Container Storage Interface (CSI) Configuring local storage by using the Hostpath Provisioner (HPP) 5.2. Specifying nodes for OpenShift Virtualization components The default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you can specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules. Note You can configure node placement rules for some components after installing OpenShift Virtualization, but virtual machines must not be present when you configure node placement rules for workloads. 5.2.1. About node placement rules for OpenShift Virtualization components You can use node placement rules for the following tasks: Deploy virtual machines only on nodes intended for virtualization workloads. Deploy Operators only on infrastructure nodes. Maintain separation between workloads. Depending on the object, you can use one or more of the following rule types: nodeSelector Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, not a requirement. If a rule is a preference, pods are still scheduled when the rule is not satisfied. tolerations Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint. 5.2.2. Applying node placement rules You can apply node placement rules by editing a Subscription , HyperConverged , or HostPathProvisioner object using the command line. Prerequisites The oc CLI tool is installed. You are logged in with cluster administrator permissions. Procedure Edit the object in your default editor by running the following command: USD oc edit <resource_type> <resource_name> -n {CNVNamespace} Save the file to apply the changes. 5.2.3. Node placement rule examples You can specify node placement rules for an OpenShift Virtualization component by editing a Subscription , HyperConverged , or HostPathProvisioner object. 5.2.3.1. Subscription object node placement rule examples To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation. Currently, you cannot configure node placement rules for the Subscription object by using the web console. The Subscription object does not support the affinity node placement rule.
Example Subscription object with nodeSelector rule apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: "stable" config: nodeSelector: example.io/example-infra-key: example-infra-value 1 1 OLM deploys the OpenShift Virtualization Operators on nodes labeled example.io/example-infra-key = example-infra-value . Example Subscription object with tolerations rule apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: "stable" config: tolerations: - key: "key" operator: "Equal" value: "virtualization" 1 effect: "NoSchedule" 1 OLM deploys OpenShift Virtualization Operators on nodes labeled key = virtualization:NoSchedule taint. Only pods with the matching tolerations are scheduled on these nodes. 5.2.3.2. HyperConverged object node placement rule example To specify the nodes where OpenShift Virtualization deploys its components, you can edit the nodePlacement object in the HyperConverged custom resource (CR) file that you create during OpenShift Virtualization installation. Example HyperConverged object with nodeSelector rule apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value 1 workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 2 1 Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value . 2 workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value . Example HyperConverged object with affinity rule apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value 1 workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key 2 operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 3 1 Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-value . 2 workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value . 3 Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled. Example HyperConverged object with tolerations rule apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: 1 - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" 1 Nodes reserved for OpenShift Virtualization components are labeled with the key = virtualization:NoSchedule taint. 
Only pods with matching tolerations are scheduled on reserved nodes. 5.2.3.3. HostPathProvisioner object node placement rule example You can edit the HostPathProvisioner object directly or by using the web console. Warning You must schedule the hostpath provisioner and the OpenShift Virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run. You cannot run virtual machines. After you deploy a virtual machine (VM) with the hostpath provisioner (HPP) storage class, you can remove the hostpath provisioner pod from the same node by using the node selector. However, you must first revert that change, at least for that specific node, and wait for the pod to run before trying to delete the VM. You can configure node placement rules by specifying nodeSelector , affinity , or tolerations for the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner. Example HostPathProvisioner object with nodeSelector rule apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: "</path/to/backing/directory>" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 1 1 Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value . 5.2.4. Additional resources Specifying nodes for virtual machines Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints 5.3. Postinstallation network configuration By default, OpenShift Virtualization is installed with a single, internal pod network. After you install OpenShift Virtualization, you can install networking Operators and configure additional networks. 5.3.1. Installing networking Operators You must install the Kubernetes NMState Operator to configure a Linux bridge network for live migration or external access to virtual machines (VMs). For installation instructions, see Installing the Kubernetes NMState Operator by using the web console . You can install the SR-IOV Operator to manage SR-IOV network devices and network attachments. For installation instructions, see Installing the SR-IOV Network Operator . You can add the About MetalLB and the MetalLB Operator to manage the lifecycle for an instance of MetalLB on your cluster. For installation instructions, see Installing the MetalLB Operator from the OperatorHub using the web console . 5.3.2. Configuring a Linux bridge network After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs). 5.3.2.1. Creating a Linux bridge NNCP You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network. Prerequisites You have installed the Kubernetes NMState Operator. Procedure Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8 1 Name of the policy. 2 Name of the interface. 
3 Optional: Human-readable description of the interface. 4 The type of interface. This example creates a bridge. 5 The requested state for the interface after creation. 6 Disables IPv4 in this example. 7 Disables STP in this example. 8 The node NIC to which the bridge is attached. 5.3.2.2. Creating a Linux bridge NAD by using the web console You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console. A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. Procedure In the web console, click Networking NetworkAttachmentDefinitions . Click Create Network Attachment Definition . Note The network attachment definition must be in the same namespace as the pod or virtual machine. Enter a unique Name and optional Description . Select CNV Linux bridge from the Network Type list. Enter the name of the bridge in the Bridge Name field. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. Click Create . steps Attaching a virtual machine (VM) to a Linux bridge network 5.3.3. Configuring a network for live migration After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 5.3.3.1. Configuring a dedicated secondary network for live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. Each node has at least two Network Interface Cards (NICs). The NICs for live migration are connected to the same VLAN. Procedure Create a NetworkAttachmentDefinition manifest according to the following example: Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 2 "mode": "bridge", "ipam": { "type": "whereabouts", 3 "range": "10.200.5.0/24" 4 } }' 1 Specify the name of the NetworkAttachmentDefinition object. 2 Specify the name of the NIC to be used for live migration. 3 Specify the name of the CNI plugin that provides the network for the NAD. 4 Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network. 
Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR: Example HyperConverged manifest apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 # ... 1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. USD oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 5.3.3.2. Selecting a dedicated network by using the web console You can select a dedicated network for live migration by using the OpenShift Container Platform web console. Prerequisites You configured a Multus network for live migration. You created a network attachment definition for the network. Procedure Navigate to Virtualization > Overview in the OpenShift Container Platform web console. Click the Settings tab and then click Live migration . Select the network from the Live migration network list. 5.3.4. Configuring an SR-IOV network After you install the SR-IOV Operator, you can configure an SR-IOV network. 5.3.4.1. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. Reboot only happens in the following cases: With Mellanox NICs ( mlx5 driver) a node reboot happens every time the number of virtual functions (VFs) increase on a physical function (PF). With Intel NICs, a reboot only happens if the kernel parameters do not include intel_iommu=on and iommu=pt . It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. 
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", "..."] 12 deviceType: vfio-pci 13 isRdma: false 14 1 Specify a name for the CR object. 2 Specify the namespace where the SR-IOV Operator is installed. 3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name. 4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. 5 Optional: Specify an integer value between 0 and 99 . A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99 . The default value is 99 . 6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. 7 Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device. 9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either 8086 or 15b3 . 10 Optional: Specify the device hex code of SR-IOV network device. The only allowed values are 158b , 1015 , 1017 . 11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device. 12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1 . 13 The vfio-pci driver type is required for virtual functions in OpenShift Virtualization. 14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false . The default value is false . Note If isRDMA flag is set to true , you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. 
To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' steps Attaching a virtual machine (VM) to an SR-IOV network 5.3.5. Enabling load balancer service creation by using the web console You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You have configured a load balancer for the cluster. You are logged in as a user with the cluster-admin role. You created a network attachment definition for the network. Procedure Navigate to Virtualization Overview . On the Settings tab, click Cluster . Expand General settings and SSH configuration . Set SSH over LoadBalancer service to on. 5.4. Postinstallation storage configuration The following storage configuration tasks are mandatory: You must configure a default storage class for your cluster. Otherwise, the cluster cannot receive automated boot source updates. You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class. Optional: You can configure local storage by using the hostpath provisioner (HPP). See the storage configuration overview for more options, including configuring the Containerized Data Importer (CDI), data volumes, and automatic boot source updates. 5.4.1. Configuring local storage by using the HPP When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP Operator creates the HPP provisioner. The HPP is a local storage provisioner designed for OpenShift Virtualization. To use the HPP, you must create an HPP custom resource (CR). Important HPP storage pools must not be in the same partition as the operating system. Otherwise, the storage pools might fill the operating system partition. If the operating system partition is full, performance can be effected or the node can become unstable or unusable. 5.4.1.1. Creating a storage class for the CSI driver with the storagePools stanza To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver. When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass value with volumeBindingMode parameter set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a pod is created using the PVC. 
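The storage class procedure that follows references a storage pool defined in the HPP custom resource. For orientation, a minimal sketch of such a CR with a storagePools stanza is shown here; the pool name my-storage-pool and the host path are assumptions chosen to match the storage class example, not values mandated by this guide:
cat << EOF | oc create -f -
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: my-storage-pool     # referenced by the storagePool parameter of the storage class
    path: /var/hpvolumes      # assumed host path; keep it off the operating system partition
  workload:
    nodeSelector:
      kubernetes.io/os: linux
EOF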
Procedure Create a storageclass_csi.yaml file to define the storage class: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3 1 The two possible reclaimPolicy values are Delete and Retain . If you do not specify a value, the default value is Delete . 2 The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. 3 Specify the name of the storage pool defined in the HPP CR. Save the file and exit. Create the StorageClass object by running the following command: USD oc create -f storageclass_csi.yaml 5.5. Configuring higher VM workload density You can increase the number of virtual machines (VMs) on nodes by overcommitting memory (RAM). Increasing VM workload density can be useful in the following situations: You have many similar workloads. You have underused workloads. Note Memory overcommitment can lower workload performance on a highly utilized system. 5.5.1. Using wasp-agent to increase VM workload density The wasp-agent component facilitates memory overcommitment by assigning swap resources to worker nodes. It also manages pod evictions when nodes are at risk due to high swap I/O traffic or high utilization. Important Swap resources can be only assigned to virtual machine workloads (VM pods) of the Burstable Quality of Service (QoS) class. VM pods of the Guaranteed QoS class and pods of any QoS class that do not belong to VMs cannot swap resources. For descriptions of QoS classes, see Configure Quality of Service for Pods (Kubernetes documentation). Using spec.domain.resources.requests.memory in the VM manifest disables the memory overcommit configuration. Use spec.domain.memory.guest instead. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged into the cluster with the cluster-admin role. A memory overcommit ratio is defined. The node belongs to a worker pool. Note The wasp-agent component deploys an Open Container Initiative (OCI) hook to enable swap usage for containers on the node level. The low-level nature requires the DaemonSet object to be privileged. Procedure Configure the kubelet service to permit swap usage: Create or edit a KubeletConfig file with the parameters shown in the following example: Example of a KubeletConfig file apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' # MCP #machine.openshift.io/cluster-api-machine-role: worker # machine #node-role.kubernetes.io/worker: '' # node kubeletConfig: failSwapOn: false Wait for the worker nodes to sync with the new configuration by running the following command: USD oc wait mcp worker --for condition=Updated=True --timeout=-1s Provision swap by creating a MachineConfig object. 
For example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 90-worker-swap spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Provision and enable swap ConditionFirstBoot=no ConditionPathExists=!/var/tmp/swapfile [Service] Type=oneshot Environment=SWAP_SIZE_MB=5000 ExecStart=/bin/sh -c "sudo dd if=/dev/zero of=/var/tmp/swapfile count=USD{SWAP_SIZE_MB} bs=1M && \ sudo chmod 600 /var/tmp/swapfile && \ sudo mkswap /var/tmp/swapfile && \ sudo swapon /var/tmp/swapfile && \ free -h" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: swap-provision.service - contents: | [Unit] Description=Restrict swap for system slice ConditionFirstBoot=no [Service] Type=oneshot ExecStart=/bin/sh -c "sudo systemctl set-property --runtime system.slice MemorySwapMax=0 IODeviceLatencyTargetSec=\"/ 50ms\"" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: cgroup-system-slice-config.service To have enough swap space for the worst-case scenario, make sure to have at least as much swap space provisioned as overcommitted RAM. Calculate the amount of swap space to be provisioned on a node by using the following formula: NODE_SWAP_SPACE = NODE_RAM * (MEMORY_OVER_COMMIT_PERCENT / 100% - 1) Example NODE_SWAP_SPACE = 16 GB * (150% / 100% - 1) = 16 GB * (1.5 - 1) = 16 GB * (0.5) = 8 GB Create a privileged service account by running the following commands: USD oc adm new-project wasp USD oc create sa -n wasp wasp USD oc create clusterrolebinding wasp --clusterrole=cluster-admin --serviceaccount=wasp:wasp USD oc adm policy add-scc-to-user -n wasp privileged -z wasp Wait for the worker nodes to sync with the new configuration by running the following command: USD oc wait mcp worker --for condition=Updated=True --timeout=-1s Determine the pull URL for the wasp agent image by running the following command: USD oc get csv -n openshift-cnv -l=operators.coreos.com/kubevirt-hyperconverged.openshift-cnv -ojson | jq '.items[0].spec.relatedImages[] | select(.name|test(".*wasp-agent.*")) | .image' Deploy wasp-agent by creating a DaemonSet object as shown in the following example: kind: DaemonSet apiVersion: apps/v1 metadata: name: wasp-agent namespace: wasp labels: app: wasp tier: node spec: selector: matchLabels: name: wasp template: metadata: annotations: description: >- Configures swap for workloads labels: name: wasp spec: containers: - env: - name: SWAP_UTILIZATION_THRESHOLD_FACTOR value: "0.8" - name: MAX_AVERAGE_SWAP_IN_PAGES_PER_SECOND value: "1000000000" - name: MAX_AVERAGE_SWAP_OUT_PAGES_PER_SECOND value: "1000000000" - name: AVERAGE_WINDOW_SIZE_SECONDS value: "30" - name: VERBOSITY value: "1" - name: FSROOT value: /host - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName image: >- quay.io/openshift-virtualization/wasp-agent:v4.17 1 imagePullPolicy: Always name: wasp-agent resources: requests: cpu: 100m memory: 50M securityContext: privileged: true volumeMounts: - mountPath: /host name: host - mountPath: /rootfs name: rootfs hostPID: true hostUsers: true priorityClassName: system-node-critical serviceAccountName: wasp terminationGracePeriodSeconds: 5 volumes: - hostPath: path: / name: host - hostPath: path: / name: rootfs updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 10% maxSurge: 0 1 Replace the image value with the image URL from the step. Deploy alerting rules by creating a PrometheusRule object. 
For example: apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: tier: node wasp.io: "" name: wasp-rules namespace: wasp spec: groups: - name: alerts.rules rules: - alert: NodeHighSwapActivity annotations: description: High swap activity detected at {{ USDlabels.instance }}. The rate of swap out and swap in exceeds 200 in both operations in the last minute. This could indicate memory pressure and may affect system performance. runbook_url: https://github.com/openshift-virtualization/wasp-agent/tree/main/docs/runbooks/NodeHighSwapActivity.md summary: High swap activity detected at {{ USDlabels.instance }}. expr: rate(node_vmstat_pswpout[1m]) > 200 and rate(node_vmstat_pswpin[1m]) > 200 for: 1m labels: kubernetes_operator_component: kubevirt kubernetes_operator_part_of: kubevirt operator_health_impact: warning severity: warning Add the cluster-monitoring label to the wasp namespace by running the following command: USD oc label namespace wasp openshift.io/cluster-monitoring="true" Enable memory overcommitment in OpenShift Virtualization by using the web console or the CLI. Web console In the OpenShift Container Platform web console, go to Virtualization Overview Settings General settings Memory density . Set Enable memory density to on. CLI Configure your OpenShift Virtualization to enable higher memory density and set the overcommit rate: USD oc -n openshift-cnv patch HyperConverged/kubevirt-hyperconverged --type='json' -p='[ \ { \ "op": "replace", \ "path": "/spec/higherWorkloadDensity/memoryOvercommitPercentage", \ "value": 150 \ } \ ]' Successful output hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched Verification To verify the deployment of wasp-agent , run the following command: USD oc rollout status ds wasp-agent -n wasp If the deployment is successful, the following message is displayed: Example output daemon set "wasp-agent" successfully rolled out To verify that swap is correctly provisioned, complete the following steps: View a list of worker nodes by running the following command: USD oc get nodes -l node-role.kubernetes.io/worker Select a node from the list and display its memory usage by running the following command: USD oc debug node/<selected_node> -- free -m 1 1 Replace <selected_node> with the node name. If swap is provisioned, an amount greater than zero is displayed in the Swap: row. Table 5.1. Example output total used free shared buff/cache available Mem: 31846 23155 1044 6014 14483 8690 Swap: 8191 2337 5854 Verify the OpenShift Virtualization memory overcommitment configuration by running the following command: USD oc -n openshift-cnv get HyperConverged/kubevirt-hyperconverged -o jsonpath='{.spec.higherWorkloadDensity}{"\n"}' Example output {"memoryOvercommitPercentage":150} The returned value must match the value you had previously configured. 5.5.2. Pod eviction conditions used by wasp-agent The wasp agent manages pod eviction when the system is heavily loaded and nodes are at risk. Eviction is triggered if one of the following conditions is met: High swap I/O traffic This condition is met when swap-related I/O traffic is excessively high. Condition averageSwapInPerSecond > maxAverageSwapInPagesPerSecond && averageSwapOutPerSecond > maxAverageSwapOutPagesPerSecond By default, maxAverageSwapInPagesPerSecond and maxAverageSwapOutPagesPerSecond are set to 1000 pages. The default time interval for calculating the average is 30 seconds. 
High swap utilization This condition is met when swap utilization is excessively high, causing the current virtual memory usage to exceed the factored threshold. The NODE_SWAP_SPACE setting in your MachineConfig object can impact this condition. Condition nodeWorkingSet + nodeSwapUsage < totalNodeMemory + totalSwapMemory x thresholdFactor 5.5.2.1. Environment variables You can use the following environment variables to adjust the values used to calculate eviction conditions: Environment variable Function MAX_AVERAGE_SWAP_IN_PAGES_PER_SECOND Sets the value of maxAverageSwapInPagesPerSecond . MAX_AVERAGE_SWAP_OUT_PAGES_PER_SECOND Sets the value of maxAverageSwapOutPagesPerSecond . SWAP_UTILIZATION_THRESHOLD_FACTOR Sets the thresholdFactor value used to calculate high swap utilization. AVERAGE_WINDOW_SIZE_SECONDS Sets the time interval for calculating the average swap usage.
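A minimal sketch of adjusting these thresholds on a running deployment is shown below; it assumes the wasp-agent DaemonSet and the wasp namespace from the procedure above, and the values 0.9 and 60 are illustrative assumptions rather than recommended settings:
oc -n wasp set env daemonset/wasp-agent SWAP_UTILIZATION_THRESHOLD_FACTOR=0.9 AVERAGE_WINDOW_SIZE_SECONDS=60
oc -n wasp rollout status daemonset/wasp-agent     # wait for the agent pods to restart with the new values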
[ "oc edit <resource_type> <resource_name> -n {CNVNamespace}", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" 1 effect: \"NoSchedule\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value 1 workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 2", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value 1 workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key 2 operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 3", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 1", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", 
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' # MCP #machine.openshift.io/cluster-api-machine-role: worker # machine #node-role.kubernetes.io/worker: '' # node kubeletConfig: failSwapOn: false", "oc wait mcp worker --for condition=Updated=True --timeout=-1s", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 90-worker-swap spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Provision and enable swap ConditionFirstBoot=no ConditionPathExists=!/var/tmp/swapfile [Service] Type=oneshot Environment=SWAP_SIZE_MB=5000 ExecStart=/bin/sh -c \"sudo dd if=/dev/zero of=/var/tmp/swapfile count=USD{SWAP_SIZE_MB} bs=1M && sudo chmod 600 /var/tmp/swapfile && sudo mkswap /var/tmp/swapfile && sudo swapon /var/tmp/swapfile && free -h\" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: swap-provision.service - contents: | [Unit] Description=Restrict swap for system slice ConditionFirstBoot=no [Service] Type=oneshot ExecStart=/bin/sh -c \"sudo systemctl set-property --runtime system.slice MemorySwapMax=0 IODeviceLatencyTargetSec=\\\"/ 50ms\\\"\" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: cgroup-system-slice-config.service", "NODE_SWAP_SPACE = NODE_RAM * (MEMORY_OVER_COMMIT_PERCENT / 100% - 1)", "NODE_SWAP_SPACE = 16 GB * (150% / 100% - 1) = 16 GB * (1.5 - 1) = 16 GB * (0.5) = 8 GB", "oc adm new-project wasp", "oc create sa -n wasp wasp", "oc create clusterrolebinding wasp --clusterrole=cluster-admin --serviceaccount=wasp:wasp", "oc adm policy add-scc-to-user -n wasp privileged -z wasp", "oc wait mcp worker --for condition=Updated=True --timeout=-1s", "oc get csv -n openshift-cnv -l=operators.coreos.com/kubevirt-hyperconverged.openshift-cnv -ojson | jq '.items[0].spec.relatedImages[] | select(.name|test(\".*wasp-agent.*\")) | .image'", "kind: DaemonSet apiVersion: apps/v1 metadata: name: wasp-agent namespace: wasp labels: app: wasp tier: node spec: selector: matchLabels: name: wasp template: metadata: annotations: description: >- Configures swap for workloads labels: name: wasp spec: containers: - env: - name: SWAP_UTILIZATION_THRESHOLD_FACTOR value: \"0.8\" - name: MAX_AVERAGE_SWAP_IN_PAGES_PER_SECOND value: \"1000000000\" - name: MAX_AVERAGE_SWAP_OUT_PAGES_PER_SECOND value: \"1000000000\" - name: AVERAGE_WINDOW_SIZE_SECONDS value: \"30\" - name: VERBOSITY 
value: \"1\" - name: FSROOT value: /host - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName image: >- quay.io/openshift-virtualization/wasp-agent:v4.17 1 imagePullPolicy: Always name: wasp-agent resources: requests: cpu: 100m memory: 50M securityContext: privileged: true volumeMounts: - mountPath: /host name: host - mountPath: /rootfs name: rootfs hostPID: true hostUsers: true priorityClassName: system-node-critical serviceAccountName: wasp terminationGracePeriodSeconds: 5 volumes: - hostPath: path: / name: host - hostPath: path: / name: rootfs updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 10% maxSurge: 0", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: tier: node wasp.io: \"\" name: wasp-rules namespace: wasp spec: groups: - name: alerts.rules rules: - alert: NodeHighSwapActivity annotations: description: High swap activity detected at {{ USDlabels.instance }}. The rate of swap out and swap in exceeds 200 in both operations in the last minute. This could indicate memory pressure and may affect system performance. runbook_url: https://github.com/openshift-virtualization/wasp-agent/tree/main/docs/runbooks/NodeHighSwapActivity.md summary: High swap activity detected at {{ USDlabels.instance }}. expr: rate(node_vmstat_pswpout[1m]) > 200 and rate(node_vmstat_pswpin[1m]) > 200 for: 1m labels: kubernetes_operator_component: kubevirt kubernetes_operator_part_of: kubevirt operator_health_impact: warning severity: warning", "oc label namespace wasp openshift.io/cluster-monitoring=\"true\"", "oc -n openshift-cnv patch HyperConverged/kubevirt-hyperconverged --type='json' -p='[ { \"op\": \"replace\", \"path\": \"/spec/higherWorkloadDensity/memoryOvercommitPercentage\", \"value\": 150 } ]'", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc rollout status ds wasp-agent -n wasp", "daemon set \"wasp-agent\" successfully rolled out", "oc get nodes -l node-role.kubernetes.io/worker", "oc debug node/<selected_node> -- free -m 1", "oc -n openshift-cnv get HyperConverged/kubevirt-hyperconverged -o jsonpath='{.spec.higherWorkloadDensity}{\"\\n\"}'", "{\"memoryOvercommitPercentage\":150}", "averageSwapInPerSecond > maxAverageSwapInPagesPerSecond && averageSwapOutPerSecond > maxAverageSwapOutPagesPerSecond", "nodeWorkingSet + nodeSwapUsage < totalNodeMemory + totalSwapMemory x thresholdFactor" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/virtualization/postinstallation-configuration
14.4.2. Using the scp Utility
14.4.2. Using the scp Utility scp can be used to transfer files between machines over a secure, encrypted connection. In its design, it is very similar to rcp . To transfer a local file to a remote system, use a command in the following form: For example, if you want to transfer taglist.vim to a remote machine named penguin.example.com , type the following at a shell prompt: Multiple files can be specified at once. To transfer the contents of .vim/plugin/ to the same directory on the remote machine penguin.example.com , type the following command: To transfer a remote file to the local system, use the following syntax: For instance, to download the .vimrc configuration file from the remote machine, type:
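Two further invocations may also be useful in practice; both rely only on standard scp options, and the port number shown is an arbitrary example. The -r option copies a directory and its contents recursively, and -P selects a non-default SSH port on the remote machine:

scp -r .vim/plugin [email protected]:.vim/
scp -P 2222 taglist.vim [email protected]:.vim/plugin/taglist.vim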
[ "scp localfile username @ hostname : remotefile", "~]USD scp taglist.vim [email protected]:.vim/plugin/taglist.vim [email protected]'s password: taglist.vim 100% 144KB 144.5KB/s 00:00", "~]USD scp .vim/plugin/* [email protected]:.vim/plugin/ [email protected]'s password: closetag.vim 100% 13KB 12.6KB/s 00:00 snippetsEmu.vim 100% 33KB 33.1KB/s 00:00 taglist.vim 100% 144KB 144.5KB/s 00:00", "scp username @ hostname : remotefile localfile", "~]USD scp [email protected]:.vimrc .vimrc [email protected]'s password: .vimrc 100% 2233 2.2KB/s 00:00" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-ssh-clients-scp
5.346. vino
5.346. vino 5.346.1. RHSA-2013:0169 - Moderate: vino security update An updated vino package that fixes several security issues is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Vino is a Virtual Network Computing (VNC) server for GNOME. It allows remote users to connect to a running GNOME session using VNC. Security Fixes CVE-2012-4429 It was found that Vino transmitted all clipboard activity on the system running Vino to all clients connected to port 5900, even those who had not authenticated. A remote attacker who is able to access port 5900 on a system running Vino could use this flaw to read clipboard data without authenticating. CVE-2011-0904 , CVE-2011-0905 Two out-of-bounds memory read flaws were found in the way Vino processed client framebuffer requests in certain encodings. An authenticated client could use these flaws to send a specially-crafted request to Vino, causing it to crash. CVE-2011-1164 In certain circumstances, the vino-preferences dialog box incorrectly indicated that Vino was only accessible from the local network. This could confuse a user into believing connections from external networks are not allowed (even when they are allowed). With this update, vino-preferences no longer displays connectivity and reachable information. CVE-2011-1165 There was no warning that Universal Plug and Play (UPnP) was used to open ports on a user's network router when the "Configure network automatically to accept connections" option was enabled (it is disabled by default) in the Vino preferences. This update changes the option's description to avoid the risk of a UPnP router configuration change without the user's consent. All Vino users should upgrade to this updated package, which contains backported patches to resolve these issues. The GNOME session must be restarted (log out, then log back in) for this update to take effect.
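As a generic illustration (not part of the advisory text), the update can be applied and checked on an affected system with standard yum and rpm commands; the changelog query is one way to check whether the installed build references the CVE fixes listed above:

yum update vino
rpm -q vino
rpm -q --changelog vino | grep CVE-2012-4429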
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/vino
Eclipse Temurin 8.0.352 release notes
Eclipse Temurin 8.0.352 release notes Red Hat build of OpenJDK 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.352_release_notes/index
Chapter 2. Shutting down the cluster gracefully
Chapter 2. Shutting down the cluster gracefully This document describes the process to gracefully shut down your cluster. You might need to temporarily shut down your cluster for maintenance reasons, or to save on resource costs. 2.1. Prerequisites Take an etcd backup prior to shutting down the cluster. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues when restarting the cluster. For example, the following conditions can cause the restarted cluster to malfunction: etcd data corruption during shutdown Node failure due to hardware Network connectivity issues If your cluster fails to recover, follow the steps to restore to a previous cluster state . 2.2. Shutting down the cluster You can shut down your cluster in a graceful manner so that it can be restarted at a later date. Note You can shut down a cluster for up to a year from the installation date and expect it to restart gracefully. After a year from the installation date, the cluster certificates expire. However, you might need to manually approve the pending certificate signing requests (CSRs) to recover kubelet certificates when the cluster restarts. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Procedure If you are shutting the cluster down for an extended period, determine the date on which certificates expire and run the following command: USD oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\.openshift\.io/certificate-not-after}' Example output 2022-08-05T14:37:50Z 1 1 To ensure that the cluster can restart gracefully, plan to restart it on or before the specified date. As the cluster restarts, the process might require you to manually approve the pending certificate signing requests (CSRs) to recover kubelet certificates. Mark all the nodes in the cluster as unschedulable. You can do this from your cloud provider's web console, or by running the following loop: USD for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm cordon USD{node} ; done Example output ci-ln-mgdnf4b-72292-n547t-master-0 node/ci-ln-mgdnf4b-72292-n547t-master-0 cordoned ci-ln-mgdnf4b-72292-n547t-master-1 node/ci-ln-mgdnf4b-72292-n547t-master-1 cordoned ci-ln-mgdnf4b-72292-n547t-master-2 node/ci-ln-mgdnf4b-72292-n547t-master-2 cordoned ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl node/ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl cordoned ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k node/ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k cordoned ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn node/ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn cordoned Evacuate the pods using the following method: USD for node in USD(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm drain USD{node} --delete-emptydir-data --ignore-daemonsets=true --timeout=15s --force ; done Shut down all of the nodes in the cluster. You can do this from your cloud provider's web console, or by running the following loop. Shutting down the nodes by using one of these methods allows pods to terminate gracefully, which reduces the chance for data corruption. Note Ensure that the control plane node with the API VIP assigned is the last node processed in the loop. Otherwise, the shutdown command fails.
USD for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1 1 -h 1 indicates how long, in minutes, this process lasts before the control plane nodes are shut down. For large-scale clusters with 10 nodes or more, set to -h 10 or longer to make sure all the compute nodes have time to shut down first. Example output Starting pod/ip-10-0-130-169us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod ... Starting pod/ip-10-0-150-116us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel. Note It is not necessary to drain control plane nodes of the standard pods that ship with OpenShift Container Platform prior to shutdown. Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained control plane nodes prior to shutdown because of custom workloads, you must mark the control plane nodes as schedulable before the cluster will be functional again after restart. Shut off any cluster dependencies that are no longer needed, such as external storage or an LDAP server. Be sure to consult your vendor's documentation before doing so. Important If you deployed your cluster on a cloud-provider platform, do not shut down, suspend, or delete the associated cloud resources. If you delete the cloud resources of a suspended virtual machine, OpenShift Container Platform might not restore successfully. 2.3. Additional resources Restarting the cluster gracefully
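Before powering anything off, it can be worth confirming that every node was cordoned and that the planned restart date falls before the certificate expiry reported earlier in this procedure. The following sketch reuses only the oc queries already shown above; the planned restart date is an assumed example value:

# Any node listed here (other than the header) is still schedulable and was not cordoned.
oc get nodes | grep -v SchedulingDisabled

# Compare the kube-apiserver-to-kubelet-signer expiry with the planned restart date.
EXPIRY=$(oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer \
  -o jsonpath='{.metadata.annotations.auth\.openshift\.io/certificate-not-after}')
PLANNED_RESTART="2022-07-01T00:00:00Z"   # assumed example date
if [ "$(date -d "$PLANNED_RESTART" +%s)" -lt "$(date -d "$EXPIRY" +%s)" ]; then
  echo "Planned restart is before certificate expiry."
else
  echo "WARNING: planned restart is after certificate expiry; expect manual CSR approval."
fi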
[ "oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\\.openshift\\.io/certificate-not-after}'", "2022-08-05T14:37:50Zuser@user:~ USD 1", "for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm cordon USD{node} ; done", "ci-ln-mgdnf4b-72292-n547t-master-0 node/ci-ln-mgdnf4b-72292-n547t-master-0 cordoned ci-ln-mgdnf4b-72292-n547t-master-1 node/ci-ln-mgdnf4b-72292-n547t-master-1 cordoned ci-ln-mgdnf4b-72292-n547t-master-2 node/ci-ln-mgdnf4b-72292-n547t-master-2 cordoned ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl node/ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl cordoned ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k node/ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k cordoned ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn node/ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn cordoned", "for node in USD(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm drain USD{node} --delete-emptydir-data --ignore-daemonsets=true --timeout=15s --force ; done", "for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1", "Starting pod/ip-10-0-130-169us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod Starting pod/ip-10-0-150-116us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/backup_and_restore/graceful-shutdown-cluster
Preface
Preface Red Hat Enterprise Linux (RHEL) minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 6.10 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release, as well as known problems. The Technical Notes document provides a list of notable bug fixes, all currently available Technology Previews, deprecated functionality, and other information. Capabilities and limits of Red Hat Enterprise Linux 6 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . Packages distributed with this release are listed in Red Hat Enterprise Linux 6 Package Manifest . Migration to Red Hat Enterprise Linux 7 is documented in the Migration Planning Guide. For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_release_notes/pref-red_hat_enterprise_linux-6.10_release_notes-preface
Chapter 4. Deploying the DCN control plane
Chapter 4. Deploying the DCN control plane To deploy Red Hat OpenStack Services on OpenShift (RHOSO) with a distributed compute node (DCN) architecture, you start with an installation of Red Hat OpenShift Container Platform (RHOCP) and install the RHOSO control plane. For details regarding preparation for the RHOSO control plane, see Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift . Before you deploy the control plane, you must also configure the RHOCP networks. A DCN deployment of RHOSO requires the management of a larger number of subnets. The subnets that you use are specific to your environment. This document uses the following configuration in each of its examples. Central location (AZ-0) AZ-1 AZ-2 Control plane 192.168.122.0/24 192.168.133.0/24 192.168.144.0/24 External 10.0.0.0/24 10.0.0.0/24 10.0.0.0/24 Internal 172.17.0.0/24 172.17.10.0/24 172.17.20.0/24 Storage 172.18.0.0/24 172.18.10.0/24 172.18.20.0/24 Tenant 172.19.0.0/24 172.19.10.0/24 172.19.20.0/24 Storage Management 172.20.0.0/24 172.20.10.0/24 172.20.20.0/24 4.1. Creating a spine-leaf network topology for DCN The geographically distributed nodes in a distributed compute node (DCN) architecture are interconnected using a routed spine-leaf network topology. You must configure the following CRs: NodeNetworkConfigurationPolicy Use the NodeNetworkConfigurationPolicy CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster. NetworkAttachmentDefinition Use the NetworkAttachmentDefinition CR to attach service pods to the isolated networks, where needed. L2Advertisement Use the L2Advertisement resource to define how the Virtual IPs (VIPs) are announced. IPAddressPool Use the IPAddressPool resource to configure which IPs can be used as VIPs. NetConfig Use the NetConfig CR to specify the subnets for the data plane networks. OpenStackControlPlane Use the OpenStackControlPlane CR to define and configure OpenStack services on OpenShift. 4.2. Preparing DCN networking When you deploy a distributed compute node architecture, you deploy the central location control plane first. The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. Prerequisites The OpenStack Operator is installed Procedure Create a NodeNetworkConfigurationPolicy ( nncp ) CR definition file on your workstation for each worker node in the RHOCP cluster that hosts OpenStack services. In each nncp CR file, configure the interfaces for each isolated network. Each service interface must have its own unique address: Add the route-rules attribute and the route configuration for the networks in each remote location to each nncp CR file: Note Each service network routes to the same network at each remote location. For example, the internalapi network (172.17.0.0/24) has a route to the internalapi network at each remote location (172.17.10.0/24 and 172.17.20.0/24) through a local router at 172.17.0.1. Create the nncp CRs in the cluster: Create a NetworkAttachmentDefinition CR definition file for each network. Include routes to the networks of the same function at each remote location. For example, the internalapi NetworkAttachmentDefinition specifies its own subnet range as well as routes to the internalapi networks at remote sites.
Create a NetworkAttachmentDefinition CR definition file for the internalapi network: Create a NetworkAttachmentDefinition CR definition file for the control network: Create a NetworkAttachmentDefinition CR definition file for the storage network: Create a NetworkAttachmentDefinition CR definition file for the tenant network: Create the NetworkAttachmentDefinition CRs: Create a NetConfig CR definition file to specify the subnets for the data plane networks. Each network is defined under the dnsDomain field, with allocationRanges for each geographic region. These ranges cannot overlap with the whereabouts IPAM range. Create the file with the added allocation ranges for the control plane networking similar to the following: Add an allocation range for the internalapi network: Add an allocation range for the external network: Add an allocation range for the tenant network: Add an allocation range for the storagemgmt network: Create the NetConfig CR: 4.3. Creating the DCN control plane The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload. Prerequisites The OpenStack Operator ( openstack-operator ) is installed. The RHOCP cluster is prepared for RHOSO networks. The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack ). Use the following command to check the existing network policies on the cluster: You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. Procedure Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR: Use the spec field to specify the Secret CR you create to provide secure access to your pod, and the storageClass you create for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end: Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. Add service configurations. Include service configurations for all required services: Block Storage service (cinder): Note In RHOSO 18.0.3, you must set the uniquePodNames field to a value of false to allow for the propagation of Secrets. For more information see OSPRH-11240 . Note Set the replicas field to a value of 0 . The replica count is changed and additional cinderVolume services are added after storage is configured. Set the storage_availability_zone field in the template section to az0 . All Block Storage service (cinder) pods inherit this value, such as cinderBackup , cinderVolume , and so on. You can override this AZ for the cinderVolume service by specifying the backend_availability_zone . Compute service (nova): DNS service for the data plane: Galera Identity service (keystone) Image service (glance): Note You must initially set the replicas field to a value of 0 . The replica count is changed and additional glanceAPI services are added after storage is configured. Key Management service (barbican): Memcached Networking service (neutron): 1 Set the network_scheduler_driver to a value of neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler if a DHCP agent is deployed.
OVN Placement service (placement) RabbitMQ Create the control plane: Additional resources Providing secure access to the Red Hat OpenStack Services on OpenShift services Creating a storage class Creating the control plane 4.4. Ceph secret key site distribution When you deploy a distributed compute node (DCN) environment with multiple Ceph backends, you create a Ceph key for each backend. For security purposes, distribute the least number of Ceph keys to each site. Create one secret for each location. Add the key for each Ceph backend to the secret for the default location. Add the key for the default Ceph backend, as well as the key for the local Ceph backend, for each additional location. For three locations, az0 , az1 , and az2 , you must have three secrets. Locations az1 and az2 each have keys for the local backend as well as the keys for az0 . Location az0 contains all Ceph backend keys. Procedure You create the required secrets after Ceph has been deployed at each edge location, and the keyring and configuration file for each has been collected. Alternatively, you can deploy each Ceph backend as needed, and update secrets with each edge deployment. Create a secret for location az0 . If you have already deployed Red Hat Ceph Storage (RHCS) at all edge sites which require storage, create a secret for az0 which contains all keyrings and conf files: If you have not deployed RHCS at all edge sites, create a secret for az0 which contains the keyring and conf file for az0 : When you deploy RHCS at the edge location at availability zone 1 ( az1 ), create a secret for location az1 which contains keyrings and conf files for the local backend, and the default backend: If needed, update the secret for the central location: When you deploy RHCS at the edge location at availability zone 2 ( az2 ), create a secret for location az2 which contains keyrings and conf files for the local backend, and the default backend: If needed, update the secret for the central location: [Optional] When you have finished creating the necessary keys, you can verify that they show up in the openstack namespace: Example output When you create an OpenStackDataPlaneNodeSet , use the appropriate key under the extraMounts field: When you create a data plane NodeSet, you must also update the OpenStackControlPlane custom resource (CR) with the secret name: Note If the CinderBackup service is a part of the deployment, then you must include it in the propagation list because it does not have the availability zone in its pod name. When you update the glanceAPIs field in the OpenStackControlPlane CR, the Image service (glance) pod names match the extraMounts propagation instances: When you update the cinderVolumes field in the OpenStackControlPlane CR, the Block Storage service (cinder) pod names must also match the extraMounts propagation instances:
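Once the secrets exist, it can also be useful to confirm that each availability zone secret carries only the keyrings and conf files intended for that site. The following sketch is not part of the product documentation; it assumes the secret names used in the examples above and requires jq on the workstation:

for az_secret in ceph-conf-az-0 ceph-conf-az-1 ceph-conf-az-2; do
  echo "== ${az_secret} =="
  oc get secret "${az_secret}" -n openstack -o json | jq -r '.data | keys[]'
done

In the layout described above, ceph-conf-az-1 and ceph-conf-az-2 should each list only their local files plus the az0 files, while ceph-conf-az-0 lists every keyring and conf file.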
[ "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: labels: osp/nncm-config-type: standard name: worker-0 namespace: openstack spec: desiredState: dns-resolver: config: search: [] server: - 192.168.122.1 interfaces: - description: internalapi vlan interface ipv4: address: - ip: 172.17.0.10 prefix-length: \"24\" dhcp: false enabled: true ipv6: enabled: false mtu: 1496 name: internalapi state: up type: vlan vlan: base-iface: enp7s0 id: \"20\" - description: storage vlan interface ipv4: address: - ip: 172.18.0.10 prefix-length: \"24\" dhcp: false enabled: true ipv6: enabled: false mtu: 1496 name: storage state: up type: vlan vlan: base-iface: enp7s0 id: \"21\" - description: tenant vlan interface ipv4: address: - ip: 172.19.0.10 prefix-length: \"24\" dhcp: false enabled: true ipv6: enabled: false mtu: 1496 name: tenant state: up type: vlan vlan: base-iface: enp7s0 id: \"22\" - description: ctlplane interface mtu: 1500 name: enp7s0 state: up type: ethernet - bridge: options: stp: enabled: false port: - name: enp7s0 vlan: {} description: linux-bridge over ctlplane interface ipv4: address: - ip: 192.168.122.10 prefix-length: \"24\" dhcp: false enabled: true ipv6: enabled: false mtu: 1500 name: ospbr state: up type: linux-bridge", "route-rules: config: [] routes: config: - destination: 192.168.133.0/24 next-hop-address: 192.168.122.1 next-hop-interface: ospbr table-id: 254 - destination: 192.168.144.0/24 next-hop-address: 192.168.122.1 next-hop-interface: ospbr table-id: 254 - destination: 172.17.10.0/24 next-hop-address: 172.17.0.1 next-hop-interface: internalapi table-id: 254 - destination: 172.18.10.0/24 next-hop-address: 172.18.0.1 next-hop-interface: storage table-id: 254 - destination: 172.19.10.0/24 next-hop-address: 172.19.0.1 next-hop-interface: tenant table-id: 254 - destination: 172.17.20.0/24 next-hop-address: 172.17.0.1 next-hop-interface: internalapi table-id: 254 - destination: 172.18.20.0/24 next-hop-address: 172.18.0.1 next-hop-interface: storage table-id: 254 - destination: 172.19.20.0/24 next-hop-address: 172.19.0.1 next-hop-interface: tenant table-id: 254 nodeSelector: kubernetes.io/hostname: worker-0 node-role.kubernetes.io/worker: \"\"", "oc create -f worker0-nncp.yaml oc create -f worker1-nncp.yaml oc create -f worker2-nncp.yaml", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: labels: osp/net: internalapi osp/net-attach-def-type: standard name: internalapi namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"internalapi\", \"type\": \"macvlan\", \"master\": \"internalapi\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.17.0.0/24\", \"range_start\": \"172.17.0.30\", \"range_end\": \"172.17.0.70\", \"routes\": [ { \"dst\": \"172.17.10.0/24\", \"gw\": \"172.17.0.1\" }, { \"dst\": \"172.17.20.0/24\", \"gw\": \"172.17.0.1\" } ] } }", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: labels: osp/net: ctlplane osp/net-attach-def-type: standard name: ctlplane namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"ctlplane\", \"type\": \"macvlan\", \"master\": \"ospbr\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.122.0/24\", \"range_start\": \"192.168.122.30\", \"range_end\": \"192.168.122.70\", \"routes\": [ { \"dst\": \"192.168.133.0/24\", \"gw\": \"192.168.122.1\" }, { \"dst\": \"192.168.144.0/24\", \"gw\": \"192.168.122.1\" } ] } }", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: labels: osp/net: 
storage osp/net-attach-def-type: standard name: storage namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"storage\", \"type\": \"macvlan\", \"master\": \"storage\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.18.0.0/24\", \"range_start\": \"172.18.0.30\", \"range_end\": \"172.18.0.70\", \"routes\": [ { \"dst\": \"172.18.10.0/24\", \"gw\": \"172.18.0.1\" }, { \"dst\": \"172.18.20.0/24\", \"gw\": \"172.18.0.1\" } ] } }", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: labels: osp/net: tenant osp/net-attach-def-type: standard name: tenant namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"tenant\", \"type\": \"macvlan\", \"master\": \"tenant\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.19.0.0/24\", \"range_start\": \"172.19.0.30\", \"range_end\": \"172.19.0.70\", \"routes\": [ { \"dst\": \"172.19.10.0/24\", \"gw\": \"172.19.0.1\" }, { \"dst\": \"172.19.20.0/24\", \"gw\": \"172.19.0.1\" } ] } }", "oc create -f internalapi-net-attach-def.yaml oc create -f control-net-attach-def.yaml oc create -f storage-net-attach-def.yaml oc create -f tenant-net-attach-def.yaml", "apiVersion: network.openstack.org/v1beta1 kind: NetConfig metadata: name: netconfig namespace: openstack spec: networks: - dnsDomain: ctlplane.example.com mtu: 1500 name: ctlplane subnets: - allocationRanges: - end: 192.168.122.120 start: 192.168.122.100 - end: 192.168.122.170 start: 192.168.122.150 cidr: 192.168.122.0/24 gateway: 192.168.122.1 name: subnet1 routes: - destination: 192.168.133.0/24 nexthop: 192.168.122.1 - destination: 192.168.144.0/24 nexthop: 192.168.122.1 - allocationRanges: - end: 192.168.133.120 start: 192.168.133.100 - end: 192.168.133.170 start: 192.168.133.150 cidr: 192.168.133.0/24 gateway: 192.168.133.1 name: subnet2 routes: - destination: 192.168.122.0/24 nexthop: 192.168.133.1 - destination: 192.168.144.0/24 nexthop: 192.168.133.1 - allocationRanges: - end: 192.168.144.120 start: 192.168.144.100 - end: 192.168.144.170 start: 192.168.144.150 cidr: 192.168.144.0/24 gateway: 192.168.144.1 name: subnet3 routes: - destination: 192.168.122.0/24 nexthop: 192.168.144.1 - destination: 192.168.133.0/24 nexthop: 192.168.144.1", "- dnsDomain: internalapi.example.com mtu: 1496 name: internalapi subnets: - allocationRanges: - end: 172.17.0.250 start: 172.17.0.100 cidr: 172.17.0.0/24 name: subnet1 routes: - destination: 172.17.10.0/24 nexthop: 172.17.0.1 - destination: 172.17.20.0/24 nexthop: 172.17.0.1 vlan: 20 - allocationRanges: - end: 172.17.10.250 start: 172.17.10.100 cidr: 172.17.0.0/24 name: subnet2 routes: - destination: 172.17.0.0/24 nexthop: 172.17.10.1 - destination: 172.17.20.0/24 nexthop: 172.17.10.1 vlan: 30 - allocationRanges: - end: 172.17.20.250 start: 172.17.20.100 cidr: 172.17.20.0/24 name: subnet3 routes: - destination: 172.17.0.0/24 nexthop: 172.17.20.1 - destination: 172.17.10.0/24 nexthop: 172.17.20.1 vlan: 40", "- dnsDomain: external.example.com mtu: 1500 name: external subnets: - allocationRanges: - end: 10.0.0.250 start: 10.0.0.100 cidr: 10.0.0.0/24 name: subnet1 vlan: 22 - dnsDomain: storage.example.com mtu: 1496 name: storage subnets: - allocationRanges: - end: 172.18.0.250 start: 172.18.0.100 cidr: 172.18.0.0/24 name: subnet1 routes: - destination: 172.18.10.0/24 nexthop: 172.18.0.1 - destination: 172.18.20.0/24 nexthop: 172.18.0.1 vlan: 21 - allocationRanges: - end: 172.18.10.250 start: 172.18.10.100 cidr: 172.18.10.0/24 name: subnet2 routes: - destination: 172.18.0.0/24 nexthop: 
172.18.10.1 - destination: 172.18.20.0/24 nexthop: 172.18.10.1 vlan: 31 - allocationRanges: - end: 172.18.20.250 start: 172.18.20.100 cidr: 172.18.20.0/24 name: subnet3 routes: - destination: 172.18.0.0/24 nexthop: 172.18.20.1 - destination: 172.18.10.0/24 nexthop: 172.18.20.1 vlan: 41", "- dnsDomain: tenant.example.com mtu: 1496 name: tenant subnets: - allocationRanges: - end: 172.19.0.250 start: 172.19.0.100 cidr: 172.19.0.0/24 name: subnet1 routes: - destination: 172.19.10.0/24 nexthop: 172.19.0.1 - destination: 172.19.20.0/24 nexthop: 172.19.0.1 vlan: 22 - allocationRanges: - end: 172.19.10.250 start: 172.19.10.100 cidr: 172.19.10.0/24 name: subnet2 routes: - destination: 172.19.0.0/24 nexthop: 172.19.10.1 - destination: 172.19.20.0/24 nexthop: 172.19.10.1 vlan: 32 - allocationRanges: - end: 172.19.20.250 start: 172.19.20.100 cidr: 172.19.20.0/24 name: subnet3 routes: - destination: 172.19.0.0/24 nexthop: 172.19.20.1 - destination: 172.19.10.0/24 nexthop: 172.19.20.1 vlan: 42", "- dnsDomain: storagemgmt.example.com mtu: 1500 name: storagemgmt subnets: - allocationRanges: - end: 172.20.0.250 start: 172.20.0.100 cidr: 172.20.0.0/24 name: subnet1 routes: - destination: 172.20.10.0/24 nexthop: 172.20.0.1 - destination: 172.20.20.0/24 nexthop: 172.20.0.1 vlan: 23 - allocationRanges: - end: 172.20.10.250 start: 172.20.10.100 cidr: 172.20.10.0/24 name: subnet2 routes: - destination: 172.20.0.0/24 nexthop: 172.20.10.1 - destination: 172.20.20.0/24 nexthop: 172.20.10.1 vlan: 33 - allocationRanges: - end: 172.20.20.250 start: 172.20.20.100 cidr: 172.20.20.0/24 name: subnet3 routes: - destination: 172.20.0.0/24 nexthop: 172.20.20.1 - destination: 172.20.10.0/24 nexthop: 172.20.20.1 vlan: 43", "create -f netconfig", "oc get networkpolicy -n openstack", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: secret: osp-secret storageClass: <RHOCP_storage_class>", "cinder: uniquePodNames: false apiOverride: route: {} template: customServiceConfig: | [DEFAULT] storage_availability_zone = az0 databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderVolumes: az0: networkAttachments: - storage replicas: 0", "nova: apiOverride: route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cellTemplates: cell0: cellDatabaseAccount: nova-cell0 cellDatabaseInstance: openstack cellMessageBusInstance: rabbitmq hasAPIAccess: true cell1: 
cellDatabaseAccount: nova-cell1 cellDatabaseInstance: openstack-cell1 cellMessageBusInstance: rabbitmq-cell1 noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane hasAPIAccess: true secret: osp-secret", "dns: template: options: - key: server values: - 192.168.122.1 - key: server values: - 192.168.122.2 override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2", "galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3", "keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3", "glance: apiOverrides: default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage", "barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1", "memcached: templates: memcached: replicas: 3", "neutron: apiOverride: route: {} template: customServiceConfig: | [DEFAULT] network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler 1 default_availability_zones = az0 [ml2_type_vlan] network_vlan_ranges = datacentre:1:1000 [neutron] physnets = datacentre replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret networkAttachments: - internalapi", "ovn: template: ovnController: external-ids\" availability-zones: - az0 enable-chassis-as-gateway: true ovn-bridge: br-int ovn-encap-type: geneve system-id: random networkAttachment: tenant nicMappings: datacentre: ospbr ovnDBCluster: ovndbcluster-nb: replicas: 3 dbType: NB storageRequest: 10G networkAttachment: internalapi ovndbcluster-sb: replicas: 3 dbType: SB storageRequest: 10G networkAttachment: internalapi ovnNorthd: networkAttachment: internalapi", "placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret", "rabbitmq: templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: 
service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 spec: type: LoadBalancer", "create -f openstack_control_plane.yaml -n openstack", "create secret generic ceph-conf-az-0 --from-file=az0.client.openstack.keyring --from-file=az0.conf --from-file=az1.client.openstack.keyring --from-file=az1.conf --from-file=az2.client.openstack.keyring --from-file=az2.conf -n openstack", "create secret generic ceph-conf-az-0 --from-file=az0.client.openstack.keyring --from-file=az0.conf -n openstack", "create secret generic ceph-conf-az-1 --from-file=az0.client.openstack.keyring --from-file=az0.conf --from-file=az1.client.openstack.keyring --from-file=az1.conf -n openstack", "delete secret ceph-conf-az-0 -n openstack create secret generic ceph-conf-az-0 --from-file=az0.client.openstack.keyring --from-file=az0.conf --from-file=az1.client.openstack.keyring --from-file=az1.conf -n openstack", "create secret generic ceph-conf-az-2 --from-file=az0.client.openstack.keyring --from-file=az0.conf --from-file=az2.client.openstack.keyring --from-file=az2.conf -n openstack", "delete secret ceph-conf-az-0 -n openstack create secret generic ceph-conf-az-0 --from-file=az0.client.openstack.keyring --from-file=az0.conf --from-file=az1.client.openstack.keyring --from-file=az1.conf --from-file=az1.client.openstack.keyring --from-file=az1.conf --from-file=az2.client.openstack.keyring --from-file=az2.conf", "get secret -n openstack -o name | grep ceph-conf", "secret/ceph-conf-az-0 secret/ceph-conf-az-1 secret/ceph-conf-az-2", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-edpm-dcn-0 namespace: openstack spec: nodeTemplate: extraMounts: - extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-az-0 mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: extraMounts: - name: v1 region: r1 extraVol: - propagation: - az0 - CinderBackup extraVolType: Ceph volumes: - name: ceph secret: name: ceph-conf-az-0 mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true - propagation: - az1 extraVolType: Ceph volumes: - name: ceph secret: name: ceph-conf-az-1 mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true", "glanceAPIs: az0: customServiceConfig: | az1: customServiceConfig: |", "kind: OpenStackControlPlane spec: <...> cinder <...> cinderVolumes: az0: <...> az1: <...>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_distributed_compute_node_dcn_architecture/assembly_deploying-the-dcn-control-plane
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/making-open-source-more-inclusive_datagrid
Administration Guide
Administration Guide Red Hat Ceph Storage 8 Administration of Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team
[ "systemctl --type=service [email protected]", "systemctl start SERVICE_ID", "systemctl start [email protected]", "systemctl stop SERVICE_ID", "systemctl stop [email protected]", "systemctl restart SERVICE_ID", "systemctl restart [email protected]", "systemctl list-units \"ceph*\"", "cephadm shell", "ceph orch ls NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID alertmanager 1/1 4m ago 4M count:1 registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5 b7bae610cd46 crash 3/3 4m ago 4M * registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510 grafana 1/1 4m ago 4M count:1 registry.redhat.io/rhceph-alpha/rhceph-6-dashboard-rhel9:latest bd3d7748747b mgr 2/2 4m ago 4M count:2 registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510 mon 2/2 4m ago 10w count:2 registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510 nfs.foo 0/1 - - count:1 <unknown> <unknown> node-exporter 1/3 4m ago 4M * registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5 mix osd.all-available-devices 5/5 4m ago 3M * registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510 prometheus 1/1 4m ago 4M count:1 registry.redhat.io/openshift4/ose-prometheus:v4.6 bebb0ddef7f0 rgw.test_realm.test_zone 2/2 4m ago 3M count:2 registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510", "ceph orch start SERVICE_ID", "ceph orch start node-exporter", "ceph orch stop SERVICE_ID", "ceph orch stop node-exporter", "ceph orch restart SERVICE_ID", "ceph orch restart node-exporter", "journalctl -u ceph SERVICE_ID", "journalctl -u [email protected]", "journalctl -fu SERVICE_ID", "journalctl -fu [email protected]", "cephadm shell", "ceph -s", "ceph fs set FS_NAME max_mds 1 ceph fs fail FS_NAME ceph status ceph fs set FS_NAME joinable false", "ceph fs set cephfs max_mds 1 ceph fs fail cephfs ceph status ceph fs set cephfs joinable false", "ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause", "systemctl list-units --type target | grep ceph ceph-0b007564-ec48-11ee-b736-525400fd02f8.target loaded active active Ceph cluster 0b007564-ec48-11ee-b736-525400fd02f8 ceph.target loaded active active All Ceph clusters and services", "systemctl disable ceph-0b007564-ec48-11ee-b736-525400fd02f8.target Removed \"/etc/systemd/system/multi-user.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target\". Removed \"/etc/systemd/system/ceph.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target\".", "systemctl stop ceph-0b007564-ec48-11ee-b736-525400fd02f8.target", "shutdown Shutdown scheduled for Wed 2024-03-27 11:47:19 EDT, use 'shutdown -c' to cancel.", "systemctl enable ceph-0b007564-ec48-11ee-b736-525400fd02f8.target Created symlink /etc/systemd/system/multi-user.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target -> /etc/systemd/system/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target. 
Created symlink /etc/systemd/system/ceph.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target -> /etc/systemd/system/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target.", "systemctl start ceph-0b007564-ec48-11ee-b736-525400fd02f8.target", "ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause", "ceph fs set FS_NAME joinable true", "ceph fs set cephfs joinable true", "ceph -s", "cephadm shell", "ceph -s", "ceph fs set FS_NAME max_mds 1 ceph fs fail FS_NAME ceph status ceph fs set FS_NAME joinable false ceph mds fail FS_NAME : N", "ceph fs set cephfs max_mds 1 ceph fs fail cephfs ceph status ceph fs set cephfs joinable false ceph mds fail cephfs:1", "ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause", "ceph orch ls --service-type mds", "ceph orch stop SERVICE-NAME", "ceph orch ls --service-type rgw", "ceph orch stop SERVICE-NAME", "ceph orch stop alertmanager", "ceph orch stop node-exporter", "ceph orch stop prometheus", "ceph orch stop grafana", "ceph orch stop crash", "ceph orch ps --daemon-type=osd", "ceph orch daemon stop osd.1 Scheduled to stop osd.1 on host 'host02'", "ceph orch ps --daemon-type mon", "systemctl list-units ceph-* | grep mon", "systemct stop SERVICE-NAME", "cephadm shell", "ceph orch ls", "ceph -s", "ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause", "ceph fs set FS_NAME joinable true", "ceph fs set cephfs joinable true", "ceph -s", "root@host01 ~]# cephadm shell", "ceph health HEALTH_OK", "ceph status", "root@host01 ~]# cephadm shell", "ceph -w cluster: id: 8c9b0072-67ca-11eb-af06-001a4a0002a0 health: HEALTH_OK services: mon: 2 daemons, quorum Ceph5-2,Ceph5-adm (age 3d) mgr: Ceph5-1.nqikfh(active, since 3w), standbys: Ceph5-adm.meckej osd: 5 osds: 5 up (since 2d), 5 in (since 8w) rgw: 2 daemons active (test_realm.test_zone.Ceph5-2.bfdwcn, test_realm.test_zone.Ceph5-adm.acndrh) data: pools: 11 pools, 273 pgs objects: 459 objects, 32 KiB usage: 2.6 GiB used, 72 GiB / 75 GiB avail pgs: 273 active+clean io: client: 170 B/s rd, 730 KiB/s wr, 0 op/s rd, 729 op/s wr 2021-06-02 15:45:21.655871 osd.0 [INF] 17.71 deep-scrub ok 2021-06-02 15:45:47.880608 osd.1 [INF] 1.0 scrub ok 2021-06-02 15:45:48.865375 osd.1 [INF] 1.3 scrub ok 2021-06-02 15:45:50.866479 osd.1 [INF] 1.4 scrub ok 2021-06-02 15:45:01.345821 mon.0 [INF] pgmap v41339: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:05.718640 mon.0 [INF] pgmap v41340: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:53.997726 osd.1 [INF] 1.5 scrub ok 2021-06-02 15:45:06.734270 mon.0 [INF] pgmap v41341: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:15.722456 mon.0 [INF] pgmap v41342: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:46:06.836430 osd.0 [INF] 17.75 deep-scrub ok 2021-06-02 15:45:55.720929 mon.0 [INF] pgmap v41343: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail", "ceph df --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 TOTAL 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 --- POOLS --- POOL ID PGS STORED OBJECTS USED 
%USED MAX AVAIL .mgr 1 1 5.3 MiB 3 16 MiB 0 629 GiB .rgw.root 2 32 1.3 KiB 4 48 KiB 0 629 GiB default.rgw.log 3 32 3.6 KiB 209 408 KiB 0 629 GiB default.rgw.control 4 32 0 B 8 0 B 0 629 GiB default.rgw.meta 5 32 1.7 KiB 10 96 KiB 0 629 GiB default.rgw.buckets.index 7 32 5.5 MiB 22 17 MiB 0 629 GiB default.rgw.buckets.data 8 32 807 KiB 3 2.4 MiB 0 629 GiB default.rgw.buckets.non-ec 9 32 1.0 MiB 1 3.1 MiB 0 629 GiB source-ecpool-86 11 32 1.2 TiB 391.13k 2.1 TiB 53.49 1.1 TiB", "ceph osd df ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS 3 hdd 0.90959 1.00000 931GiB 70.1GiB 69.1GiB 0B 1GiB 861GiB 7.53 2.93 66 4 hdd 0.90959 1.00000 931GiB 1.30GiB 308MiB 0B 1GiB 930GiB 0.14 0.05 59 0 hdd 0.90959 1.00000 931GiB 18.1GiB 17.1GiB 0B 1GiB 913GiB 1.94 0.76 57 MIN/MAX VAR: 0.02/2.98 STDDEV: 2.91", "cephadm shell", "ceph status", "ceph -s", "ceph ceph> status cluster: id: 499829b4-832f-11eb-8d6d-001a4a000635 health: HEALTH_WARN 1 stray daemon(s) not managed by cephadm 1/3 mons down, quorum host03,host02 too many PGs per OSD (261 > max 250) services: mon: 3 daemons, quorum host03,host02 (age 3d), out of quorum: host01 mgr: host01.hdhzwn(active, since 9d), standbys: host05.eobuuv, host06.wquwpj osd: 12 osds: 11 up (since 2w), 11 in (since 5w) rgw: 2 daemons active (test_realm.test_zone.host04.hgbvnq, test_realm.test_zone.host05.yqqilm) rgw-nfs: 1 daemon active (nfs.foo.host06-rgw) data: pools: 8 pools, 960 pgs objects: 414 objects, 1.0 MiB usage: 5.7 GiB used, 214 GiB / 220 GiB avail pgs: 960 active+clean io: client: 41 KiB/s rd, 0 B/s wr, 41 op/s rd, 27 op/s wr ceph> health HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1/3 mons down, quorum host03,host02; too many PGs per OSD (261 > max 250) ceph> mon stat e3: 3 mons at {host01=[v2:10.74.255.0:3300/0,v1:10.74.255.0:6789/0],host02=[v2:10.74.249.253:3300/0,v1:10.74.249.253:6789/0],host03=[v2:10.74.251.164:3300/0,v1:10.74.251.164:6789/0]}, election epoch 6688, leader 1 host03, quorum 1,2 host03,host02", "cephadm shell", "ceph mon stat", "ceph mon dump", "ceph quorum_status -f json-pretty", "{ \"election_epoch\": 6686, \"quorum\": [ 0, 1, 2 ], \"quorum_names\": [ \"host01\", \"host03\", \"host02\" ], \"quorum_leader_name\": \"host01\", \"quorum_age\": 424884, \"features\": { \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"monmap\": { \"epoch\": 3, \"fsid\": \"499829b4-832f-11eb-8d6d-001a4a000635\", \"modified\": \"2021-03-15T04:51:38.621737Z\", \"created\": \"2021-03-12T12:35:16.911339Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.255.0:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.255.0:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.255.0:6789/0\", \"public_addr\": \"10.74.255.0:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.251.164:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.251.164:6789\", \"nonce\": 0 } ] }, \"addr\": 
\"10.74.251.164:6789/0\", \"public_addr\": \"10.74.251.164:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.253:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.253:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.253:6789/0\", \"public_addr\": \"10.74.249.253:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] } }", "Error 111: Connection Refused", "cephadm shell", "ceph daemon MONITOR_ID COMMAND", "ceph daemon mon.host01 help { \"add_bootstrap_peer_hint\": \"add peer address as potential bootstrap peer for cluster bringup\", \"add_bootstrap_peer_hintv\": \"add peer address vector as potential bootstrap peer for cluster bringup\", \"compact\": \"cause compaction of monitor's leveldb/rocksdb storage\", \"config diff\": \"dump diff of current config and default config\", \"config diff get\": \"dump diff get <field>: dump diff of current and default config setting <field>\", \"config get\": \"config get <field>: get the config value\", \"config help\": \"get config setting schema and descriptions\", \"config set\": \"config set <field> <val> [<val> ...]: set a config variable\", \"config show\": \"dump current config settings\", \"config unset\": \"config unset <field>: unset a config variable\", \"connection scores dump\": \"show the scores used in connectivity-based elections\", \"connection scores reset\": \"reset the scores used in connectivity-based elections\", \"counter dump\": \"dump all labeled and non-labeled counters and their values\", \"counter schema\": \"dump all labeled and non-labeled counters schemas\", \"dump_historic_ops\": \"show recent ops\", \"dump_historic_slow_ops\": \"show recent slow ops\", \"dump_mempools\": \"get mempool stats\", \"get_command_descriptions\": \"list available commands\", \"git_version\": \"get git sha1\", \"heap\": \"show heap usage info (available only if compiled with tcmalloc)\", \"help\": \"list available commands\", \"injectargs\": \"inject configuration arguments into running daemon\", \"log dump\": \"dump recent log entries to log file\", \"log flush\": \"flush log entries to log file\", \"log reopen\": \"reopen log file\", \"mon_status\": \"report status of monitors\", \"ops\": \"show the ops currently in flight\", \"perf dump\": \"dump non-labeled counters and their values\", \"perf histogram dump\": \"dump perf histogram values\", \"perf histogram schema\": \"dump perf histogram schema\", \"perf reset\": \"perf reset <name>: perf reset all or one perfcounter name\", \"perf schema\": \"dump non-labeled counters schemas\", \"quorum enter\": \"force monitor back into quorum\", \"quorum exit\": \"force monitor out of the quorum\", \"sessions\": \"list existing sessions\", \"smart\": \"Query health metrics for underlying device\", \"sync_force\": \"force sync of and clear monitor store\", \"version\": \"get ceph version\" }", "ceph daemon mon.host01 mon_status { \"name\": \"host01\", \"rank\": 0, \"state\": \"leader\", \"election_epoch\": 120, \"quorum\": [ 0, 1, 2 ], \"quorum_age\": 206358, \"features\": { \"required_con\": \"2449958747317026820\", \"required_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, 
\"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 3, \"fsid\": \"81a4597a-b711-11eb-8cb8-001a4a000740\", \"modified\": \"2021-05-18T05:50:17.782128Z\", \"created\": \"2021-05-17T13:13:13.383313Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.41:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.41:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.41:6789/0\", \"public_addr\": \"10.74.249.41:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.55:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.55:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.55:6789/0\", \"public_addr\": \"10.74.249.55:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.49:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.49:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.49:6789/0\", \"public_addr\": \"10.74.249.49:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] }, \"feature_map\": { \"mon\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 1 } ], \"osd\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 3 } ] }, \"stretch_mode\": false }", "ceph daemon /var/run/ceph/ SOCKET_FILE COMMAND", "ceph daemon /var/run/ceph/ceph-osd.0.asok status { \"cluster_fsid\": \"9029b252-1668-11ee-9399-001a4a000429\", \"osd_fsid\": \"1de9b064-b7a5-4c54-9395-02ccda637d21\", \"whoami\": 0, \"state\": \"active\", \"oldest_map\": 1, \"newest_map\": 58, \"num_pgs\": 33 }", "ls /var/run/ceph", "ceph osd stat", "ceph osd dump", "eNNNN: x osds: y up, z in", "ceph osd tree id weight type name up/down reweight -1 3 pool default -3 3 rack mainrack -2 3 host osd-host 0 1 osd.0 up 1 1 1 osd.1 up 1 2 1 osd.2 up 1", "systemctl start CEPH_OSD_SERVICE_ID", "systemctl start [email protected]", "cephadm shell", "ceph pg dump", "ceph pg map PG_NUM", "ceph pg map 128", "ceph pg stat", "vNNNNNN: x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail", "244 active+clean+snaptrim_wait 32 active+clean+snaptrim", "POOL_NUM . PG_ID", "0.1f", "ceph pg dump", "ceph pg dump -o FILE_NAME --format=json", "ceph pg dump -o test --format=json", "ceph pg POOL_NUM . 
PG_ID query", "ceph pg 5.fe query { \"snap_trimq\": \"[]\", \"snap_trimq_len\": 0, \"state\": \"active+clean\", \"epoch\": 2449, \"up\": [ 3, 8, 10 ], \"acting\": [ 3, 8, 10 ], \"acting_recovery_backfill\": [ \"3\", \"8\", \"10\" ], \"info\": { \"pgid\": \"5.ff\", \"last_update\": \"0'0\", \"last_complete\": \"0'0\", \"log_tail\": \"0'0\", \"last_user_version\": 0, \"last_backfill\": \"MAX\", \"purged_snaps\": [], \"history\": { \"epoch_created\": 114, \"epoch_pool_created\": 82, \"last_epoch_started\": 2402, \"last_interval_started\": 2401, \"last_epoch_clean\": 2402, \"last_interval_clean\": 2401, \"last_epoch_split\": 114, \"last_epoch_marked_full\": 0, \"same_up_since\": 2401, \"same_interval_since\": 2401, \"same_primary_since\": 2086, \"last_scrub\": \"0'0\", \"last_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_deep_scrub\": \"0'0\", \"last_deep_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_clean_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"prior_readable_until_ub\": 0 }, \"stats\": { \"version\": \"0'0\", \"reported_seq\": \"2989\", \"reported_epoch\": \"2449\", \"state\": \"active+clean\", \"last_fresh\": \"2021-06-18T05:16:59.401080+0000\", \"last_change\": \"2021-06-17T01:32:03.764162+0000\", \"last_active\": \"2021-06-18T05:16:59.401080+0000\", .", "pg 1.5: up=acting: [0,1,2] ADD_OSD_3 pg 1.5: up: [0,3,1] acting: [0,1,2]", "pg 1.5: up=acting: [0,3,1]", "ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}", "ceph pg dump_stuck stale OK", "ceph osd map POOL_NAME OBJECT_NAME", "ceph osd map mypool myobject", "ceph mon enable_stretch_mode host05 stretch_rule datacenter Error EINVAL: the 2 datacenter instances in the cluster have differing weights 25947 and 15728 but stretch mode currently requires they be the same!", "service_type: host addr: host01 hostname: host01 location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: host02 hostname: host02 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: host03 hostname: host03 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: host04 hostname: host04 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: host05 hostname: host05 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: host06 hostname: host06 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: host07 hostname: host07 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080", "cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url REGISTRY_URL --registry-username USER_NAME --registry-password PASSWORD", "cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "ceph osd crush add-bucket BUCKET_NAME 
BUCKET_TYPE", "ceph osd crush add-bucket DC1 datacenter ceph osd crush add-bucket DC2 datacenter", "ceph osd crush move BUCKET_NAME root=default", "ceph osd crush move DC1 root=default ceph osd crush move DC2 root=default", "ceph osd crush move HOST datacenter= DATACENTER", "ceph osd crush move host01 datacenter=DC1", "ceph mon set_location HOST datacenter= DATACENTER", "ceph mon set_location host01 datacenter=DC1 ceph mon set_location host02 datacenter=DC1 ceph mon set_location host04 datacenter=DC2 ceph mon set_location host05 datacenter=DC2 ceph mon set_location host07 datacenter=DC3", "ceph osd getcrushmap > COMPILED_CRUSHMAP_FILENAME crushtool -d COMPILED_CRUSHMAP_FILENAME -o DECOMPILED_CRUSHMAP_FILENAME", "ceph osd getcrushmap > crush.map.bin crushtool -d crush.map.bin -o crush.map.txt", "rule stretch_rule { id 1 1 type replicated min_size 1 max_size 10 step take DC1 2 step chooseleaf firstn 2 type host step emit step take DC2 3 step chooseleaf firstn 2 type host step emit }", "rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit }", "crushtool -c DECOMPILED_CRUSHMAP_FILENAME -o COMPILED_CRUSHMAP_FILENAME ceph osd setcrushmap -i COMPILED_CRUSHMAP_FILENAME", "crushtool -c crush.map.txt -o crush2.map.bin ceph osd setcrushmap -i crush2.map.bin", "ceph mon set election_strategy connectivity", "ceph mon set_location HOST datacenter= DATACENTER ceph mon enable_stretch_mode HOST stretch_rule datacenter", "ceph mon set_location host07 datacenter=DC3 ceph mon enable_stretch_mode host07 stretch_rule datacenter", "Error EINVAL: there are 3 datacenters in the cluster but stretch mode currently only works with 2!", "ceph osd dump epoch 361 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d created 2023-01-16T05:47:28.482717+0000 modified 2023-01-17T17:36:50.066183+0000 flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit crush_version 31 full_ratio 0.95 backfillfull_ratio 0.92 nearfull_ratio 0.85 require_min_compat_client luminous min_compat_client luminous require_osd_release quincy stretch_mode_enabled true stretch_bucket_count 2 degraded_stretch_mode 0 recovering_stretch_mode 0 stretch_mode_bucket 8", "ceph mon dump epoch 19 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d last_changed 2023-01-17T04:12:05.709475+0000 created 2023-01-16T05:47:25.631684+0000 min_mon_release 16 (pacific) election_strategy: 3 stretch_mode_enabled 1 tiebreaker_mon host07 disallowed_leaders host07 0: [v2:132.224.169.63:3300/0,v1:132.224.169.63:6789/0] mon.host07; crush_location {datacenter=DC3} 1: [v2:220.141.179.34:3300/0,v1:220.141.179.34:6789/0] mon.host04; crush_location {datacenter=DC2} 2: [v2:40.90.220.224:3300/0,v1:40.90.220.224:6789/0] mon.host01; crush_location {datacenter=DC1} 3: [v2:60.140.141.144:3300/0,v1:60.140.141.144:6789/0] mon.host02; crush_location {datacenter=DC1} 4: [v2:186.184.61.92:3300/0,v1:186.184.61.92:6789/0] mon.host05; crush_location {datacenter=DC2} dumped monmap epoch 19", "ceph orch device ls [--hostname= HOST_1 HOST_2 ] [--wide] [--refresh]", "ceph orch device ls", "ceph orch daemon add osd HOST : DEVICE_PATH", "ceph orch daemon add osd host03:/dev/sdb", "ceph orch apply osd --all-available-devices", "ceph osd crush move HOST datacenter= DATACENTER", "ceph osd crush move host03 datacenter=DC1 ceph osd crush move host06 datacenter=DC2", "ceph config set client.rgw.rgw.1 rados_replica_read_policy localize", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 
0.58557 root default -3 0.29279 datacenter DC1 -2 0.09760 host ceph-ci-fbv67y-ammmck-node2 2 hdd 0.02440 osd.2 up 1.00000 1.00000 11 hdd 0.02440 osd.11 up 1.00000 1.00000 17 hdd 0.02440 osd.17 up 1.00000 1.00000 22 hdd 0.02440 osd.22 up 1.00000 1.00000 -4 0.09760 host ceph-ci-fbv67y-ammmck-node3 0 hdd 0.02440 osd.0 up 1.00000 1.00000 6 hdd 0.02440 osd.6 up 1.00000 1.00000 12 hdd 0.02440 osd.12 up 1.00000 1.00000 18 hdd 0.02440 osd.18 up 1.00000 1.00000 -5 0.09760 host ceph-ci-fbv67y-ammmck-node4 5 hdd 0.02440 osd.5 up 1.00000 1.00000 10 hdd 0.02440 osd.10 up 1.00000 1.00000 16 hdd 0.02440 osd.16 up 1.00000 1.00000 23 hdd 0.02440 osd.23 up 1.00000 1.00000 -7 0.29279 datacenter DC2 -6 0.09760 host ceph-ci-fbv67y-ammmck-node5 3 hdd 0.02440 osd.3 up 1.00000 1.00000 8 hdd 0.02440 osd.8 up 1.00000 1.00000 14 hdd 0.02440 osd.14 up 1.00000 1.00000 20 hdd 0.02440 osd.20 up 1.00000 1.00000 -8 0.09760 host ceph-ci-fbv67y-ammmck-node6 4 hdd 0.02440 osd.4 up 1.00000 1.00000 9 hdd 0.02440 osd.9 up 1.00000 1.00000 15 hdd 0.02440 osd.15 up 1.00000 1.00000 21 hdd 0.02440 osd.21 up 1.00000 1.00000 -9 0.09760 host ceph-ci-fbv67y-ammmck-node7 1 hdd 0.02440 osd.1 up 1.00000 1.00000 7 hdd 0.02440 osd.7 up 1.00000 1.00000 13 hdd 0.02440 osd.13 up 1.00000 1.00000 19 hdd 0.02440 osd.19 up 1.00000 1.00000", "ceph orch ps | grep rg rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex ceph-ci-fbv67y-ammmck-node4 *:80 running (4h) 10m ago 22h 93.3M - 19.1.0-55.el9cp 0ee0a0ad94c7 34f27723ccd2 rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp ceph-ci-fbv67y-ammmck-node7 *:80 running (4h) 10m ago 22h 96.4M - 19.1.0-55.el9cp 0ee0a0ad94c7 40e4f2a6d4c4", "vim /var/log/ceph/<fsid>/<ceph-client-rgw>.log 2024-08-26T08:07:45.471+0000 7fc623e63640 1 ====== starting new request req=0x7fc5b93694a0 ===== 2024-08-26T08:07:45.471+0000 7fc623e63640 1 -- 10.0.67.142:0/279982082 --> [v2:10.0.66.23:6816/73244434,v1:10.0.66.23:6817/73244434] -- osd_op(unknown.0.0:9081 11.55 11:ab26b168:::3acf4091-c54c-43b5-a495-c505fe545d25.27842.1_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+localize_reads+known_if_redirected+supports_pool_eio e3533) -- 0x55f781bd2000 con 0x55f77f0e8c00", "ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx advanced debug_ms 1/1 ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq advanced debug_ms 1/1", "ceph config set client.rgw.rgw.1 rados_replica_read_policy balance", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.58557 root default -3 0.29279 datacenter DC1 -2 0.09760 host ceph-ci-fbv67y-ammmck-node2 2 hdd 0.02440 osd.2 up 1.00000 1.00000 11 hdd 0.02440 osd.11 up 1.00000 1.00000 17 hdd 0.02440 osd.17 up 1.00000 1.00000 22 hdd 0.02440 osd.22 up 1.00000 1.00000 -4 0.09760 host ceph-ci-fbv67y-ammmck-node3 0 hdd 0.02440 osd.0 up 1.00000 1.00000 6 hdd 0.02440 osd.6 up 1.00000 1.00000 12 hdd 0.02440 osd.12 up 1.00000 1.00000 18 hdd 0.02440 osd.18 up 1.00000 1.00000 -5 0.09760 host ceph-ci-fbv67y-ammmck-node4 5 hdd 0.02440 osd.5 up 1.00000 1.00000 10 hdd 0.02440 osd.10 up 1.00000 1.00000 16 hdd 0.02440 osd.16 up 1.00000 1.00000 23 hdd 0.02440 osd.23 up 1.00000 1.00000 -7 0.29279 datacenter DC2 -6 0.09760 host ceph-ci-fbv67y-ammmck-node5 3 hdd 0.02440 osd.3 up 1.00000 1.00000 8 hdd 0.02440 osd.8 up 1.00000 1.00000 14 hdd 0.02440 osd.14 up 1.00000 1.00000 20 hdd 0.02440 osd.20 up 1.00000 1.00000 -8 0.09760 host ceph-ci-fbv67y-ammmck-node6 4 hdd 0.02440 osd.4 up 1.00000 1.00000 9 hdd 0.02440 osd.9 up 1.00000 1.00000 15 hdd 0.02440 osd.15 up 1.00000 1.00000 21 hdd 0.02440 osd.21 up 
1.00000 1.00000 -9 0.09760 host ceph-ci-fbv67y-ammmck-node7 1 hdd 0.02440 osd.1 up 1.00000 1.00000 7 hdd 0.02440 osd.7 up 1.00000 1.00000 13 hdd 0.02440 osd.13 up 1.00000 1.00000 19 hdd 0.02440 osd.19 up 1.00000 1.00000", "ceph orch ps | grep rg rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex ceph-ci-fbv67y-ammmck-node4 *:80 running (4h) 10m ago 22h 93.3M - 19.1.0-55.el9cp 0ee0a0ad94c7 34f27723ccd2 rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp ceph-ci-fbv67y-ammmck-node7 *:80 running (4h) 10m ago 22h 96.4M - 19.1.0-55.el9cp 0ee0a0ad94c7 40e4f2a6d4c4", "vim /var/log/ceph/<fsid>/<ceph-client-rgw>.log 2024-08-27T09:32:25.510+0000 7f2a7a284640 1 ====== starting new request req=0x7f2a31fcf4a0 ===== 2024-08-27T09:32:25.510+0000 7f2a7a284640 1 -- 10.0.67.142:0/3116867178 --> [v2:10.0.64.146:6816/2838383288,v1:10.0.64.146:6817/2838383288] -- osd_op(unknown.0.0:268731 11.55 11:ab26b168:::3acf4091-c54c-43b5-a495-c505fe545d25.27842.1_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+balance_reads+known_if_redirected+supports_pool_eio e3554) -- 0x55cd1b88dc00 con 0x55cd18dd6000", "ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx advanced debug_ms 1/1 ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq advanced debug_ms 1/1", "ceph config set client.rgw.rgw.1 advanced rados_replica_read_policy default", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.58557 root default -3 0.29279 datacenter DC1 -2 0.09760 host ceph-ci-fbv67y-ammmck-node2 2 hdd 0.02440 osd.2 up 1.00000 1.00000 11 hdd 0.02440 osd.11 up 1.00000 1.00000 17 hdd 0.02440 osd.17 up 1.00000 1.00000 22 hdd 0.02440 osd.22 up 1.00000 1.00000 -4 0.09760 host ceph-ci-fbv67y-ammmck-node3 0 hdd 0.02440 osd.0 up 1.00000 1.00000 6 hdd 0.02440 osd.6 up 1.00000 1.00000 12 hdd 0.02440 osd.12 up 1.00000 1.00000 18 hdd 0.02440 osd.18 up 1.00000 1.00000 -5 0.09760 host ceph-ci-fbv67y-ammmck-node4 5 hdd 0.02440 osd.5 up 1.00000 1.00000 10 hdd 0.02440 osd.10 up 1.00000 1.00000 16 hdd 0.02440 osd.16 up 1.00000 1.00000 23 hdd 0.02440 osd.23 up 1.00000 1.00000 -7 0.29279 datacenter DC2 -6 0.09760 host ceph-ci-fbv67y-ammmck-node5 3 hdd 0.02440 osd.3 up 1.00000 1.00000 8 hdd 0.02440 osd.8 up 1.00000 1.00000 14 hdd 0.02440 osd.14 up 1.00000 1.00000 20 hdd 0.02440 osd.20 up 1.00000 1.00000 -8 0.09760 host ceph-ci-fbv67y-ammmck-node6 4 hdd 0.02440 osd.4 up 1.00000 1.00000 9 hdd 0.02440 osd.9 up 1.00000 1.00000 15 hdd 0.02440 osd.15 up 1.00000 1.00000 21 hdd 0.02440 osd.21 up 1.00000 1.00000 -9 0.09760 host ceph-ci-fbv67y-ammmck-node7 1 hdd 0.02440 osd.1 up 1.00000 1.00000 7 hdd 0.02440 osd.7 up 1.00000 1.00000 13 hdd 0.02440 osd.13 up 1.00000 1.00000 19 hdd 0.02440 osd.19 up 1.00000 1.00000", "ceph orch ps | grep rg rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex ceph-ci-fbv67y-ammmck-node4 *:80 running (4h) 10m ago 22h 93.3M - 19.1.0-55.el9cp 0ee0a0ad94c7 34f27723ccd2 rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp ceph-ci-fbv67y-ammmck-node7 *:80 running (4h) 10m ago 22h 96.4M - 19.1.0-55.el9cp 0ee0a0ad94c7 40e4f2a6d4c4", "vim /var/log/ceph/<fsid>/<ceph-client-rgw>.log 2024-08-28T10:26:05.155+0000 7fe6b03dd640 1 ====== starting new request req=0x7fe6879674a0 ===== 2024-08-28T10:26:05.156+0000 7fe6b03dd640 1 -- 10.0.64.251:0/2235882725 --> [v2:10.0.65.171:6800/4255735352,v1:10.0.65.171:6801/4255735352] -- osd_op(unknown.0.0:1123 11.6d 11:b69767fc:::699c2d80-5683-43c5-bdcd-e8912107c176.24827.3_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e4513) -- 0x5639da653800 con 0x5639d804d800", 
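The three osd_op log excerpts above differ only in their read flags: localize_reads for the localize policy, balance_reads for the balance policy, and neither flag for the default primary-read policy. Assuming debug_ms 1/1 is enabled as shown and the client log follows the /var/log/ceph/<fsid>/<ceph-client-rgw>.log pattern above (both the path and client name are placeholders to substitute for your cluster), a small grep sketch can tally how many read operations each policy issued:

# Minimal sketch, not part of the original command list.
# LOG is a placeholder; substitute your cluster fsid and RGW client log name.
LOG=/var/log/ceph/<fsid>/<ceph-client-rgw>.log
# Count read ops tagged with each replica-read policy flag.
echo "localize_reads: $(grep -c 'localize_reads' "$LOG")"
echo "balance_reads:  $(grep -c 'balance_reads' "$LOG")"
# osd_op lines carrying neither flag were sent under the default (primary) read policy.
echo "total osd_op:   $(grep -c 'osd_op(' "$LOG")"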
"ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx advanced debug_ms 1/1 ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq advanced debug_ms 1/1", "hostnamectl set-hostname SHORT_NAME", "service_type: host hostname: HOST01 addr: IP_ADDRESS01 labels: ['alertmanager', 'osd', 'installer', '_admin', 'mon', 'prometheus', 'mgr', 'grafana'] location: root: default datacenter: DC1 --- service_type: host hostname: HOST02 addr: IP_ADDRESS02 labels: ['osd', 'mon', 'mgr', 'rgw'] location: root: default datacenter: DC1 --- service_type: host hostname: HOST03 addr: IP_ADDRESS03 labels: ['osd', 'mon', 'mds'] location: root: default datacenter: DC1 --- service_type: host hostname: HOST04 addr: IP_ADDRESS04 labels: ['osd', '_admin', 'mon', 'mgr'] location: root: default datacenter: DC2 --- service_type: host hostname: HOST05 addr: IP_ADDRESS05 labels: ['osd', 'mon', 'mgr', 'rgw'] location: root: default datacenter: DC2 --- service_type: host hostname: HOST06 addr: IP_ADDRESS06 labels: ['osd', 'mon', 'mds'] location: root: default datacenter: DC2 --- service_type: host hostname: HOST07 addr: IP_ADDRESS07 labels: ['osd', '_admin', 'mon', 'mgr'] location: root: default datacenter: DC3 --- service_type: host hostname: HOST08 addr: IP_ADDRESS08 labels: ['osd', 'mon', 'mgr', 'rgw'] location: root: default datacenter: DC3 --- service_type: host hostname: HOST09 addr: IP_ADDRESS09 labels: ['osd', 'mon', 'mds'] location: root: default datacenter: DC3 --- service_type: mon service_name: mon placement: label: mon spec: crush_locations: HOST01: - datacenter=DC1 HOST02: - datacenter=DC1 HOST03: - datacenter=DC1 HOST04: - datacenter=DC2 HOST05: - datacenter=DC2 HOST06: - datacenter=DC2 HOST07: - datacenter=DC3 HOST08: - datacenter=DC3 HOST09: - datacenter=DC3 --- service_type: mgr service_name: mgr placement: label: mgr ------ service_type: osd service_id: osds placement: label: osd spec: data_devices: all: true --------- service_type: rgw service_id: rgw.rgw.1 placement: label: rgw ------------", "cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url REGISTRY_URL --registry-username USER_NAME --registry-password PASSWORD", "cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "cephadm shell", "cephadm shell", "ceph config set global public_network \" SUBNET_1 , SUBNET_2 , ...\"", "ceph config global mon public_network \"10.0.208.0/22,10.0.212.0/22,10.0.64.0/22,10.0.56.0/22\"", "ceph config set global cluster_network \" SUBNET_1 , SUBNET_2 , ...\"", "ceph config set global cluster_network \"10.0.208.0/22,10.0.212.0/22,10.0.64.0/22,10.0.56.0/22\"", "ceph config dump | grep network", "ceph config dump | grep network", "ceph orch restart mon", "systemctl restart ceph- FSID_OF_CLUSTER .target", "systemctl restart ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad.target", "ceph osd tree", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.87836 root default -3 0.29279 datacenter DC1 -2 0.09760 host host01-installer 0 hdd 0.02440 osd.0 up 1.00000 1.00000 12 hdd 0.02440 osd.12 up 1.00000 1.00000 21 hdd 0.02440 osd.21 up 1.00000 1.00000 29 hdd 0.02440 osd.29 up 1.00000 1.00000 -4 0.09760 host host02 1 hdd 0.02440 osd.1 up 1.00000 1.00000 9 hdd 0.02440 osd.9 up 1.00000 1.00000 18 hdd 
0.02440 osd.18 up 1.00000 1.00000 28 hdd 0.02440 osd.28 up 1.00000 1.00000 -5 0.09760 host host03 8 hdd 0.02440 osd.8 up 1.00000 1.00000 16 hdd 0.02440 osd.16 up 1.00000 1.00000 24 hdd 0.02440 osd.24 up 1.00000 1.00000 34 hdd 0.02440 osd.34 up 1.00000 1.00000 -7 0.29279 datacenter DC2 -6 0.09760 host host04 4 hdd 0.02440 osd.4 up 1.00000 1.00000 13 hdd 0.02440 osd.13 up 1.00000 1.00000 20 hdd 0.02440 osd.20 up 1.00000 1.00000 27 hdd 0.02440 osd.27 up 1.00000 1.00000 -8 0.09760 host host05 3 hdd 0.02440 osd.3 up 1.00000 1.00000 10 hdd 0.02440 osd.10 up 1.00000 1.00000 19 hdd 0.02440 osd.19 up 1.00000 1.00000 30 hdd 0.02440 osd.30 up 1.00000 1.00000 -9 0.09760 host host06 7 hdd 0.02440 osd.7 up 1.00000 1.00000 17 hdd 0.02440 osd.17 up 1.00000 1.00000 26 hdd 0.02440 osd.26 up 1.00000 1.00000 35 hdd 0.02440 osd.35 up 1.00000 1.00000 -11 0.29279 datacenter DC3 -10 0.09760 host host07 5 hdd 0.02440 osd.5 up 1.00000 1.00000 14 hdd 0.02440 osd.14 up 1.00000 1.00000 23 hdd 0.02440 osd.23 up 1.00000 1.00000 32 hdd 0.02440 osd.32 up 1.00000 1.00000 -12 0.09760 host host08 2 hdd 0.02440 osd.2 up 1.00000 1.00000 11 hdd 0.02440 osd.11 up 1.00000 1.00000 22 hdd 0.02440 osd.22 up 1.00000 1.00000 31 hdd 0.02440 osd.31 up 1.00000 1.00000 -13 0.09760 host host09 6 hdd 0.02440 osd.6 up 1.00000 1.00000 15 hdd 0.02440 osd.15 up 1.00000 1.00000 25 hdd 0.02440 osd.25 up 1.00000 1.00000 33 hdd 0.02440 osd.33 up 1.00000 1.00000", "ceph mon dump", "ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT alertmanager ?:9093,9094 1/1 8m ago 6d count:1 ceph-exporter 9/9 8m ago 6d * crash 9/9 8m ago 6d * grafana ?:3000 1/1 8m ago 6d count:1 mds.cephfs 3/3 8m ago 6d label:mds mgr 6/6 8m ago 6d label:mgr mon 9/9 8m ago 5d label:mon node-exporter ?:9100 9/9 8m ago 6d * osd.all-available-devices 36 8m ago 6d label:osd prometheus ?:9095 1/1 8m ago 6d count:1 rgw.rgw.1 ?:80 3/3 8m ago 6d label:rgw", "ceph orch ls mon --export service_type: mon service_name: mon placement: label: mon spec: crush_locations: host01-installer: - datacenter=DC1 host02: - datacenter=DC1 host03: - datacenter=DC1 host04: - datacenter=DC2 host05: - datacenter=DC2 host06: - datacenter=DC2 host07: - datacenter=DC3 host08: - datacenter=DC3 host09: - datacenter=DC3", "ceph osd getcrushmap > COMPILED_CRUSHMAP_FILENAME crushtool -d COMPILED_CRUSHMAP_FILENAME -o DECOMPILED_CRUSHMAP_FILENAME", "ceph osd getcrushmap > crush.map.bin crushtool -d crush.map.bin -o crush.map.txt", "rule 3az_rule { id 1 type replicated step take default step choose firstn 3 type datacenter step chooseleaf firstn 2 type host step emit }", "crushtool -c DECOMPILED_CRUSHMAP_FILENAME -o COMPILED_CRUSHMAP_FILENAME ceph osd setcrushmap -i COMPILED_CRUSHMAP_FILENAME", "crushtool -c crush.map.txt -o crush2.map.bin ceph osd setcrushmap -i crush2.map.bin", "ceph osd crush rule ls", "ceph osd crush rule ls replicated_rule ec86_pool 3az_rule", "ceph osd crush rule dump CRUSH_RULE", "ceph osd crush rule dump 3az_rule { \"rule_id\": 1, \"rule_name\": \"3az_rule\", \"type\": 1, \"steps\": [ { \"op\": \"take\", \"item\": -1, \"item_name\": \"default\" }, { \"op\": \"choose_firstn\", \"num\": 3, \"type\": \"datacenter\" }, { \"op\": \"chooseleaf_firstn\", \"num\": 2, \"type\": \"host\" }, { \"op\": \"emit\" } ] }", "ceph mon set election_strategy connectivity", "ceph mon dump", "ceph mon dump epoch 19 fsid b556497a-693a-11ef-b9d1-fa163e841fd7 last_changed 2024-09-03T12:47:08.419495+0000 created 2024-09-02T14:50:51.490781+0000 min_mon_release 19 (squid) election_strategy: 3 0: 
[v2:10.0.67.43:3300/0,v1:10.0.67.43:6789/0] mon.host01-installer; crush_location {datacenter=DC1} 1: [v2:10.0.67.20:3300/0,v1:10.0.67.20:6789/0] mon.host02; crush_location {datacenter=DC1} 2: [v2:10.0.64.242:3300/0,v1:10.0.64.242:6789/0] mon.host03; crush_location {datacenter=DC1} 3: [v2:10.0.66.17:3300/0,v1:10.0.66.17:6789/0] mon.host06; crush_location {datacenter=DC2} 4: [v2:10.0.66.228:3300/0,v1:10.0.66.228:6789/0] mon.host09; crush_location {datacenter=DC3} 5: [v2:10.0.65.125:3300/0,v1:10.0.65.125:6789/0] mon.host05; crush_location {datacenter=DC2} 6: [v2:10.0.66.252:3300/0,v1:10.0.66.252:6789/0] mon.host07; crush_location {datacenter=DC3} 7: [v2:10.0.64.145:3300/0,v1:10.0.64.145:6789/0] mon.host08; crush_location {datacenter=DC3} 8: [v2:10.0.64.125:3300/0,v1:10.0.64.125:6789/0] mon.host04; crush_location {datacenter=DC2} dumped monmap epoch 19", "ceph osd pool stretch set _POOL_NAME_ _PEERING_CRUSH_BUCKET_COUNT_ _PEERING_CRUSH_BUCKET_TARGET_ _PEERING_CRUSH_BUCKET_BARRIER_ _CRUSH_RULE_ _SIZE_ _MIN_SIZE_ [--yes-i-really-mean-it]", "ceph osd pool stretch set pool01 2 3 datacenter 3az_rule 6 3", "ceph osd pool stretch show pool01 pool: pool01 pool_id: 1 is_stretch_pool: 1 peering_crush_bucket_count: 2 peering_crush_bucket_target: 3 peering_crush_bucket_barrier: 8 crush_rule: 3az_rule size: 6 min_size: 3", "ssh-copy-id -f -i /etc/ceph/ceph.pub user@ NEWHOST", "ssh-copy-id -f -i /etc/ceph/ceph.pub root@host11 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host12", "ceph orch daemon add osd _HOST_:_DEVICE_PATH_", "ceph orch daemon add osd host11:/dev/sdb", "ceph orch apply osd --all-available-devices", "ceph osd crush move HOST datacenter= DATACENTER", "ceph osd crush move host10 datacenter=DC1 ceph osd crush move host11 datacenter=DC2 ceph osd crush move host12 datacenter=DC3", "ceph osd set FLAG", "ceph osd set noout", "ceph osd unset FLAG", "ceph osd unset noout", "DAEMON_TYPE 'allow CAPABILITY ' [ DAEMON_TYPE 'allow CAPABILITY ']", "mon 'allow rwx` mon 'allow profile osd'", "osd 'allow CAPABILITY ' [pool= POOL_NAME ] [namespace= NAMESPACE_NAME ]", "ceph auth list installed auth entries: osd.10 key: AQBW7U5gqOsEExAAg/CxSwZ/gSh8iOsDV3iQOA== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * osd.11 key: AQBX7U5gtj/JIhAAPsLBNG+SfC2eMVEFkl3vfA== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * osd.9 key: AQBV7U5g1XDULhAAKo2tw6ZhH1jki5aVui2v7g== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * client.admin key: AQADYEtgFfD3ExAAwH+C1qO7MSLE4TWRfD2g6g== caps: [mds] allow * caps: [mgr] allow * caps: [mon] allow * caps: [osd] allow * client.bootstrap-mds key: AQAHYEtgpbkANBAANqoFlvzEXFwD8oB0w3TF4Q== caps: [mon] allow profile bootstrap-mds client.bootstrap-mgr key: AQAHYEtg3dcANBAAVQf6brq3sxTSrCrPe0pKVQ== caps: [mon] allow profile bootstrap-mgr client.bootstrap-osd key: AQAHYEtgD/QANBAATS9DuP3DbxEl86MTyKEmdw== caps: [mon] allow profile bootstrap-osd client.bootstrap-rbd key: AQAHYEtgjxEBNBAANho25V9tWNNvIKnHknW59A== caps: [mon] allow profile bootstrap-rbd client.bootstrap-rbd-mirror key: AQAHYEtgdE8BNBAAr6rLYxZci0b2hoIgH9GXYw== caps: [mon] allow profile bootstrap-rbd-mirror client.bootstrap-rgw key: AQAHYEtgwGkBNBAAuRzI4WSrnowBhZxr2XtTFg== caps: [mon] allow profile bootstrap-rgw client.crash.host04 key: AQCQYEtgz8lGGhAAy5bJS8VH9fMdxuAZ3CqX5Q== caps: [mgr] profile crash caps: [mon] profile crash client.crash.host02 key: AQDuYUtgqgfdOhAAsyX+Mo35M+HFpURGad7nJA== caps: [mgr] profile crash caps: [mon] profile crash 
client.crash.host03 key: AQB98E5g5jHZAxAAklWSvmDsh2JaL5G7FvMrrA== caps: [mgr] profile crash caps: [mon] profile crash client.nfs.foo.host03 key: AQCgTk9gm+HvMxAAHbjG+XpdwL6prM/uMcdPdQ== caps: [mon] allow r caps: [osd] allow rw pool=nfs-ganesha namespace=foo client.nfs.foo.host03-rgw key: AQCgTk9g8sJQNhAAPykcoYUuPc7IjubaFx09HQ== caps: [mon] allow r caps: [osd] allow rwx tag rgw *=* client.rgw.test_realm.test_zone.host01.hgbvnq key: AQD5RE9gAQKdCRAAJzxDwD/dJObbInp9J95sXw== caps: [mgr] allow rw caps: [mon] allow * caps: [osd] allow rwx tag rgw *=* client.rgw.test_realm.test_zone.host02.yqqilm key: AQD0RE9gkxA4ExAAFXp3pLJWdIhsyTe2ZR6Ilw== caps: [mgr] allow rw caps: [mon] allow * caps: [osd] allow rwx tag rgw *=* mgr.host01.hdhzwn key: AQAEYEtg3lhIBxAAmHodoIpdvnxK0llWF80ltQ== caps: [mds] allow * caps: [mon] profile mgr caps: [osd] allow * mgr.host02.eobuuv key: AQAn6U5gzUuiABAA2Fed+jPM1xwb4XDYtrQxaQ== caps: [mds] allow * caps: [mon] profile mgr caps: [osd] allow * mgr.host03.wquwpj key: AQAd6U5gIzWsLBAAbOKUKZlUcAVe9kBLfajMKw== caps: [mds] allow * caps: [mon] profile mgr caps: [osd] allow *", "ceph auth export TYPE . ID", "ceph auth export mgr.host02.eobuuv", "ceph auth export TYPE . ID -o FILE_NAME", "ceph auth export osd.9 -o filename export auth(key=AQBV7U5g1XDULhAAKo2tw6ZhH1jki5aVui2v7g==)", "ceph auth add client.john mon 'allow r' osd 'allow rw pool=mypool' ceph auth get-or-create client.paul mon 'allow r' osd 'allow rw pool=mypool' ceph auth get-or-create client.george mon 'allow r' osd 'allow rw pool=mypool' -o george.keyring ceph auth get-or-create-key client.ringo mon 'allow r' osd 'allow rw pool=mypool' -o ringo.key", "ceph auth caps USERTYPE . USERID DAEMON 'allow [r|w|x|*|...] [pool= POOL_NAME ] [namespace= NAMESPACE_NAME ]'", "ceph auth caps client.john mon 'allow r' osd 'allow rw pool=mypool' ceph auth caps client.paul mon 'allow rw' osd 'allow rwx pool=mypool' ceph auth caps client.brian-manager mon 'allow *' osd 'allow *'", "ceph auth caps client.ringo mon ' ' osd ' '", "ceph auth del TYPE . ID", "ceph auth del osd.6", "ceph auth print-key TYPE . ID", "ceph auth print-key osd.6 AQBQ7U5gAry3JRAA3NoPrqBBThpFMcRL6Sr+5w==[ceph: root@host01 /]#", "ceph-volume lvm", "ceph auth get client. ID -o ceph.client. 
ID .keyring", "ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring", "ceph-volume lvm prepare --bluestore --data VOLUME_GROUP / LOGICAL_VOLUME", "ceph-volume lvm prepare --bluestore --data example_vg/data_lv", "ceph-volume lvm prepare --bluestore --block.db BLOCK_DB_DEVICE --block.wal BLOCK_WAL_DEVICE --data DATA_DEVICE", "ceph-volume lvm prepare --bluestore --block.db /dev/sda --block.wal /dev/sdb --data /dev/sdc", "ceph-volume lvm prepare --bluestore --dmcrypt --data VOLUME_GROUP / LOGICAL_VOLUME", "ceph-volume lvm prepare --bluestore --dmcrypt --data example_vg/data_lv", "1 devices have fault light turned on", "ceph-volume lvm list ====== osd.6 ======= [block] /dev/ceph-83909f70-95e9-4273-880e-5851612cbe53/osd-block-7ce687d9-07e7-4f8f-a34e-d1b0efb89920 block device /dev/ceph-83909f70-95e9-4273-880e-5851612cbe53/osd-block-7ce687d9-07e7-4f8f-a34e-d1b0efb89920 block uuid 4d7gzX-Nzxp-UUG0-bNxQ-Jacr-l0mP-IPD8cX cephx lockbox secret cluster fsid 1ca9f6a8-d036-11ec-8263-fa163ee967ad cluster name ceph crush device class None encrypted 0 osd fsid 7ce687d9-07e7-4f8f-a34e-d1b0efb89920 osd id 6 osdspec affinity all-available-devices type block vdo 0 devices /dev/vdc", "ceph device ls-lights { \"fault\": [ \"SEAGATE_ST12000NM002G_ZL2KTGCK0000C149\" ], \"ident\": [] }", "ceph device light off DEVICE_NAME FAULT/INDENT --force", "ceph device light off SEAGATE_ST12000NM002G_ZL2KTGCK0000C149 fault --force", "ceph-volume lvm list", "ceph-volume lvm activate --bluestore OSD_ID OSD_FSID", "ceph-volume lvm activate --bluestore 10 7ce687d9-07e7-4f8f-a34e-d1b0efb89920", "ceph-volume lvm activate --all", "ceph-volume lvm trigger SYSTEMD_DATA", "ceph-volume lvm trigger 10 7ce687d9-07e7-4f8f-a34e-d1b0efb89920", "ceph-volume lvm list", "ceph-volume lvm deactivate OSD_ID", "ceph-volume lvm deactivate 16", "ceph-volume lvm create --bluestore --data VOLUME_GROUP / LOGICAL_VOLUME", "ceph-volume lvm create --bluestore --data example_vg/data_lv", "cephadm shell", "ceph orch daemon stop osd.1", "cephadm shell --mount /var/lib/ceph/72436d46-ca06-11ec-9809-ac1f6b5635ee/osd.1:/var/lib/ceph/osd/ceph-1", "ceph-volume lvm new-db --osd-id OSD_ID --osd-fsid OSD_FSID --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME", "ceph-volume lvm new-db --osd-id 1 --osd-fsid 7ce687d9-07e7-4f8f-a34e-d1b0efb89921 --target vgname/new_db ceph-volume lvm new-wal --osd-id 1 --osd-fsid 7ce687d9-07e7-4f8f-a34e-d1b0efb89921 --target vgname/new_wal", "ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME", "ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data --target vgname/db", "ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME", "ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data --target vgname/new_db", "ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from db --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME", "ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from db --target vgname/new_db", "ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data db --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME", "ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data db --target vgname/new_db", "ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data db wal --target VOLUME_GROUP_NAME / 
LOGICAL_VOLUME_NAME", "ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data db --target vgname/new_db", "ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from db wal --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME", "ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from db wal --target vgname/data", "ceph-volume lvm list ====== osd.3 ======= [db] /dev/db-test/db1 block device /dev/test/lv1 block uuid N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB cephx lockbox secret cluster fsid 1a6112da-ed05-11ee-bacd-525400565cda cluster name ceph crush device class db device /dev/db-test/db1 db uuid 1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6 encrypted 0 osd fsid 94ff742c-7bfd-4fb5-8dc4-843d10ac6731 osd id 3 osdspec affinity None type db vdo 0 devices /dev/vdh [block] /dev/test/lv1 block device /dev/test/lv1 block uuid N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB cephx lockbox secret cluster fsid 1a6112da-ed05-11ee-bacd-525400565cda cluster name ceph crush device class db device /dev/db-test/db1 db uuid 1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6 encrypted 0 osd fsid 94ff742c-7bfd-4fb5-8dc4-843d10ac6731 osd id 3 osdspec affinity None type block vdo 0 devices /dev/vdg", "vgs VG #PV #LV #SN Attr VSize VFree db-test 1 1 0 wz--n- <200.00g <160.00g test 1 1 0 wz--n- <200.00g <170.00g", "systemctl stop [email protected]", "lvresize -l 100%FREE /dev/db-test/db1 Size of logical volume db-test/db1 changed from 40.00 GiB (10240 extents) to <160.00 GiB (40959 extents). Logical volume db-test/db1 successfully resized.", "cephadm shell -m /var/lib/ceph/ CLUSTER_FSID /osd. OSD_ID :/var/lib/ceph/osd/ceph- OSD_ID :z", "cephadm shell -m /var/lib/ceph/1a6112da-ed05-11ee-bacd-525400565cda/osd.3:/var/lib/ceph/osd/ceph-3:z", "ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH", "ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/ inferring bluefs devices from bluestore path { \"/var/lib/ceph/osd/ceph-3/block\": { \"osd_uuid\": \"94ff742c-7bfd-4fb5-8dc4-843d10ac6731\", \"size\": 32212254720, \"btime\": \"2024-04-03T08:34:12.742848+0000\", \"description\": \"main\", \"bfm_blocks\": \"7864320\", \"bfm_blocks_per_key\": \"128\", \"bfm_bytes_per_block\": \"4096\", \"bfm_size\": \"32212254720\", \"bluefs\": \"1\", \"ceph_fsid\": \"1a6112da-ed05-11ee-bacd-525400565cda\", \"ceph_version_when_created\": \"ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)\", \"created_at\": \"2024-04-03T08:34:15.637253Z\", \"elastic_shared_blobs\": \"1\", \"kv_backend\": \"rocksdb\", \"magic\": \"ceph osd volume v026\", \"mkfs_done\": \"yes\", \"osd_key\": \"AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==\", \"osdspec_affinity\": \"None\", \"ready\": \"ready\", \"require_osd_release\": \"19\", \"whoami\": \"3\" }, \"/var/lib/ceph/osd/ceph-3/block.db\": { \"osd_uuid\": \"94ff742c-7bfd-4fb5-8dc4-843d10ac6731\", \"size\": 40794497536, \"btime\": \"2024-04-03T08:34:12.748816+0000\", \"description\": \"bluefs db\" } }", "ceph-bluestore-tool bluefs-bdev-expand --path OSD_DIRECTORY_PATH", "ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3/ inferring bluefs devices from bluestore path 1 : device size 0x27ffbfe000 : using 0x2300000(35 MiB) 2 : device size 0x780000000 : using 0x52000(328 KiB) Expanding DB/WAL 1 : expanding to 0x171794497536 1 : size label updated to 171794497536", "ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH", "ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/ inferring bluefs 
devices from bluestore path { \"/var/lib/ceph/osd/ceph-3/block\": { \"osd_uuid\": \"94ff742c-7bfd-4fb5-8dc4-843d10ac6731\", \"size\": 32212254720, \"btime\": \"2024-04-03T08:34:12.742848+0000\", \"description\": \"main\", \"bfm_blocks\": \"7864320\", \"bfm_blocks_per_key\": \"128\", \"bfm_bytes_per_block\": \"4096\", \"bfm_size\": \"32212254720\", \"bluefs\": \"1\", \"ceph_fsid\": \"1a6112da-ed05-11ee-bacd-525400565cda\", \"ceph_version_when_created\": \"ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)\", \"created_at\": \"2024-04-03T08:34:15.637253Z\", \"elastic_shared_blobs\": \"1\", \"kv_backend\": \"rocksdb\", \"magic\": \"ceph osd volume v026\", \"mkfs_done\": \"yes\", \"osd_key\": \"AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==\", \"osdspec_affinity\": \"None\", \"ready\": \"ready\", \"require_osd_release\": \"19\", \"whoami\": \"3\" }, \"/var/lib/ceph/osd/ceph-3/block.db\": { \"osd_uuid\": \"94ff742c-7bfd-4fb5-8dc4-843d10ac6731\", \"size\": 171794497536, \"btime\": \"2024-04-03T08:34:12.748816+0000\", \"description\": \"bluefs db\" } }", "systemctl start [email protected] osd.3 host01 running (15s) 0s ago 13m 46.9M 4096M 19.0.0-2493-gd82c9aa1 3714003597ec 02150b3b6877", "ceph-volume lvm batch --bluestore PATH_TO_DEVICE [ PATH_TO_DEVICE ]", "ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/nvme0n1", "ceph-volume lvm zap VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME [--destroy]", "ceph-volume lvm zap osd-vg/data-lv", "ceph-volume lvm zap DEVICE_PATH_PARTITION [--destroy]", "ceph-volume lvm zap /dev/sdc1", "ceph-volume lvm zap DEVICE_PATH --destroy", "ceph-volume lvm zap /dev/sdc --destroy", "ceph-volume lvm zap --destroy --osd-id OSD_ID", "ceph-volume lvm zap --destroy --osd-id 16", "ceph-volume lvm zap --destroy --osd-fsid OSD_FSID", "ceph-volume lvm zap --destroy --osd-fsid 65d7b6b1-e41a-4a3c-b363-83ade63cb32b", "echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync", "ceph osd pool create testbench 100 100", "rados bench -p testbench 10 write --no-cleanup Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects Object prefix: benchmark_data_cephn1.home.network_10510 sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 16 16 0 0 0 - 0 2 16 16 0 0 0 - 0 3 16 16 0 0 0 - 0 4 16 17 1 0.998879 1 3.19824 3.19824 5 16 18 2 1.59849 4 4.56163 3.87993 6 16 18 2 1.33222 0 - 3.87993 7 16 19 3 1.71239 2 6.90712 4.889 8 16 25 9 4.49551 24 7.75362 6.71216 9 16 25 9 3.99636 0 - 6.71216 10 16 27 11 4.39632 4 9.65085 7.18999 11 16 27 11 3.99685 0 - 7.18999 12 16 27 11 3.66397 0 - 7.18999 13 16 28 12 3.68975 1.33333 12.8124 7.65853 14 16 28 12 3.42617 0 - 7.65853 15 16 28 12 3.19785 0 - 7.65853 16 11 28 17 4.24726 6.66667 12.5302 9.27548 17 11 28 17 3.99751 0 - 9.27548 18 11 28 17 3.77546 0 - 9.27548 19 11 28 17 3.57683 0 - 9.27548 Total time run: 19.505620 Total writes made: 28 Write size: 4194304 Bandwidth (MB/sec): 5.742 Stddev Bandwidth: 5.4617 Max bandwidth (MB/sec): 24 Min bandwidth (MB/sec): 0 Average Latency: 10.4064 Stddev Latency: 3.80038 Max latency: 19.503 Min latency: 3.19824", "rados bench -p testbench 10 seq sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 Total time run: 0.804869 Total reads made: 28 Read size: 4194304 Bandwidth (MB/sec): 139.153 Average Latency: 0.420841 Max latency: 0.706133 Min latency: 0.0816332", "rados bench -p testbench 10 rand sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 16 46 30 119.801 120 0.440184 
0.388125 2 16 81 65 129.408 140 0.577359 0.417461 3 16 120 104 138.175 156 0.597435 0.409318 4 15 157 142 141.485 152 0.683111 0.419964 5 16 206 190 151.553 192 0.310578 0.408343 6 16 253 237 157.608 188 0.0745175 0.387207 7 16 287 271 154.412 136 0.792774 0.39043 8 16 325 309 154.044 152 0.314254 0.39876 9 16 362 346 153.245 148 0.355576 0.406032 10 16 405 389 155.092 172 0.64734 0.398372 Total time run: 10.302229 Total reads made: 405 Read size: 4194304 Bandwidth (MB/sec): 157.248 Average Latency: 0.405976 Max latency: 1.00869 Min latency: 0.0378431", "rados bench -p testbench 10 write -t 4 --run-name client1 Maintaining 4 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects Object prefix: benchmark_data_node1_12631 sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0 0 0 - 0 1 4 4 0 0 0 - 0 2 4 6 2 3.99099 4 1.94755 1.93361 3 4 8 4 5.32498 8 2.978 2.44034 4 4 8 4 3.99504 0 - 2.44034 5 4 10 6 4.79504 4 2.92419 2.4629 6 3 10 7 4.64471 4 3.02498 2.5432 7 4 12 8 4.55287 4 3.12204 2.61555 8 4 14 10 4.9821 8 2.55901 2.68396 9 4 16 12 5.31621 8 2.68769 2.68081 10 4 17 13 5.18488 4 2.11937 2.63763 11 4 17 13 4.71431 0 - 2.63763 12 4 18 14 4.65486 2 2.4836 2.62662 13 4 18 14 4.29757 0 - 2.62662 Total time run: 13.123548 Total writes made: 18 Write size: 4194304 Bandwidth (MB/sec): 5.486 Stddev Bandwidth: 3.0991 Max bandwidth (MB/sec): 8 Min bandwidth (MB/sec): 0 Average Latency: 2.91578 Stddev Latency: 0.956993 Max latency: 5.72685 Min latency: 1.91967", "rados -p testbench cleanup", "rbd bench --io-type write image01 --pool=testbench bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern seq SEC OPS OPS/SEC BYTES/SEC 2 11127 5479.59 22444382.79 3 11692 3901.91 15982220.33 4 12372 2953.34 12096895.42 5 12580 2300.05 9421008.60 6 13141 2101.80 8608975.15 7 13195 356.07 1458459.94 8 13820 390.35 1598876.60 9 14124 325.46 1333066.62 ..", "cd /mnt/ceph-block-device cd /mnt/ceph-file-system", "fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=4k --iodepth=32 --size=5G --runtime=60 --group_reporting=1", "fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=8k --iodepth=32 --size=5G --runtime=60 --group_reporting=1", "time s3cmd put PATH_OF_SOURCE_FILE PATH_OF_DESTINATION_FILE", "time s3cmd put /path-to-local-file s3://bucket-name/remote/file", "time s3cmd get PATH_OF_DESTINATION_FILE DESTINATION_PATH", "time s3cmd get s3://bucket-name/remote/file /path-to-local-destination", "time s3cmd ls s3:// BUCKET_NAME", "time s3cmd ls s3://bucket-name", "ceph daemon DAEMON_NAME perf schema", "ceph daemon mon.host01 perf schema", "ceph daemon osd.11 perf schema", "ceph daemon DAEMON_NAME perf dump", "ceph daemon mon.host01 perf dump", "ceph daemon osd.11 perf dump", "cephadm shell", "ceph config set osd. OSD_ID osd_mclock_profile VALUE", "ceph config set osd.0 osd_mclock_profile high_recovery_ops", "ceph config set osd osd_mclock_profile VALUE", "cephadm shell", "ceph config set osd. OSD_ID osd_mclock_profile custom", "ceph config set osd.0 osd_mclock_profile custom", "ceph config set osd osd_mclock_profile custom", "ceph config set osd. OSD_ID MCLOCK_CONFIGURATION_OPTION VALUE", "ceph config set osd.0 osd_mclock_scheduler_client_res 0.5", "cephadm shell", "ceph config set osd osd_mclock_profile MCLOCK_PROFILE", "ceph config set osd osd_mclock_profile high_client_ops", "ceph config dump", "ceph config rm osd MCLOCK_CONFIGURATION_OPTION", "ceph config rm osd osd_mclock_scheduler_client_res", "ceph config show osd. 
OSD_ID", "ceph config show osd.0", "cephadm shell", "ceph tell osd. OSD_ID injectargs '-- MCLOCK_CONFIGURATION_OPTION = VALUE '", "ceph tell osd.0 injectargs '--osd_mclock_profile=high_recovery_ops'", "ceph daemon osd. OSD_ID config set MCLOCK_CONFIGURATION_OPTION VALUE", "ceph daemon osd.0 config set osd_mclock_profile high_recovery_ops", "cephadm shell", "ceph config set osd osd_mclock_override_recovery_settings true", "ceph config set osd OPTION VALUE", "ceph config set osd osd_max_backfills_ 5", "ceph config show osd. OSD_ID_ | grep OPTION", "ceph config show osd.0 | grep osd_max_backfills", "ceph config set osd osd_mclock_override_recovery_settings false", "ceph config show osd.0 osd_mclock_iops_capacity_threshold_hdd 500.000000 ceph config show osd.0 osd_mclock_iops_capacity_threshold_ssd 80000.000000", "ceph config show osd. N osd_mclock_max_capacity_iops_[hdd, ssd]", "ceph config show osd.0 osd_mclock_max_capacity_iops_ssd", "3403 Sep 11 11:52:50 dell-r640-039.dsal.lab.eng.rdu2.redhat.com ceph-osd[70342]: log_channel(cluster) log [WRN] : OSD bench result of 49691.213005 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.27. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].", "cephadm shell", "ceph config show osd. OSD_ID osd_mclock_max_capacity_iops_[hdd, ssd]", "ceph config show osd.0 osd_mclock_max_capacity_iops_ssd 21500.000000", "cephadm shell", "ceph tell osd. OSD_ID bench [ TOTAL_BYTES ] [ BYTES_PER_WRITE ] [ OBJ_SIZE ] [ NUM_OBJS ]", "ceph tell osd.0 bench 12288000 4096 4194304 100 { \"bytes_written\": 12288000, \"blocksize\": 4096, \"elapsed_sec\": 1.3718913019999999, \"bytes_per_sec\": 8956977.8466311768, \"iops\": 2186.7621695876896 }", "ceph tell osd. OSD_ID cache drop", "ceph tell osd.0 cache drop", "cephadm shell", "ceph tell osd. OSD_ID bench 12288000 4096 4194304 100", "ceph tell osd.0 bench 12288000 4096 4194304 100 { \"bytes_written\": 12288000, \"blocksize\": 4096, \"elapsed_sec\": 1.3718913019999999, \"bytes_per_sec\": 8956977.8466311768, \"iops\": 2186.7621695876896 1 }", "ceph config set osd. OSD_ID bluestore_throttle_bytes 32768 ceph config set osd. OSD_ID bluestore_throttle_deferred_bytes 32768", "ceph config set osd.0 bluestore_throttle_bytes 32768 ceph config set osd.0 bluestore_throttle_deferred_bytes 32768", "ceph tell osd.0 bench 12288000 4096 4194304 100", "cephadm shell", "ceph config set osd. OSD_ID osd_mclock_max_capacity_iops_[hdd,ssd] VALUE", "ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 350", "ceph config set osd. OSD_ID bluestore_min_alloc_size_DEVICE_NAME_ VALUE", "ceph config set osd.4 bluestore_min_alloc_size_hdd 8192", "systemctl restart SERVICE_ID", "systemctl restart [email protected]", "ceph daemon osd. OSD_ID config get bluestore_min_alloc_size__DEVICE_", "ceph daemon osd.4 config get bluestore_min_alloc_size_hdd ceph daemon osd.4 config get bluestore_min_alloc_size { \"bluestore_min_alloc_size\": \"8192\" }", "cd /usr/share/cephadm-ansible", "ansible-playbook -i hosts rocksdb-resharding.yml -e osd_id= OSD_ID -e admin_node= HOST_NAME", "ansible-playbook -i hosts rocksdb-resharding.yml -e osd_id=7 -e admin_node=host03 ............ 
TASK [stop the osd] *********************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:18 +0000 (0:00:00.037) 0:00:03.864 **** changed: [localhost -> host03] TASK [set_fact ceph_cmd] ****************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:32 +0000 (0:00:14.128) 0:00:17.992 **** ok: [localhost -> host03] TASK [check fs consistency with fsck before resharding] *********************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:32 +0000 (0:00:00.041) 0:00:18.034 **** ok: [localhost -> host03] TASK [show current sharding] ************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:43 +0000 (0:00:11.053) 0:00:29.088 **** ok: [localhost -> host03] TASK [reshard] **************************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:45 +0000 (0:00:01.446) 0:00:30.534 **** ok: [localhost -> host03] TASK [check fs consistency with fsck after resharding] ************************************************************************************************************************************************************ Wednesday 29 November 2023 11:25:46 +0000 (0:00:01.479) 0:00:32.014 **** ok: [localhost -> host03] TASK [restart the osd] ******************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:57 +0000 (0:00:10.699) 0:00:42.714 **** changed: [localhost -> host03]", "ceph orch daemon stop osd.7", "cephadm shell --name osd.7", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-7/ show-sharding m(3) p(3,0-12) O(3,0-13) L P", "ceph orch daemon start osd.7", "cephadm shell", "ceph orch ps", "cephadm unit --name OSD_ID stop", "cephadm unit --name osd.0 stop", "cephadm shell --name OSD_ID", "cephadm shell --name osd.0", "ceph-bluestore-tool --path/var/lib/ceph/osd/ceph- OSD_ID / fsck", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0/ fsck fsck success", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph- OSD_ID / show-sharding", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-6/ show-sharding m(3) p(3,0-12) O(3,0-13) L P", "ceph-bluestore-tool --log-level 10 -l log.txt --path /var/lib/ceph/osd/ceph- OSD_ID / --sharding=\"m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P\" reshard", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-6/ --sharding=\"m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P\" reshard reshard success", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph- OSD_ID / show-sharding", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-6/ show-sharding m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P", "exit", "cephadm unit --name OSD_ID start", "cephadm unit --name osd.0 start", "ceph daemon OSD_ID bluestore allocator score block", "ceph daemon osd.123 
bluestore allocator score block", "ceph daemon OSD_ID bluestore allocator dump block", "ceph daemon osd.123 bluestore allocator dump block", "cephadm shell --name osd. ID", "cephadm shell --name osd.2 Inferring fsid 110bad0a-bc57-11ee-8138-fa163eb9ffc2 Inferring config /var/lib/ceph/110bad0a-bc57-11ee-8138-fa163eb9ffc2/osd.2/config Using ceph image with id `17334f841482` and tag `ceph-7-rhel-9-containers-candidate-59483-20240301201929` created on 2024-03-01 20:22:41 +0000 UTC registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:09fc3e5baf198614d70669a106eb87dbebee16d4e91484375778d4adbccadacd", "ceph-bluestore-tool --path PATH_TO_OSD_DATA_DIRECTORY --allocator block free-score", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-score", "ceph-bluestore-tool --path PATH_TO_OSD_DATA_DIRECTORY --allocator block free-dump block: { \"fragmentation_rating\": 0.018290238194701977 }", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-dump block: { \"capacity\": 21470642176, \"alloc_unit\": 4096, \"alloc_type\": \"hybrid\", \"alloc_name\": \"block\", \"extents\": [ { \"offset\": \"0x370000\", \"length\": \"0x20000\" }, { \"offset\": \"0x3a0000\", \"length\": \"0x10000\" }, { \"offset\": \"0x3f0000\", \"length\": \"0x20000\" }, { \"offset\": \"0x460000\", \"length\": \"0x10000\" },", "cephadm shell", "ceph config get osd bluefs_buffered_io", "ceph config get OSD_ID bluefs_buffered_io", "ceph config get osd.2 bluefs_buffered_io", "ceph config show OSD_ID bluefs_buffered_io", "ceph config show osd.3 bluefs_buffered_io", "cephadm shell", "ceph daemon osd. OSD_ID bluefs stats", "ceph daemon osd.1 bluefs stats 1 : device size 0x3bfc00000 : using 0x1a428000(420 MiB) wal_total:0, db_total:15296836403, slow_total:0", "ceph daemon osd.1 bluefs stats 0 : 1 : device size 0x1dfbfe000 : using 0x1100000(17 MiB) 2 : device size 0x27fc00000 : using 0x248000(2.3 MiB) RocksDBBlueFSVolumeSelector: wal_total:0, db_total:7646425907, slow_total:10196562739, db_avail:935539507 Usage matrix: DEV/LEV WAL DB SLOW * * REAL FILES LOG 0 B 4 MiB 0 B 0 B 0 B 756 KiB 1 WAL 0 B 4 MiB 0 B 0 B 0 B 3.3 MiB 1 DB 0 B 9 MiB 0 B 0 B 0 B 76 KiB 10 SLOW 0 B 0 B 0 B 0 B 0 B 0 B 0 TOTALS 0 B 17 MiB 0 B 0 B 0 B 0 B 12 MAXIMUMS: LOG 0 B 4 MiB 0 B 0 B 0 B 756 KiB WAL 0 B 4 MiB 0 B 0 B 0 B 3.3 MiB DB 0 B 11 MiB 0 B 0 B 0 B 112 KiB SLOW 0 B 0 B 0 B 0 B 0 B 0 B TOTALS 0 B 17 MiB 0 B 0 B 0 B 0 B", "ceph-bluestore-tool COMMAND [ --dev DEVICE ... ] [ -i OSD_ID ] [ --path OSD_PATH ] [ --out-dir DIR ] [ --log-file | -l filename ] [ --deep ] ceph-bluestore-tool fsck|repair --path OSD_PATH [ --deep ] ceph-bluestore-tool qfsck --path OSD_PATH ceph-bluestore-tool allocmap --path OSD_PATH ceph-bluestore-tool restore_cfb --path OSD_PATH ceph-bluestore-tool show-label --dev DEVICE ... ceph-bluestore-tool prime-osd-dir --dev DEVICE --path OSD_PATH ceph-bluestore-tool bluefs-export --path OSD_PATH --out-dir DIR ceph-bluestore-tool bluefs-bdev-new-wal --path OSD_PATH --dev-target NEW_DEVICE ceph-bluestore-tool bluefs-bdev-new-db --path OSD_PATH --dev-target NEW_DEVICE ceph-bluestore-tool bluefs-bdev-migrate --path OSD_PATH --dev-target NEW_DEVICE --devs-source DEVICE1 [--devs-source DEVICE2 ] ceph-bluestore-tool free-dump|free-score --path OSD_PATH [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ] ceph-bluestore-tool reshard --path OSD_PATH --sharding NEW_SHARDING [ --sharding-ctrl CONTROL_STRING ] ceph-bluestore-tool show-sharding --path OSD_PATH", "ceph orch daemon stop osd. 
ID", "ceph orch daemon stop osd.2", "cephadm shell --name osd. ID", "ceph shell --name osd.2", "ceph-bluestore-tool bluefs-bdev-new-wal --dev-target /dev/test/newdb --path /var/lib/ceph/osd/ceph-0", "ceph orch daemon start osd. ID", "ceph orch daemon start osd.2", "ceph tell OSD_ID dump_metrics ceph tell OSD_ID dump_metrics reactor_utilization", "ceph tell osd.0 dump_metrics ceph tell osd.0 dump_metrics reactor_utilization", "cephadm --image quay.ceph.io/ceph-ci/ceph:b682861f8690608d831f58603303388dd7915aa7-crimson bootstrap --mon-ip 10.1.240.54 --allow-fqdn-hostname --initial-dashboard-password Ceph_Crims", "cephadm shell", "ceph config set global 'enable_experimental_unrecoverable_data_corrupting_features' crimson", "ceph osd set-allow-crimson --yes-i-really-mean-it", "ceph config set mon osd_pool_default_crimson true", "dnf install libnbd git clone git://git.kernel.dk/fio.git cd fio ./configure --enable-libnbd make", "cd build ninja crimson-store-nbd", "export disk_img=/tmp/disk.img export unix_socket=/tmp/store_nbd_socket.sock rm -f USDdisk_img USDunix_socket truncate -s 512M USDdisk_img ./bin/crimson-store-nbd --device-path USDdisk_img --smp 1 --mkfs true --type transaction_manager --uds-path USD{unix_socket} & --smp is the CPU cores. --mkfs initializes the device first. --type is the backend.", "[global] ioengine=nbd uri=nbd+unix:///?socket=USD{unix_socket} rw=randrw time_based runtime=120 group_reporting iodepth=1 size=512M [job0] offset=0", "./fio nbd.fio", "git checkout master make crimson-osd ../src/script/run-cbt.sh --cbt ~/dev/cbt -a /tmp/baseline ../src/test/crimson/cbt/radosbench_4K_read.yaml git checkout topic make crimson-osd ../src/script/run-cbt.sh --cbt ~/dev/cbt -a /tmp/yap ../src/test/crimson/cbt/radosbench_4K_read.yaml", "~/dev/cbt/compare.py -b /tmp/baseline -a /tmp/yap -v", "ceph orch pause", "ceph orch set backend '' ceph mgr module disable cephadm", "ceph mgr module enable cephadm ceph orch set backend cephadm", "ceph orch ls --service_name SERVICE_NAME --format yaml", "ceph orch ls --service_name alertmanager --format yaml service_type: alertmanager service_name: alertmanager placement: hosts: - unknown_host status: running: 1 size: 1 events: - 2021-02-01T08:58:02.741162 service:alertmanager [INFO] \"service was created\" - '2021-02-01T12:09:25.264584 service:alertmanager [ERROR] \"Failed to apply: Cannot place <AlertManagerSpec for service_name=alertmanager> on unknown_host: Unknown hosts\"'", "ceph orch ps --service-name SERVICE_NAME --daemon-id DAEMON_ID --format yaml", "ceph orch ps --service-name mds --daemon-id cephfs.hostname.ppdhsz --format yaml daemon_type: mds daemon_id: cephfs.hostname.ppdhsz hostname: hostname status_desc: running events: - 2021-02-01T08:59:43.845866 daemon:mds.cephfs.hostname.ppdhsz [INFO] \"Reconfigured mds.cephfs.hostname.ppdhsz on host 'hostname'\"", "ceph -W cephadm", "ceph log last cephadm", "cephadm logs --name DAEMON_NAME", "cephadm logs --name cephfs.hostname.ppdhsz", "cephadm logs --fsid FSID --name DAEMON_NAME", "cephadm logs --fsid 2d2fd136-6df1-11ea-ae74-002590e526e8 --name cephfs.hostname.ppdhsz", "for name in USD(cephadm ls | python3 -c \"import sys, json; [print(i['name']) for i in json.load(sys.stdin)]\") ; do cephadm logs --fsid FSID_OF_CLUSTER --name \"USDname\" > USDname; done", "for name in USD(cephadm ls | python3 -c \"import sys, json; [print(i['name']) for i in json.load(sys.stdin)]\") ; do cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name \"USDname\" > USDname; done", "[root@host01 ~]USD 
systemctl status [email protected]", "podman ps -a --format json | jq '.[].Image' \"docker.io/library/rhel9\" \"registry.redhat.io/rhceph-alpha/rhceph-6-rhel9@sha256:9aaea414e2c263216f3cdcb7a096f57c3adf6125ec9f4b0f5f65fa8c43987155\"", "execnet.gateway_bootstrap.HostNotFound: -F /tmp/cephadm-conf-73z09u6g -i /tmp/cephadm-identity-ky7ahp_5 [email protected] raise OrchestratorError(msg) from e orchestrator._interface.OrchestratorError: Failed to connect to 10.10.1.2 (10.10.1.2). Please make sure that the host is reachable and accepts connections using the cephadm SSH key", "ceph config-key get mgr/cephadm/ssh_identity_key > ~/cephadm_private_key INFO:cephadm:Inferring fsid f8edc08a-7f17-11ea-8707-000c2915dd98 INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15 obtained 'mgr/cephadm/ssh_identity_key' chmod 0600 ~/cephadm_private_key", "chmod 0600 ~/cephadm_private_key", "cat ~/cephadm_private_key | ceph cephadm set-ssk-key -i-", "ceph cephadm get-ssh-config", "ssh -F config -i ~/cephadm_private_key root@host01", "ceph cephadm get-pub-key grep \"`cat ~/ceph.pub`\" /root/.ssh/authorized_keys", "ceph config set host public_network hostnetwork", "cephadm enter --name cephfs.hostname.ppdhsz ceph --admin-daemon /var/run/ceph/ceph-cephfs.hostname.ppdhsz.asok config show", "cephadm shell", "ceph config-key set mgr/cephadm/pause true", "ceph auth get-or-create mgr.host01.smfvfd1 mon \"profile mgr\" osd \"allow *\" mds \"allow *\" [mgr.host01.smfvfd1] key = AQDhcORgW8toCRAAlMzlqWXnh3cGRjqYEa9ikw==", "ceph config generate-minimal-conf minimal ceph.conf for 8c9b0072-67ca-11eb-af06-001a4a0002a0 [global] fsid = 8c9b0072-67ca-11eb-af06-001a4a0002a0 mon_host = [v2:10.10.200.10:3300/0,v1:10.10.200.10:6789/0] [v2:10.10.10.100:3300/0,v1:10.10.200.100:6789/0]", "ceph config get \"mgr.host01.smfvfd1\" container_image", "{ { \"config\": \"# minimal ceph.conf for 8c9b0072-67ca-11eb-af06-001a4a0002a0\\n[global]\\n\\tfsid = 8c9b0072-67ca-11eb-af06-001a4a0002a0\\n\\tmon_host = [v2:10.10.200.10:3300/0,v1:10.10.200.10:6789/0] [v2:10.10.10.100:3300/0,v1:10.10.200.100:6789/0]\\n\", \"keyring\": \"[mgr.Ceph5-2.smfvfd1]\\n\\tkey = AQDhcORgW8toCRAAlMzlqWXnh3cGRjqYEa9ikw==\\n\" } }", "exit", "cephadm --image registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest deploy --fsid 8c9b0072-67ca-11eb-af06-001a4a0002a0 --name mgr.host01.smfvfd1 --config-json config-json.json", "ceph -s", "ceph -W cephadm", "2022-06-10T17:51:36.335728+0000 mgr.Ceph5-1.nqikfh [INF] refreshing Ceph5-adm facts 2022-06-10T17:51:37.170982+0000 mgr.Ceph5-1.nqikfh [INF] deploying 1 monitor(s) instead of 2 so monitors may achieve consensus 2022-06-10T17:51:37.173487+0000 mgr.Ceph5-1.nqikfh [ERR] It is NOT safe to stop ['mon.Ceph5-adm']: not enough monitors would be available (Ceph5-2) after stopping mons [Ceph5-adm] 2022-06-10T17:51:37.174415+0000 mgr.Ceph5-1.nqikfh [INF] Checking pool \"nfs-ganesha\" exists for service nfs.foo 2022-06-10T17:51:37.176389+0000 mgr.Ceph5-1.nqikfh [ERR] Failed to apply nfs.foo spec NFSServiceSpec({'placement': PlacementSpec(count=1), 'service_type': 'nfs', 'service_id': 'foo', 'unmanaged': False, 'preview_only': False, 'pool': 'nfs-ganesha', 'namespace': 'nfs-ns'}): Cannot find pool \"nfs-ganesha\" for service nfs.foo Traceback (most recent call last): File \"/usr/share/ceph/mgr/cephadm/serve.py\", line 408, in _apply_all_services if self._apply_service(spec): File \"/usr/share/ceph/mgr/cephadm/serve.py\", line 509, in _apply_service config_func(spec) File \"/usr/share/ceph/mgr/cephadm/services/nfs.py\", line 23, in 
config self.mgr._check_pool_exists(spec.pool, spec.service_name()) File \"/usr/share/ceph/mgr/cephadm/module.py\", line 1840, in _check_pool_exists raise OrchestratorError(f'Cannot find pool \"{pool}\" for ' orchestrator._interface.OrchestratorError: Cannot find pool \"nfs-ganesha\" for service nfs.foo 2022-06-10T17:51:37.179658+0000 mgr.Ceph5-1.nqikfh [INF] Found osd claims -> {} 2022-06-10T17:51:37.180116+0000 mgr.Ceph5-1.nqikfh [INF] Found osd claims for drivegroup all-available-devices -> {} 2022-06-10T17:51:37.182138+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-adm 2022-06-10T17:51:37.182987+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-1 2022-06-10T17:51:37.183395+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-2 2022-06-10T17:51:43.373570+0000 mgr.Ceph5-1.nqikfh [INF] Reconfiguring node-exporter.Ceph5-1 (unknown last config time) 2022-06-10T17:51:43.373840+0000 mgr.Ceph5-1.nqikfh [INF] Reconfiguring daemon node-exporter.Ceph5-1 on Ceph5-1", "ceph config set mgr mgr/cephadm/log_to_cluster_level debug ceph -W cephadm --watch-debug ceph -W cephadm --verbose", "ceph config set mgr mgr/cephadm/log_to_cluster_level info", "ceph log last cephadm", "journalctl -u ceph-5c5a50ae-272a-455d-99e9-32c6a013e694@host01", "ceph config set global log_to_stderr false ceph config set global mon_cluster_log_to_stderr false", "ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true", "service_type: grafana service_name: grafana custom_configs: - mount_path: /etc/example.conf content: | setting1 = value1 setting2 = value2 - mount_path: /usr/share/grafana/example.cert content: | -----BEGIN PRIVATE KEY----- V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWduYSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3MgZXQgYWNjdXNhbSBldCBqdXN0byBkdW8= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWduYSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3MgZXQgYWNjdXNhbSBldCBqdXN0byBkdW8= -----END CERTIFICATE-----", "ceph orch redeploy SERVICE_NAME", "ceph orch redeploy grafana", "ceph config set mgr mgr/cephadm/config_checks_enabled true", "ALL cephadm checks are disabled, use 'ceph config set mgr mgr/cephadm/config_checks_enabled true' to enable", "CEPHADM 8/8 checks enabled and executed (0 bypassed, 0 disabled). 
No issues detected", "ceph cephadm config-check status", "ceph cephadm config-check ls NAME HEALTHCHECK STATUS DESCRIPTION kernel_security CEPHADM_CHECK_KERNEL_LSM enabled checks SELINUX/Apparmor profiles are consistent across cluster hosts os_subscription CEPHADM_CHECK_SUBSCRIPTION enabled checks subscription states are consistent for all cluster hosts public_network CEPHADM_CHECK_PUBLIC_MEMBERSHIP enabled check that all hosts have a NIC on the Ceph public_netork osd_mtu_size CEPHADM_CHECK_MTU enabled check that OSD hosts share a common MTU setting osd_linkspeed CEPHADM_CHECK_LINKSPEED enabled check that OSD hosts share a common linkspeed network_missing CEPHADM_CHECK_NETWORK_MISSING enabled checks that the cluster/public networks defined exist on the Ceph hosts ceph_release CEPHADM_CHECK_CEPH_RELEASE enabled check for Ceph version consistency - ceph daemons should be on the same release (unless upgrade is active) kernel_version CEPHADM_CHECK_KERNEL_VERSION enabled checks that the MAJ.MIN of the kernel on Ceph hosts is consistent", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi INVENTORY_FILE HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address=10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: BOOTSTRAP_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: -name: NAME_OF_TASK cephadm_registry_login: state: STATE registry_url: REGISTRY_URL registry_username: REGISTRY_USER_NAME registry_password: REGISTRY_PASSWORD - name: NAME_OF_TASK cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: DASHBOARD_USER dashboard_password: DASHBOARD_PASSWORD allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME cluster_network: NETWORK_CIDR", "[ceph-admin@admin cephadm-ansible]USD sudo vi bootstrap.yml --- - name: bootstrap the cluster hosts: host01 become: true gather_facts: false tasks: - name: login to registry cephadm_registry_login: state: login registry_url: registry.redhat.io registry_username: user1 registry_password: mypassword1 - name: bootstrap initial cluster cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: mydashboarduser dashboard_password: mydashboardpassword allow_fqdn_hostname: true cluster_network: 10.10.128.0/28", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml -vvv", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts bootstrap.yml -vvv", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi INVENTORY_FILE NEW_HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" 
host06 labels=\"['osd']\" [admin] host01 monitor_address= 10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: HOST_TO_DELEGATE_TASK_TO - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: CEPH_COMMAND_TO_RUN register: REGISTER_NAME - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] debug: msg: \"{{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi add-hosts.yml --- - name: add additional hosts to the cluster hosts: all become: true gather_facts: true tasks: - name: add hosts to the cluster ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: host01 - name: list hosts in the cluster when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts when: inventory_hostname in groups['admin'] debug: msg: \"{{ host_list.stdout }}\"", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts add-hosts.yml", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE retries: NUMBER_OF_RETRIES delay: DELAY until: CONTINUE_UNTIL register: REGISTER_NAME - name: NAME_OF_TASK ansible.builtin.shell: cmd: ceph orch host ls register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \"{{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi remove-hosts.yml --- - name: remove host hosts: host01 become: true gather_facts: true tasks: - name: drain host07 ceph_orch_host: name: host07 state: drain - name: remove host from the cluster ceph_orch_host: name: host07 state: absent retries: 20 delay: 1 until: result is succeeded register: result - name: list hosts in the cluster ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts debug: msg: \"{{ host_list.stdout }}\"", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts remove-hosts.yml", "TASK [print current hosts] ****************************************************************************************************** Friday 24 June 2022 14:52:40 -0400 (0:00:03.365) 0:02:31.702 *********** ok: [host01] => msg: |- HOST ADDR LABELS STATUS host01 10.10.128.68 _admin mon mgr host02 10.10.128.69 mon mgr host03 10.10.128.70 mon mgr host04 10.10.128.71 osd host05 10.10.128.72 osd host06 10.10.128.73 osd", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: 
ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION value: VALUE_OF_PARAMETER_TO_SET - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \" MESSAGE_TO_DISPLAY {{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi change_configuration.yml --- - name: set pool delete hosts: host01 become: true gather_facts: false tasks: - name: set the allow pool delete option ceph_config: action: set who: mon option: mon_allow_pool_delete value: true - name: get the allow pool delete setting ceph_config: action: get who: mon option: mon_allow_pool_delete register: verify_mon_allow_pool_delete - name: print current mon_allow_pool_delete setting debug: msg: \"the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}\"", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts change_configuration.yml", "TASK [print current mon_allow_pool_delete setting] ************************************************************* Wednesday 29 June 2022 13:51:41 -0400 (0:00:05.523) 0:00:17.953 ******** ok: [host01] => msg: the value of 'mon_allow_pool_delete' is true", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_apply: spec: | service_type: SERVICE_TYPE service_id: UNIQUE_NAME_OF_SERVICE placement: host_pattern: ' HOST_PATTERN_TO_SELECT_HOSTS ' label: LABEL spec: SPECIFICATION_OPTIONS :", "[ceph-admin@admin cephadm-ansible]USD sudo vi deploy_osd_service.yml --- - name: deploy osd service hosts: host01 become: true gather_facts: true tasks: - name: apply osd spec ceph_orch_apply: spec: | service_type: osd service_id: osd placement: host_pattern: '*' label: osd spec: data_devices: all: true", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts deploy_osd_service.yml", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_daemon: state: STATE_OF_SERVICE daemon_id: DAEMON_ID daemon_type: TYPE_OF_SERVICE", "[ceph-admin@admin cephadm-ansible]USD sudo vi restart_services.yml --- - name: start and stop services hosts: host01 become: true gather_facts: false tasks: - name: start osd.0 ceph_orch_daemon: state: started daemon_id: 0 daemon_type: osd - name: stop mon.host02 ceph_orch_daemon: state: stopped daemon_id: host02 daemon_type: mon", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts restart_services.yml" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/administration_guide/index
4.6.2. REAL SERVER Subsection
4.6.2. REAL SERVER Subsection Clicking on the REAL SERVER subsection link at the top of the panel displays the EDIT REAL SERVER subsection. It displays the status of the physical server hosts for a particular virtual service. Figure 4.7. The REAL SERVER Subsection Click the ADD button to add a new server. To delete an existing server, select the radio button beside it and click the DELETE button. Click the EDIT button to load the EDIT REAL SERVER panel, as seen in Figure 4.8, "The REAL SERVER Configuration Panel". Figure 4.8. The REAL SERVER Configuration Panel This panel consists of three entry fields: Name A descriptive name for the real server. Note This name is not the hostname for the machine, so make it descriptive and easily identifiable. Address The real server's IP address. Since the listening port is already specified for the associated virtual server, do not add a port number. Weight An integer value indicating this host's capacity relative to that of other hosts in the pool. The value can be arbitrary, but treat it as a ratio in relation to other real servers in the pool. For more information on server weight, see Section 1.3.2, "Server Weight and Scheduling". Warning Remember to click the ACCEPT button after making any changes in this panel, to make sure you do not lose any changes when selecting a new panel.
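To make the Weight field concrete, the following is a minimal sketch of the ipvsadm rules that correspond to these panel settings, assuming a direct-routing virtual service with the weighted least-connections scheduler; the VIP, real server addresses, and port are placeholders, and on a Piranha-managed cluster these rules are generated for you rather than entered by hand.
# Illustration only: Piranha maintains the equivalent kernel IPVS rules itself.
# Add a direct-routing virtual service on a placeholder VIP, using the
# weighted least-connections scheduler.
ipvsadm -A -t 192.168.10.10:80 -s wlc
# Add two real servers; with these weights the second host receives roughly
# twice as many new connections as the first.
ipvsadm -a -t 192.168.10.10:80 -r 192.168.10.21:80 -g -w 1
ipvsadm -a -t 192.168.10.10:80 -r 192.168.10.22:80 -g -w 2
# List the resulting table to confirm the weights.
ipvsadm -L -n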
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-piranha-virtservs-rs-vsa
4.61. filesystem
4.61. filesystem 4.61.1. RHBA-2011:0966 - filesystem bug fix update An updated filesystem package that fixes one bug is now available for Red Hat Enterprise Linux 6. The filesystem package is one of the basic packages that is installed on a Red Hat Enterprise Linux system. The filesystem package contains the basic directory layout for the Linux operating system, including the correct permissions for directories. Bug Fix BZ# 620063 Prior to this update, certain locale subdirectories in the /usr/share/locale/ directory did not have any owner set. With this update, this bug has been fixed so that the filesystem package now owns the subdirectories of the following locales: bg_BG (Bulgarian), en_NZ (New Zealand English), fi_FI (Finnish), gl_ES (Galician), lv_LV (Latvian), ms_MY (Malaysian), sr_RS (Serbian), en@shaw (Shavian), zh_CN.GB2312 (Chinese Simplified), sr@ijekavian (Serbian Jekavian), and sr@ijekavianlatin (Serbian Jekavian Latin). All users of filesystem are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/filesystem
Chapter 3. Using Keycloak
Chapter 3. Using Keycloak The Keycloak backend plugin, which integrates Keycloak into Developer Hub, has the following capabilities: Synchronization of Keycloak users in a realm. Synchronization of Keycloak groups and their users in a realm. 3.1. Importing users and groups in Developer Hub using the Keycloak plugin After you configure the plugin successfully, it imports the users and groups each time it starts. Note If you set up a schedule, users and groups are also imported according to that schedule. After the first import is complete, you can select User on the catalog page to list the imported users. When you select a user, you can see the information imported from Keycloak. You can also select a group, view the list, and see the information imported from Keycloak for that group.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/using_dynamic_plugins/rhdh-keycloak_title-plugins-rhdh-using
20.40. Guest Virtual Machine CPU Model Configuration
20.40. Guest Virtual Machine CPU Model Configuration 20.40.1. Introduction Every hypervisor has its own policy for what a guest virtual machine will see for its CPUs by default. Whereas some hypervisors decide which host physical machine CPU features will be available for the guest virtual machine, QEMU/KVM presents the guest virtual machine with a generic model named qemu32 or qemu64. These hypervisors perform more advanced filtering, classifying all physical CPUs into a handful of groups and have one baseline CPU model for each group that is presented to the guest virtual machine. Such behavior enables the safe migration of guest virtual machines between host physical machines, provided they all have physical CPUs that classify into the same group. libvirt does not typically enforce policy itself; rather, it provides the mechanism on which the higher layers define their own required policy. Understanding how to obtain CPU model information and define a suitable guest virtual machine CPU model is critical to ensure guest virtual machine migration is successful between host physical machines. Note that a hypervisor can only emulate features that it is aware of, and features that were created after the hypervisor was released may not be emulated. 20.40.2. Learning about the Host Physical Machine CPU Model The virsh capabilities command displays an XML document describing the capabilities of the hypervisor connection and host physical machine. The XML schema displayed has been extended to provide information about the host physical machine CPU model. One of the challenges in describing a CPU model is that every architecture has a different approach to exposing its capabilities. QEMU/KVM and libvirt use a scheme which combines a CPU model name string with a set of named flags. It is not practical to have a database listing all known CPU models, so libvirt has a small list of baseline CPU model names. It chooses the one that shares the greatest number of CPUID bits with the actual host physical machine CPU and then lists the remaining bits as named features. Notice that libvirt does not display which features the baseline CPU contains. This might seem like a flaw at first, but as will be explained in this section, it is not actually necessary to know this information. 20.40.3. Determining Support for VFIO IOMMU Devices Use the virsh domcapabilities command to determine support for VFIO. See the following example output: # virsh domcapabilities [...output truncated...] <enum name='pciBackend'> <value>default</value> <value>vfio</value> [...output truncated...] Figure 20.3. Determining support for VFIO 20.40.4. Determining a Compatible CPU Model to Suit a Pool of Host Physical Machines Now that it is possible to find out what CPU capabilities a single host physical machine has, the next step is to determine what CPU capabilities are best to expose to the guest virtual machine. If it is known that the guest virtual machine will never need to be migrated to another host physical machine, the host physical machine CPU model can be passed straight through unmodified. A virtualized data center may have a set of configurations that can guarantee all servers will have 100% identical CPUs. Again, the host physical machine CPU model can be passed straight through unmodified. The more common case, though, is where there is variation in CPUs between host physical machines. In this mixed CPU environment, the lowest common denominator CPU must be determined.
This is not entirely straightforward, so libvirt provides an API for exactly this task. If libvirt is provided a list of XML documents, each describing a CPU model for a host physical machine, libvirt will internally convert these to CPUID masks, calculate their intersection, and convert the CPUID mask result back into an XML CPU description. Here is an example of what libvirt reports as the capabilities on a basic workstation, when the virsh capabilities is executed: <capabilities> <host> <cpu> <arch>i686</arch> <model>pentium3</model> <topology sockets='1' cores='2' threads='1'/> <feature name='lahf_lm'/> <feature name='lm'/> <feature name='xtpr'/> <feature name='cx16'/> <feature name='ssse3'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='pni'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='sse2'/> <feature name='acpi'/> <feature name='ds'/> <feature name='clflush'/> <feature name='apic'/> </cpu> </host> </capabilities> Figure 20.4. Pulling host physical machine's CPU model information Now compare that to a different server, with the same virsh capabilities command: <capabilities> <host> <cpu> <arch>x86_64</arch> <model>phenom</model> <topology sockets='2' cores='4' threads='1'/> <feature name='osvw'/> <feature name='3dnowprefetch'/> <feature name='misalignsse'/> <feature name='sse4a'/> <feature name='abm'/> <feature name='cr8legacy'/> <feature name='extapic'/> <feature name='cmp_legacy'/> <feature name='lahf_lm'/> <feature name='rdtscp'/> <feature name='pdpe1gb'/> <feature name='popcnt'/> <feature name='cx16'/> <feature name='ht'/> <feature name='vme'/> </cpu> ...snip... Figure 20.5. Generate CPU description from a random server To see if this CPU description is compatible with the workstation CPU description, use the virsh cpu-compare command. The reduced content was stored in a file named virsh-caps-workstation-cpu-only.xml and the virsh cpu-compare command can be executed on this file: As seen in this output, libvirt is correctly reporting that the CPUs are not strictly compatible. This is because there are several features in the server CPU that are missing in the client CPU. To be able to migrate between the client and the server, it will be necessary to open the XML file and comment out some features. To determine which features need to be removed, run the virsh cpu-baseline command, on the both-cpus.xml which contains the CPU information for both machines. Running # virsh cpu-baseline both-cpus.xml results in: <cpu match='exact'> <model>pentium3</model> <feature policy='require' name='lahf_lm'/> <feature policy='require' name='lm'/> <feature policy='require' name='cx16'/> <feature policy='require' name='monitor'/> <feature policy='require' name='pni'/> <feature policy='require' name='ht'/> <feature policy='require' name='sse2'/> <feature policy='require' name='clflush'/> <feature policy='require' name='apic'/> </cpu> Figure 20.6. Composite CPU baseline This composite file shows which elements are in common. Everything that is not in common should be commented out.
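Putting these commands together, the following is a minimal sketch of the workflow for a pool of hosts; the file names are placeholders, and the final step only indicates where the computed baseline belongs (the <cpu> element of each guest's domain XML), not an exact edit.
# Illustration only; file names are placeholders.
# On each host in the pool, dump the capabilities XML and keep the
# host <cpu> ... </cpu> element from it.
virsh capabilities > host1-caps.xml
# Collect the <cpu> elements from every host into one file (here,
# both-cpus.xml), then compute the largest common CPU model.
virsh cpu-baseline both-cpus.xml > baseline-cpu.xml
# Verify that a given host can run guests that use the computed baseline.
virsh cpu-compare baseline-cpu.xml
# Copy the resulting <cpu> element into each guest's domain XML,
# for example by running: virsh edit guest-name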
[ "virsh domcapabilities [...output truncated...] <enum name='pciBackend'> <value>default</value> <value>vfio</value> [...output truncated...]", "<capabilities> <host> <cpu> <arch>i686</arch> <model>pentium3</model> <topology sockets='1' cores='2' threads='1'/> <feature name='lahf_lm'/> <feature name='lm'/> <feature name='xtpr'/> <feature name='cx16'/> <feature name='ssse3'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='pni'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='sse2'/> <feature name='acpi'/> <feature name='ds'/> <feature name='clflush'/> <feature name='apic'/> </cpu> </host> </capabilities>", "<capabilities> <host> <cpu> <arch>x86_64</arch> <model>phenom</model> <topology sockets='2' cores='4' threads='1'/> <feature name='osvw'/> <feature name='3dnowprefetch'/> <feature name='misalignsse'/> <feature name='sse4a'/> <feature name='abm'/> <feature name='cr8legacy'/> <feature name='extapic'/> <feature name='cmp_legacy'/> <feature name='lahf_lm'/> <feature name='rdtscp'/> <feature name='pdpe1gb'/> <feature name='popcnt'/> <feature name='cx16'/> <feature name='ht'/> <feature name='vme'/> </cpu> ...snip", "virsh cpu-compare virsh-caps-workstation-cpu-only.xml Host physical machine CPU is a superset of CPU described in virsh-caps-workstation-cpu-only.xml", "<cpu match='exact'> <model>pentium3</model> <feature policy='require' name='lahf_lm'/> <feature policy='require' name='lm'/> <feature policy='require' name='cx16'/> <feature policy='require' name='monitor'/> <feature policy='require' name='pni'/> <feature policy='require' name='ht'/> <feature policy='require' name='sse2'/> <feature policy='require' name='clflush'/> <feature policy='require' name='apic'/> </cpu>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-guest_virtual_machine_cpu_model_configuration
Chapter 3. A practical example of an Ansible Playbook
Chapter 3. A practical example of an Ansible Playbook Ansible can communicate with many different device classifications, from cloud-based REST APIs, to Linux and Windows systems, networking hardware, and much more. The following is a sample playbook that automatically updates two types of servers. 3.1. Playbook execution A playbook runs in order from top to bottom. Within each play, tasks also run in order from top to bottom. Playbooks with multiple 'plays' can orchestrate multi-machine deployments, running one play on your webservers, then another play on your database servers, then a third play on your network infrastructure, and so on. At a minimum, each play defines two things: the managed nodes to target, using a pattern; and at least one task to execute. Note In Ansible 2.10 and later, use the fully-qualified collection name in your playbooks to ensure the correct module is selected, because multiple collections can contain modules with the same name (for example, user ). For further information, see Using collections in a playbook . In this example, the first play targets the web servers; the second play targets the database servers. --- - name: Update web servers hosts: webservers become: true tasks: - name: Ensure apache is at the latest version ansible.builtin.yum: name: httpd state: latest - name: Write the apache config file ansible.builtin.template: src: /srv/httpd.j2 dest: /etc/httpd.conf mode: "0644" - name: Update db servers hosts: databases become: true tasks: - name: Ensure postgresql is at the latest version ansible.builtin.yum: name: postgresql state: latest - name: Ensure that postgresql is started ansible.builtin.service: name: postgresql state: started The playbook contains two plays: The first checks if the web server software is up to date and runs the update if necessary. The second checks if the database server software is up to date and runs the update if necessary. Your playbook can include more than just a hosts line and tasks. For example, this playbook sets become: true for each play, so the tasks run with escalated privileges; you can also set a remote_user, the user account for the SSH connection. You can add other Playbook Keywords at the playbook, play, or task level to influence how Ansible behaves. Playbook keywords can control the connection plugin, whether to use privilege escalation, how to handle errors, and more. To support a variety of environments, Ansible enables you to set many of these parameters as command-line flags, in your Ansible configuration, or in your inventory. Learning the precedence rules for these sources of data can help you as you expand your Ansible ecosystem.
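To run a playbook like this one, point ansible-playbook at an inventory and the playbook file. The sketch below uses placeholder file names (site.yml, inventory), and the syntax-check and dry-run steps are a common pattern rather than a required sequence.
# Verify the playbook parses before running it.
ansible-playbook --syntax-check site.yml
# Preview the changes against your inventory without applying them.
ansible-playbook -i inventory site.yml --check
# Apply the changes, limiting this run to the webservers group.
ansible-playbook -i inventory site.yml --limit webservers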
[ "--- - name: Update web servers hosts: webservers become: true tasks: - name: Ensure apache is at the latest version ansible.builtin.yum: name: httpd state: latest - name: Write the apache config file ansible.builtin.template: src: /srv/httpd.j2 dest: /etc/httpd.conf mode: \"0644\" - name: Update db servers hosts: databases become: true tasks: - name: Ensure postgresql is at the latest version ansible.builtin.yum: name: postgresql state: latest - name: Ensure that postgresql is started ansible.builtin.service: name: postgresql state: started" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_ansible_playbooks/assembly-playbook-practical-example