title | content | commands | url
---|---|---|---|
Chapter 5. Enabling automatic screen lock | Chapter 5. Enabling automatic screen lock Enabling automatic screen lock is a security measure that helps protect your computer when it is left unattended. This feature ensures that your screen is locked after a specified period of inactivity, requiring a password or authentication to regain access. Procedure Open Settings . Click Privacy . Choose Screen Lock . Toggle the switch to enable automatic screen lock. Set the desired time interval for automatic screen lock delay. This interval defines how long your screen stays active before it is automatically locked. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/enabling-automatic-screen-lock_customizing-the-gnome-desktop-environment |
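A minimal command-line sketch of the same configuration using gsettings. This alternative is not part of the original GUI procedure; the 300-second idle delay is an illustrative value, and the keys shown are the standard GNOME desktop schema keys.

```
# Enable automatic screen lock and lock as soon as the screen blanks
gsettings set org.gnome.desktop.screensaver lock-enabled true
gsettings set org.gnome.desktop.screensaver lock-delay 0

# Blank the screen after 300 seconds (5 minutes) of inactivity
gsettings set org.gnome.desktop.session idle-delay 300
```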
Chapter 3. Accessing hosts | Chapter 3. Accessing hosts Learn how to create a bastion host to access OpenShift Container Platform instances and access the control plane nodes with secure shell (SSH) access. 3.1. Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster The OpenShift Container Platform installer does not create any public IP addresses for any of the Amazon Elastic Compute Cloud (Amazon EC2) instances that it provisions for your OpenShift Container Platform cluster. To be able to SSH to your OpenShift Container Platform hosts, you must follow this procedure. Procedure Create a security group that allows SSH access into the virtual private cloud (VPC) created by the openshift-install command. Create an Amazon EC2 instance on one of the public subnets the installer created. Associate a public IP address with the Amazon EC2 instance that you created. Unlike with the OpenShift Container Platform installation, you should associate the Amazon EC2 instance you created with an SSH key pair. It does not matter what operating system you choose for this instance, as it will simply serve as an SSH bastion to bridge the internet into your OpenShift Container Platform cluster's VPC. The Amazon Machine Image (AMI) you use does matter. With Red Hat Enterprise Linux CoreOS (RHCOS), for example, you can provide keys via Ignition, like the installer does. After you have provisioned your Amazon EC2 instance and can SSH into it, you must add the SSH key that you associated with your OpenShift Container Platform installation. This key can be different from the key for the bastion instance, but does not have to be. Note Direct SSH access is only recommended for disaster recovery. When the Kubernetes API is responsive, run privileged pods instead. Run oc get nodes , inspect the output, and choose one of the nodes that is a master. The hostname looks similar to ip-10-0-1-163.ec2.internal . From the bastion SSH host you manually deployed into Amazon EC2, SSH into that control plane host. Ensure that you use the same SSH key you specified during the installation: USD ssh -i <ssh-key-path> core@<master-hostname> | [
"ssh -i <ssh-key-path> core@<master-hostname>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/accessing-hosts |
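A hedged sketch of reaching a control plane host through the bastion in a single step by using the OpenSSH ProxyJump option, or of forwarding your key with an SSH agent instead of copying it to the bastion. Neither variant is part of the original procedure; the bastion user and host placeholders are illustrative assumptions (the login user depends on the AMI you chose).

```
# Jump through the bastion to the control plane node in one command.
# <bastion_user> is typically ec2-user or core, depending on the AMI.
ssh -i <ssh-key-path> -J <bastion_user>@<bastion_public_ip> core@<master-hostname>

# Or forward the installation key via an SSH agent rather than copying it:
eval "$(ssh-agent -s)"
ssh-add <ssh-key-path>
ssh -A <bastion_user>@<bastion_public_ip>
```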
Chapter 22. Valgrind | Chapter 22. Valgrind Valgrind is an instrumentation framework for building dynamic analysis tools that can be used to profile applications in detail. The default installation already provides five standard tools. Valgrind tools are generally used to investigate memory management and threading problems. Valgrind provides instrumentation for user-space binaries to check for errors, such as the use of uninitialized memory, improper allocation/freeing of memory, and improper arguments for system calls. Its profiling tools can be used on most binaries; however, compared to other profilers, Valgrind profile runs are significantly slower. To profile a binary, Valgrind runs it inside a special virtual machine, which allows Valgrind to intercept all of the binary instructions. Valgrind 's tools are most useful for looking for memory-related issues in user-space programs; they are not suitable for debugging time-specific issues or kernel-space instrumentation and debugging. Valgrind reports are most useful and accurate when debuginfo packages are installed for the programs or libraries under investigation. See Section 20.1, "Enabling Debugging with Debugging Information" . 22.1. Valgrind Tools The Valgrind suite is composed of the following tools: memcheck This tool detects memory management problems in programs: By checking all reads from and writes to memory By intercepting memory manipulations like calls to malloc , free , new or delete memcheck is perhaps the most used Valgrind tool, as memory management problems can be difficult to detect using other means. Such problems often remain undetected for long periods, eventually causing crashes that are difficult to diagnose. memcheck functions as the default tool when no specific tool is selected. cachegrind cachegrind is a cache profiler that accurately pinpoints sources of cache misses in code by performing a detailed simulation of the I1, D1 and L2 caches in the CPU. It shows the number of cache misses, memory references, and instructions accruing to each line of source code; cachegrind also provides per-function, per-module, and whole-program summaries, and can even show counts for each individual machine instruction. callgrind Like cachegrind , callgrind can model cache behavior. However, the main purpose of callgrind is to record call-graph data for the executed code. massif massif is a heap profiler; it measures how much heap memory a program uses, providing information on heap blocks, heap administration overheads, and stack sizes. Heap profilers are useful in finding ways to reduce heap memory usage. On systems that use virtual memory, programs with optimized heap memory usage are less likely to run out of memory, and may be faster as they require less paging. helgrind In programs that use the POSIX pthreads threading primitives, helgrind detects synchronization errors. Such errors are: Misuses of the POSIX pthreads API Potential deadlocks arising from lock ordering problems Data races (that is, accessing memory without adequate locking) 22.2. Using Valgrind The valgrind package and its dependencies install all the necessary tools for performing a Valgrind profile run. To profile a program with Valgrind , use: See Section 22.1, "Valgrind Tools" for a list of arguments for toolname . In addition to the suite of Valgrind tools, none is also a valid argument for toolname ; this argument allows you to run a program under Valgrind without performing any profiling. This is useful for debugging or benchmarking Valgrind itself.
You can also instruct Valgrind to send all of its information to a specific file. To do so, use the option --log-file= filename . For example, to check the memory usage of the executable file hello and send profile information to output , use: See Section 22.3, "Additional information" for more information on Valgrind , along with other available documentation on the Valgrind suite of tools. 22.3. Additional information For more extensive information on Valgrind , see man valgrind . Red Hat Enterprise Linux also provides a comprehensive Valgrind Documentation book available as PDF and HTML in: /usr/share/doc/valgrind- version /valgrind_manual.pdf /usr/share/doc/valgrind- version /html/index.html | [
"valgrind --tool= toolname program",
"valgrind --tool=memcheck --log-file=output hello"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/valgrind |
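A hedged usage sketch that extends the commands listed above. The hello binary and the output file name follow the chapter's own example; --leak-check=full and ms_print are standard parts of the Valgrind suite, but the exact invocations are illustrative rather than taken from the original chapter.

```
# Run memcheck with full leak details and write the report to the file "output"
valgrind --tool=memcheck --leak-check=full --log-file=output ./hello

# Profile heap usage with massif, then render the recorded snapshots
valgrind --tool=massif ./hello
ms_print massif.out.*
```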
Chapter 2. Installing Red Hat Developer Hub on OpenShift Dedicated on GCP using the Helm Chart | Chapter 2. Installing Red Hat Developer Hub on OpenShift Dedicated on GCP using the Helm Chart You can install Developer Hub on OpenShift Dedicated on GCP using the Red Hat Developer Hub Helm Chart. Prerequisites You have a valid GCP account. Your OpenShift Dedicated cluster is running on GCP. For more information, see Creating a cluster on GCP in Red Hat OpenShift Dedicated documentation. You have installed Helm 3 or later. Procedure From the Developer perspective on the OpenShift Container Platform web console, click +Add . From the Developer Catalog panel, click Helm Chart . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub card. From the Red Hat Developer Hub page, click Create . From your cluster, copy the OpenShift Container Platform router host (for example: apps.<clusterName>.com ). Select the radio button to configure the Developer Hub instance with either the form view or YAML view. The Form view is selected by default. Using Form view To configure the instance with the Form view, go to Root Schema global Enable service authentication within Backstage instance and paste your OpenShift Container Platform router host into the field on the form. Using YAML view To configure the instance with the YAML view, paste your OpenShift Container Platform router hostname in the global.clusterRouterBase parameter value as shown in the following example: global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations Edit the other values if needed, then click Create and wait for the database and Developer Hub to start. Verification To access the Developer Hub, click the Open URL icon. Additional resources Configuring Customizing | [
"global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_openshift_dedicated_on_google_cloud_platform/proc-install-rhdh-osd-gcp-helm_title-install-rhdh-osd-gcp |
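A hedged sketch for retrieving the cluster router host referenced in the procedure from the command line instead of copying it manually. This command is not part of the original steps; it assumes you are logged in to the cluster with oc.

```
# Print the cluster's default ingress domain (for example, apps.<clusterName>.com)
# and use it as the global.clusterRouterBase value in the Helm Chart form or YAML.
oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}'
```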
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 0.10-06 Tue 03 Mar 2020 Marc Muehlfeld Added the Configuring Policy-based Routing to Define Alternative Routes section. Revision 0.10-05 Fri 22 Nov 2019 Marc Muehlfeld Rewrote the Configuring the Squid Caching Proxy Server chapter. Revision 0.10-04 Tue 06 Aug 2019 Marc Muehlfeld Version for 7.7 GA publication. Revision 0.10-03 Thu 22 Mar 2018 Ioanna Gkioka Version for 7.5 GA publication. Revision 0.10-02 Mon 14 Aug 2017 Ioanna Gkioka Async release with misc. updates Revision 0.10-01 Tue 25 Jul 2017 Mirek Jahoda Version for 7.4 GA publication. Revision 0.9-30 Tue 18 Oct 2016 Mirek Jahoda Version for 7.3 GA publication. Revision 0.9-25 Wed 11 Nov 2015 Jana Heves Version for 7.2 GA release. Revision 0.9-15 Tue 17 Feb 2015 Christian Huffman Version for 7.1 GA release Revision 0.9-14 Fri Dec 05 2014 Christian Huffman Updated the nmtui and NetworkManager GUI sections. Revision 0.9-12 Wed Nov 05 2014 Stephen Wadeley Improved IP Networking , 802.1Q VLAN tagging , and Teaming . Revision 0.9-11 Tues Oct 21 2014 Stephen Wadeley Improved Bonding , Bridging , and Teaming . Revision 0.9-9 Tue Sep 2 2014 Stephen Wadeley Improved Bonding and Consistent Network Device Naming . Revision 0.9-8 Tue July 8 2014 Stephen Wadeley Red Hat Enterprise Linux 7.0 GA release of the Networking Guide. Revision 0-0 Wed Dec 12 2012 Stephen Wadeley Initialization of the Red Hat Enterprise Linux 7 Networking Guide. B.1. Acknowledgments Certain portions of this text first appeared in the Red Hat Enterprise Linux 6 Deployment Guide , | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/app-revision_history |
Monitoring and managing system status and performance | Monitoring and managing system status and performance Red Hat Enterprise Linux 8 Optimizing system throughput, latency, and power consumption Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/index |
Installing IBM Cloud Bare Metal (Classic) | Installing IBM Cloud Bare Metal (Classic) OpenShift Container Platform 4.15 Installing OpenShift Container Platform on IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team | [
"<cluster_name>.<domain>",
"test-cluster.example.com",
"ipmi://<IP>:<port>?privilegelevel=OPERATOR",
"ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>",
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt kni",
"sudo systemctl start firewalld",
"sudo systemctl enable firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"PRVN_HOST_ID=<ID>",
"ibmcloud sl hardware list",
"PUBLICSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRIVSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)",
"PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)",
"PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR",
"PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)",
"PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)",
"PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)",
"PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR",
"PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)",
"sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2",
"vim pull-secret.txt",
"sudo dnf install dnsmasq",
"sudo vi /etc/dnsmasq.conf",
"interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r",
"ibmcloud sl hardware list",
"ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null",
"\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"",
"sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile",
"00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1",
"sudo systemctl start dnsmasq",
"sudo systemctl enable dnsmasq",
"sudo systemctl status dnsmasq",
"● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k",
"sudo firewall-cmd --add-port 53/udp --permanent",
"sudo firewall-cmd --add-port 67/udp --permanent",
"sudo firewall-cmd --change-zone=provisioning --zone=external --permanent",
"sudo firewall-cmd --reload",
"export VERSION=stable-4.15",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_ibm_cloud_bare_metal_classic/index |
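A hedged post-installation check that is not spelled out in this chapter. It assumes the installer wrote its auth assets to the default auth/kubeconfig location under the cluster configuration directory used above.

```
# After the installer finishes, point oc at the generated kubeconfig and
# confirm that the control plane and worker nodes are Ready and the
# cluster Operators have rolled out.
export KUBECONFIG=~/clusterconfigs/auth/kubeconfig
oc get nodes
oc get clusteroperators
```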
Chapter 9. 3scale API Management backup and restore | Chapter 9. 3scale API Management backup and restore Note Red Hat 3scale API Management backup and restore is deprecated and no longer the focus of development. Refer to OpenShift APIs for Data Protection information and instructions. This section provides you, as the administrator of a 3scale installation, the information needed to: Set up the backup procedures for persistent data. Perform a restore from backup of the persistent data. In case of issues with one or more of the MySQL databases, you will be able to restore 3scale correctly to its operational state. 9.1. Prerequisites A 3scale 2.15 instance. For more information about how to install 3scale, see Installing 3scale API Management on OpenShift . An OpenShift Container Platform 4.x user account with one of the following roles in the OpenShift cluster: cluster-admin admin edit Note A user with an edit cluster role locally bound in the namespace of a 3scale installation can perform backup and restore procedures. The following sections describe how to set up the backup procedures for persistent data and how to perform a restore from backup of the persistent data. In case of a failure with one or more of the MySQL databases, you will then be able to restore 3scale correctly to its operational state. Persistent volumes and considerations Using data sets Backing up system databases Restoring system databases 9.2. Persistent volumes and considerations Persistent volumes In a 3scale API Management deployment on OpenShift : A persistent volume (PV) provided to the cluster by the underlying infrastructure. Storage service external to the cluster. This can be in the same data center or elsewhere. Considerations The backup and restore procedures for persistent data vary depending on the storage type in use. To ensure the backups and restores preserve data consistency, it is not sufficient to back up the underlying PVs for a database. For example, do not capture only partial writes and partial transactions. Use the database's backup mechanisms instead. Some parts of the data are synchronized between different components. One copy is considered the source of truth for the data set. The other is a copy that is not modified locally, but synchronized from the source of truth . In these cases, upon completion, the source of truth should be restored, and copies in other components synchronized from it. 9.3. Using data sets This section explains in more detail the different data sets in the different persistent stores, their purpose, the storage type used, and whether it is the source of truth . The full state of a 3scale deployment is stored across the following Deployment objects and their PVs: Name Description system-mysql MySQL database ( mysql-storage ) system-storage Volume for Files backend-redis Redis database ( backend-redis-storage ) system-redis Redis database ( system-redis-storage ) 9.3.1. Defining system-mysql system-mysql is a relational database which stores information about users, accounts, APIs, plans, and more, in the 3scale Admin Console. A subset of this information related to services is synchronized to the Backend component and stored in backend-redis . system-mysql is the source of truth for this information. 9.3.2. Defining system-storage system-storage stores files to be read and written by the System component. 
They fall into two categories: Configuration files read by the System component at run-time Static files, for example, HTML, CSS, JS , uploaded to system by its CMS feature, for the purpose of creating a Developer Portal Note System can be scaled horizontally with multiple pods uploading and reading said static files, hence the need for a ReadWriteMany (RWX) PersistentVolume . 9.3.3. Defining backend-redis backend-redis contains multiple data sets used by the Backend component: Usages : This is API usage information aggregated by Backend . It is used by Backend for rate-limiting decisions and by System to display analytics information in the UI or via API. Config : This is configuration information about services, rate-limits, and more, that is synchronized from System via an internal API. This is not the source of truth for this information; System and system-mysql are. Queues : These are queues of background jobs to be executed by worker processes. These are ephemeral and are deleted once processed. 9.3.4. Defining system-redis system-redis contains queues for jobs to be processed in the background. These are ephemeral and are deleted once processed. 9.4. Backing up system databases The following commands are in no specific order and can be used as you need them to back up and archive system databases. 9.4.1. Backing up system-mysql Execute MySQL Backup Command: 9.4.2. Backing up system-storage Archive the system-storage files to another storage: 9.4.3. Backing up backend-redis Back up the dump.rdb file from redis: 9.4.4. Backing up system-redis Back up the dump.rdb file from redis: 9.4.5. Backing up zync-database Back up the zync_production database: 9.4.6. Backing up OpenShift secrets and ConfigMaps The following is the list of commands for OpenShift secrets and ConfigMaps: 9.4.6.1. OpenShift secrets 9.4.6.2. ConfigMaps 9.5. Restoring system databases Important Prevent record creation by scaling down pods like system-app or disabling routes. In the commands and snippets that follow, replace USD{DEPLOYMENT_NAME} with the name you defined when you created your 3scale deployment. Note Ensure the output includes at least a pair of braces {} and is not empty. Procedure Store current number of replicas to scale up later: SYSTEM_SPEC=`oc get APIManager/USD{DEPLOYMENT_NAME} -o jsonpath='{.spec.system.appSpec}'` Verify the result of the command and check the content of USDSYSTEM_SPEC : echo USDSYSTEM_SPEC Patch the APIManager CR using the following command that scales the number of replicas to 0 : USD oc patch APIManager/USD{DEPLOYMENT_NAME} --type merge -p '{"spec": {"system": {"appSpec": {"replicas": 0}}}}' Alternatively, to scale down system-app , edit the existing APIManager/USD{DEPLOYMENT_NAME} and set the number of system replicas to zero as shown in the following example: apiVersion: apps.3scale.net/v1alpha1 kind: APIManager metadata: name: <DEPLOYMENT_NAME> spec: system: appSpec: replicas: 0 Use the following procedures to restore OpenShift secrets and system databases: Restoring an operator-based deployment Restoring system-mysql Restoring system-storage Restoring zync-database Ensuring information consistency between backend and system 9.5.1. Restoring an operator-based deployment Use the following steps to restore operator-based deployments. Procedure Install the 3scale API Management operator on OpenShift . 
Restore secrets before creating an APIManager resource: USD oc apply -f system-smtp.json USD oc apply -f system-seed.json USD oc apply -f system-database.json USD oc apply -f backend-internal-api.json USD oc apply -f system-events-hook.json USD oc apply -f system-app.json USD oc apply -f system-recaptcha.json USD oc apply -f system-redis.json USD oc apply -f zync.json USD oc apply -f system-master-apicast.json Restore ConfigMaps before creating an APIManager resource: USD oc apply -f system-environment.json USD oc apply -f apicast-environment.json Deploy 3scale API Management with the operator using the APIManager CR. 9.5.2. Restoring system-mysql Procedure Copy the MySQL dump to the system-mysql pod: USD oc cp ./system-mysql-backup.gz USD(oc get pods -l 'deployment=system-mysql' -o json | jq '.items[0].metadata.name' -r):/var/lib/mysql Decompress the backup file: USD oc rsh USD(oc get pods -l 'deployment=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d USD{HOME}/system-mysql-backup.gz' Restore the MySQL DB Backup file: USD oc rsh USD(oc get pods -l 'deployment=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'export MYSQL_PWD=USD{MYSQL_ROOT_PASSWORD}; mysql -hsystem-mysql -uroot system < USD{HOME}/system-mysql-backup' 9.5.3. Restoring system-storage Restore the Backup file to system-storage: USD oc rsync ./local/dir/system/ USD(oc get pods -l 'deployment=system-app' -o json | jq '.items[0].metadata.name' -r):/opt/system/public/system 9.5.4. Restoring zync-database Instructions to restore zync-database for a 3scale operator deployment. 9.5.4.1. Operator-based deployments Note Follow the instructions under Deploying 3scale API Management using the operator , in particular Deploying the APIManager CR to redeploy your 3scale instance. Procedure Store the number of replicas, by replacing USD{DEPLOYMENT_NAME} with the name you defined when you created your 3scale deployment: Scale down the zync Deployment to 0 pods: USD oc patch APIManager/USD{DEPLOYMENT_NAME} --type merge -p '{"spec": {"zync": {"appSpec": {"replicas": 0}, "queSpec": {"replicas": 0}}}}' Copy the zync database dump to the zync-database pod: USD oc cp ./zync-database-backup.gz USD(oc get pods -l 'deployment=zync-database' -o json | jq '.items[0].metadata.name' -r):/var/lib/pgsql/ Decompress the backup file: USD oc rsh USD(oc get pods -l 'deployment=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d USD{HOME}/zync-database-backup.gz' Restore zync database backup file: USD oc rsh USD(oc get pods -l 'deployment=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'psql zync_production -f USD{HOME}/zync-database-backup' Restore to the original count of replicas: USD oc patch APIManager/USD{DEPLOYMENT_NAME} --type json -p '[{"op": "replace", "path": "/spec/zync", "value":'"USDZYNC_SPEC"'}]' If the output of following command does not contain the replicas key: Then, run the following additional command to scale up zync : 9.5.4.2. Restoring 3scale API Management options with backend-redis and system-redis By restoring 3scale, you will restore backend-redis and system-redis . These components have the following functions: * backend-redis : The database that supports application authentication and rate limiting in 3scale. It is also used for statistics storage and temporary job storage. * system-redis : Provides temporary storage for background jobs for 3scale and is also used as a message bus for Ruby processes of system-app pods. 
The backend-redis component The backend-redis component has two databases, data and queues . In a default 3scale deployment, data and queues are deployed in the same Redis database, but in different logical database indexes, /0 and /1 . Restoring the data database runs without any issues; however, restoring the queues database can lead to duplicated jobs. Regarding duplication of jobs, in 3scale the backend workers process background jobs in a matter of milliseconds. If backend-redis fails 30 seconds after the last database snapshot and you try to restore it, the background jobs that happened during those 30 seconds are performed twice because backend does not have a system in place to avoid duplication. In this scenario, you must restore the backup as the /0 database index contains data that is not saved anywhere else. Restoring the /0 database index means that you must also restore the /1 database index, since one cannot be stored without the other. When you choose to separate databases on different servers and not one database in different indexes, the size of the queue will be approximately zero, so it is preferable not to restore backups and lose a few background jobs. This is the case in a 3scale Hosted setup; you will therefore need to apply different backup and restore strategies for both. The `system-redis` component The majority of the 3scale system background jobs are idempotent, that is, identical requests return an identical result no matter how many times you run them. The following is a list of examples of events handled by background jobs in system: Notification jobs such as plan trials about to expire, credit cards about to expire, activation reminders, plan changes, invoice state changes, PDF reports. Billing such as invoicing and charging. Deletion of complex objects. Backend synchronization jobs. Indexation jobs, for example with searchd. Sanitisation jobs, for example invoice IDs. Janitorial tasks such as purging audits, user sessions, expired tokens, log entries, suspending inactive accounts. Traffic updates. Proxy configuration change monitoring and proxy deployments. Background signup jobs, Zync jobs such as single sign-on (SSO) synchronization, routes creation. If you are restoring the above list of background jobs, 3scale's system maintains the state of each restored job. It is important to check the integrity of the system after the restoration is complete. 9.5.5. Ensuring information consistency between backend and system After restoring backend-redis , a sync of the Config information from system should be forced to ensure the information in backend is consistent with that in system , which is the source of truth . 9.5.5.1. Managing the deployment configuration for backend-redis These steps are intended for running instances of backend-redis . 
Procedure Edit the redis-config configmap: USD oc edit configmap redis-config Comment SAVE commands in the redis-config configmap: Set appendonly to no in the redis-config configmap: Redeploy backend-redis to load the new configurations: USD oc rollout restart deployment/backend-redis Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/backend-redis Rename the dump.rdb file: USD oc rsh USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv USD{HOME}/data/dump.rdb USD{HOME}/data/dump.rdb-old' Rename the appendonly.aof file: USD oc rsh USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv USD{HOME}/data/appendonly.aof USD{HOME}/data/appendonly.aof-old' Move the backup file to the POD: USD oc cp ./backend-redis-dump.rdb USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb Redeploy backend-redis to load the backup: USD oc rollout restart deployment/backend-redis Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/backend-redis Create the appendonly file: USD oc rsh USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF' After a while, ensure that the AOF rewrite is complete: USD oc rsh USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress While aof_rewrite_in_progress = 1 , the execution is in progress. Check periodically until aof_rewrite_in_progress = 0 . Zero indicates that the execution is complete. Edit the redis-config configmap: USD oc edit configmap redis-config Uncomment SAVE commands in the redis-config configmap: Set appendonly to yes in the redis-config configmap: Redeploy backend-redis to reload the default configurations: USD oc rollout restart deployment/backend-redis Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/backend-redis 9.5.5.2. Managing the deployment configuration for system-redis These steps are intended for running instances of system-redis . 
Procedure Edit the redis-config configmap: USD oc edit configmap redis-config Comment SAVE commands in the redis-config configmap: Set appendonly to no in the redis-config configmap: Redeploy system-redis to load the new configurations: USD oc rollout restart deployment/system-redis Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/system-redis Rename the dump.rdb file: USD oc rsh USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv USD{HOME}/data/dump.rdb USD{HOME}/data/dump.rdb-old' Rename the appendonly.aof file: USD oc rsh USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv USD{HOME}/data/appendonly.aof USD{HOME}/data/appendonly.aof-old' Move the Backup file to the POD: USD oc cp ./system-redis-dump.rdb USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb Redeploy system-redis to load the backup: USD oc rollout restart deployment/system-redis Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/system-redis Create the appendonly file: USD oc rsh USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF' After a while, ensure that the AOF rewrite is complete: USD oc rsh USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress While aof_rewrite_in_progress = 1 , the execution is in progress. Check periodically until aof_rewrite_in_progress = 0 . Zero indicates that the execution is complete. Edit the redis-config configmap: USD oc edit configmap redis-config Uncomment SAVE commands in the redis-config configmap: Set appendonly to yes in the redis-config configmap: Redeploy system-redis to reload the default configurations: USD oc rollout restart deployment/system-redis Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/system-redis 9.5.6. Restoring backend-worker These steps are intended to restore backend-worker . Procedure Restore to the latest version of backend-worker : USD oc rollout restart deployment/backend-worker Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/backend-worker 9.5.7. Restoring system-app These steps are intended to restore system-app . Procedure To scale up system-app , edit the existing APIManager/USD{DEPLOYMENT_NAME} and change .spec.system.appSpec.replicas back to original number of replicas or run the following command to apply previously stored specification: USD oc patch APIManager/USD{DEPLOYMENT_NAME} --type json -p '[{"op": "replace", "path": "/spec/system/appSpec", "value":'"USDSYSTEM_SPEC"'}]' If the output of following command does not contain the replicas key: Then, run the following additional command to scale up system-app : Restore to the latest version of system-app : USD oc rollout restart deployment/system-app Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/system-app 9.5.8. Restoring system-sidekiq These steps are intended to restore system-sidekiq . Procedure Restore to the latest version of system-sidekiq : USD oc rollout restart deployment/system-sidekiq Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/system-sidekiq 9.5.8.1. 
Restoring system-searchd These steps are intended to restore system-searchd . Procedure Restore to the latest version of system-searchd : USD oc rollout restart deployment/system-searchd Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/system-searchd 9.5.8.2. Restoring OpenShift routes managed by zync Force zync to recreate missing OpenShift routes: USD oc rsh USD(oc get pods -l 'deployment=system-sidekiq' -o json | jq '.items[0].metadata.name' -r) bash -c 'bundle exec rake zync:resync:domains' | [
"oc rsh USD(oc get pods -l 'deployment=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'export MYSQL_PWD=USD{MYSQL_ROOT_PASSWORD}; mysqldump --single-transaction -hsystem-mysql -uroot system' | gzip > system-mysql-backup.gz",
"oc rsync USD(oc get pods -l 'deployment=system-app' -o json | jq '.items[0].metadata.name' -r):/opt/system/public/system ./local/dir",
"oc cp USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb",
"oc cp USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb",
"oc rsh USD(oc get pods -l 'deployment=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'pg_dump zync_production' | gzip > zync-database-backup.gz",
"oc get secrets system-smtp -o json > system-smtp.json oc get secrets system-seed -o json > system-seed.json oc get secrets system-database -o json > system-database.json oc get secrets backend-internal-api -o json > backend-internal-api.json oc get secrets system-events-hook -o json > system-events-hook.json oc get secrets system-app -o json > system-app.json oc get secrets system-recaptcha -o json > system-recaptcha.json oc get secrets system-redis -o json > system-redis.json oc get secrets zync -o json > zync.json oc get secrets system-master-apicast -o json > system-master-apicast.json",
"oc get configmaps system-environment -o json > system-environment.json oc get configmaps apicast-environment -o json > apicast-environment.json",
"SYSTEM_SPEC=`oc get APIManager/USD{DEPLOYMENT_NAME} -o jsonpath='{.spec.system.appSpec}'`",
"echo USDSYSTEM_SPEC",
"oc patch APIManager/USD{DEPLOYMENT_NAME} --type merge -p '{\"spec\": {\"system\": {\"appSpec\": {\"replicas\": 0}}}}'",
"apiVersion: apps.3scale.net/v1alpha1 kind: APIManager metadata: name: <DEPLOYMENT_NAME> spec: system: appSpec: replicas: 0",
"oc apply -f system-smtp.json oc apply -f system-seed.json oc apply -f system-database.json oc apply -f backend-internal-api.json oc apply -f system-events-hook.json oc apply -f system-app.json oc apply -f system-recaptcha.json oc apply -f system-redis.json oc apply -f zync.json oc apply -f system-master-apicast.json",
"oc apply -f system-environment.json oc apply -f apicast-environment.json",
"oc cp ./system-mysql-backup.gz USD(oc get pods -l 'deployment=system-mysql' -o json | jq '.items[0].metadata.name' -r):/var/lib/mysql",
"oc rsh USD(oc get pods -l 'deployment=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d USD{HOME}/system-mysql-backup.gz'",
"oc rsh USD(oc get pods -l 'deployment=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'export MYSQL_PWD=USD{MYSQL_ROOT_PASSWORD}; mysql -hsystem-mysql -uroot system < USD{HOME}/system-mysql-backup'",
"oc rsync ./local/dir/system/ USD(oc get pods -l 'deployment=system-app' -o json | jq '.items[0].metadata.name' -r):/opt/system/public/system",
"ZYNC_SPEC=`oc get APIManager/USD{DEPLOYMENT_NAME} -o json | jq -r '.spec.zync'`",
"oc patch APIManager/USD{DEPLOYMENT_NAME} --type merge -p '{\"spec\": {\"zync\": {\"appSpec\": {\"replicas\": 0}, \"queSpec\": {\"replicas\": 0}}}}'",
"oc cp ./zync-database-backup.gz USD(oc get pods -l 'deployment=zync-database' -o json | jq '.items[0].metadata.name' -r):/var/lib/pgsql/",
"oc rsh USD(oc get pods -l 'deployment=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d USD{HOME}/zync-database-backup.gz'",
"oc rsh USD(oc get pods -l 'deployment=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'psql zync_production -f USD{HOME}/zync-database-backup'",
"oc patch APIManager/USD{DEPLOYMENT_NAME} --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/zync\", \"value\":'\"USDZYNC_SPEC\"'}]'",
"echo USDZYNC_SPEC",
"oc patch deployment/zync -p '{\"spec\": {\"replicas\": 1}}'",
"oc edit configmap redis-config",
"#save 900 1 #save 300 10 #save 60 10000",
"appendonly no",
"oc rollout restart deployment/backend-redis",
"oc rollout status deployment/backend-redis",
"oc rsh USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv USD{HOME}/data/dump.rdb USD{HOME}/data/dump.rdb-old'",
"oc rsh USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv USD{HOME}/data/appendonly.aof USD{HOME}/data/appendonly.aof-old'",
"oc cp ./backend-redis-dump.rdb USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb",
"oc rollout restart deployment/backend-redis",
"oc rollout status deployment/backend-redis",
"oc rsh USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF'",
"oc rsh USD(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress",
"oc edit configmap redis-config",
"save 900 1 save 300 10 save 60 10000",
"appendonly yes",
"oc rollout restart deployment/backend-redis",
"oc rollout status deployment/backend-redis",
"oc edit configmap redis-config",
"#save 900 1 #save 300 10 #save 60 10000",
"appendonly no",
"oc rollout restart deployment/system-redis",
"oc rollout status deployment/system-redis",
"oc rsh USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv USD{HOME}/data/dump.rdb USD{HOME}/data/dump.rdb-old'",
"oc rsh USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv USD{HOME}/data/appendonly.aof USD{HOME}/data/appendonly.aof-old'",
"oc cp ./system-redis-dump.rdb USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb",
"oc rollout restart deployment/system-redis",
"oc rollout status deployment/system-redis",
"oc rsh USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF'",
"oc rsh USD(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress",
"oc edit configmap redis-config",
"save 900 1 save 300 10 save 60 10000",
"appendonly yes",
"oc rollout restart deployment/system-redis",
"oc rollout status deployment/system-redis",
"oc rollout restart deployment/backend-worker",
"oc rollout status deployment/backend-worker",
"oc patch APIManager/USD{DEPLOYMENT_NAME} --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/system/appSpec\", \"value\":'\"USDSYSTEM_SPEC\"'}]'",
"echo USDSYSTEM_SPEC",
"oc patch deployment/system-app -p '{\"spec\": {\"replicas\": 1}}'",
"oc rollout restart deployment/system-app",
"oc rollout status deployment/system-app",
"oc rollout restart deployment/system-sidekiq",
"oc rollout status deployment/system-sidekiq",
"oc rollout restart deployment/system-searchd",
"oc rollout status deployment/system-searchd",
"oc rsh USD(oc get pods -l 'deployment=system-sidekiq' -o json | jq '.items[0].metadata.name' -r) bash -c 'bundle exec rake zync:resync:domains'"
] | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/operating_red_hat_3scale_api_management/threescale-backup-restore |
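A hedged convenience wrapper around the secret and ConfigMap backup commands listed above; it simply loops over the object names the chapter already enumerates and writes one JSON file per object.

```
# Back up the 3scale secrets and ConfigMaps listed in this chapter in one pass.
for s in system-smtp system-seed system-database backend-internal-api \
         system-events-hook system-app system-recaptcha system-redis \
         zync system-master-apicast; do
  oc get secret "$s" -o json > "${s}.json"
done

for cm in system-environment apicast-environment; do
  oc get configmap "$cm" -o json > "${cm}.json"
done
```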
Appendix B. About Service Interconnect documentation | Appendix B. About Service Interconnect documentation Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Revised on 2025-02-24 19:04:53 UTC | null | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/installation/about-documentation |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/autoscaling_for_instances/proc_providing-feedback-on-red-hat-documentation |
Chapter 5. Customizing the Learning Paths in Red Hat Developer Hub | Chapter 5. Customizing the Learning Paths in Red Hat Developer Hub In Red Hat Developer Hub, you can configure Learning Paths by passing the data into the app-config.yaml file as a proxy. The base URL must include the /developer-hub/learning-paths proxy. Note Due to the use of overlapping pathRewrites for both the learning-path and homepage quick access proxies, you must create the learning-paths configuration ( ^api/proxy/developer-hub/learning-paths ) before you create the homepage configuration ( ^/api/proxy/developer-hub ). For more information about customizing the Home page in Red Hat Developer Hub, see Customizing the Home page in Red Hat Developer Hub . You can provide data to the Learning Path from the following sources: JSON files hosted on GitHub or GitLab. A dedicated service that provides the Learning Path data in JSON format using an API. 5.1. Using hosted JSON files to provide data to the Learning Paths Prerequisites You have installed Red Hat Developer Hub by using either the Operator or Helm chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform . Procedure To access the data from the JSON files, complete the following step: Add the following code to the app-config.yaml file: proxy: endpoints: '/developer-hub': target: https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub/learning-paths': '/redhat-developer/rhdh/main/packages/app/public/learning-paths/data.json' '^/api/proxy/developer-hub/tech-radar': '/redhat-developer/rhdh/main/packages/app/public/tech-radar/data-default.json' '^/api/proxy/developer-hub': '/redhat-developer/rhdh/main/packages/app/public/homepage/data.json' changeOrigin: true secure: true 5.2. Using a dedicated service to provide data to the Learning Paths When using a dedicated service, you can do the following: Use the same service to provide the data to all configurable Developer Hub pages or use a different service for each page. Use the red-hat-developer-hub-customization-provider as an example service, which provides data for both the Home and Tech Radar pages. The red-hat-developer-hub-customization-provider service provides the same data as default Developer Hub data. You can fork the red-hat-developer-hub-customization-provider service repository from GitHub and modify it with your own data, if required. Deploy the red-hat-developer-hub-customization-provider service and the Developer Hub Helm chart on the same cluster. Prerequisites You have installed the Red Hat Developer Hub using Helm chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform . Procedure To use a dedicated service to provide the Learning Path data, complete the following steps: Add the following code to the app-config-rhdh.yaml file: proxy: endpoints: # Other Proxies '/developer-hub/learning-paths': target: USD{LEARNING_PATH_DATA_URL} changeOrigin: true # Change to "false" in case of using self hosted cluster with a self-signed certificate secure: true where the LEARNING_PATH_DATA_URL is defined as http://<SERVICE_NAME>/learning-paths , for example, http://rhdh-customization-provider/learning-paths . Note You can define the LEARNING_PATH_DATA_URL by adding it to rhdh-secrets or by directly replacing it with its value in your custom ConfigMap. Delete the Developer Hub pod to ensure that the new configurations are loaded correctly. | [
"proxy: endpoints: '/developer-hub': target: https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub/learning-paths': '/redhat-developer/rhdh/main/packages/app/public/learning-paths/data.json' '^/api/proxy/developer-hub/tech-radar': '/redhat-developer/rhdh/main/packages/app/public/tech-radar/data-default.json' '^/api/proxy/developer-hub': '/redhat-developer/rhdh/main/packages/app/public/homepage/data.json' changeOrigin: true secure: true",
"proxy: endpoints: # Other Proxies '/developer-hub/learning-paths': target: USD{LEARNING_PATH_DATA_URL} changeOrigin: true # Change to \"false\" in case of using self hosted cluster with a self-signed certificate secure: true"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/customizing/proc-customize-rhdh-learning-paths_configuring-templates |
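A hedged verification sketch that is not part of the original procedure. The Developer Hub route name is a placeholder, jq is assumed to be available, and -k is only needed when the cluster uses a self-signed certificate.

```
# Confirm that the Learning Paths proxy endpoint resolves and returns JSON.
RHDH_HOST=$(oc get route <rhdh_route_name> -o jsonpath='{.spec.host}')
curl -sk "https://${RHDH_HOST}/api/proxy/developer-hub/learning-paths" | jq '.[0]'
```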
Chapter 8. Virtualizing Red Hat Enterprise Linux on Other Platforms | Chapter 8. Virtualizing Red Hat Enterprise Linux on Other Platforms This chapter contains useful reference material for customers running Red Hat Enterprise Linux 6 as a virtualized operating system on other virtualization hosts. 8.1. On VMware ESX Red Hat Enterprise Linux 6.0 and onward provide the vmw_balloon driver, a paravirtualized memory ballooning driver used when running Red Hat Enterprise Linux on VMware hosts. For further information about this driver, refer to http://kb.VMware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1002586 . Red Hat Enterprise Linux 6.3 and onward provide the vmmouse_drv driver, a paravirtualized mouse driver used when running Red Hat Enterprise Linux on VMware hosts. For further information about this driver, refer to http://kb.VMware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=5739104 . Red Hat Enterprise Linux 6.3 and onward provide the vmware_drv driver, a paravirtualized video driver used when running Red Hat Enterprise Linux on VMware hosts. For further information about this driver, refer to http://kb.VMware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1033557 . Red Hat Enterprise Linux 6.3 and onward provide the vmxnet3 driver, a paravirtualized network adapter used when running Red Hat Enterprise Linux on VMware hosts. For further information about this driver, refer to http://kb.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001805 . Red Hat Enterprise Linux 6.4 and onward provide the vmw_pvscsi driver, a paravirtualized SCSI adapter used when running Red Hat Enterprise Linux on VMware hosts. For further information about this driver, refer to http://kb.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010398 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-rhel_virt_on_other_platforms |
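A hedged sketch for confirming that the paravirtualized drivers named in this chapter are present on a running guest; these commands are not part of the original text and only check modules that load as kernel drivers.

```
# List the VMware paravirtualized kernel modules that are currently loaded
lsmod | grep -E 'vmw_balloon|vmxnet3|vmw_pvscsi'

# Show version and description details for a specific driver
modinfo vmxnet3 | head
```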
Chapter 10. Installing a cluster on Azure in a restricted network with user-provisioned infrastructure | Chapter 10. Installing a cluster on Azure in a restricted network with user-provisioned infrastructure In OpenShift Container Platform, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you must use that computer to complete all installation steps. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you have manually created long-term credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 10.1. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 10.1.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 10.1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.2. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 10.2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. 
While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage 10.2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. 
Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. You can view Azure's DNS solution by visiting this example for creating DNS zones. 10.2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas). From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click Next: Solutions. On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click Next: Review + create and then click Create. 10.2.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 10.2.5. Required Azure roles An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements: The Azure account that you use to create the identity is assigned the User Access Administrator and Contributor roles. These roles are required when: Creating a service principal or user-assigned managed identity. Enabling a system-assigned managed identity on a virtual machine. If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the microsoft.directory/servicePrincipals/createAsOwner permission in Microsoft Entra ID. To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 10.2.6. Required Azure permissions for user-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions.
For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 10.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 10.2. Required permissions for creating compute resources Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/deallocate/action Example 10.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 10.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Example 10.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 10.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 10.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 10.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 10.9. Required permissions for creating deployments Microsoft.Resources/deployments/read Microsoft.Resources/deployments/write Microsoft.Resources/deployments/validate/action Microsoft.Resources/deployments/operationstatuses/read Example 10.10. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/write Example 10.11. 
Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 10.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. Example 10.13. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 10.14. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/images/delete Example 10.15. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 10.16. Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Example 10.17. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 10.18. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 10.19. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions related to resource group creation to your subscription. After the resource group is created, you can scope the rest of the permissions to the created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 10.2.7. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. 
If you want to use a custom role, you have created a custom role with the required permissions listed in the Required Azure permissions for user-provisioned infrastructure section. Procedure Log in to the Azure CLI: USD az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter matches the tenant of the subscription that you want to use. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. If you applied the Contributor role to your service principal, assign the User Access Administrator role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator. 10.2.8. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription.
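If you want to confirm which regions your subscription can actually see before you run the installation program, one optional, read-only check is to list them with the Azure CLI that you installed earlier. This is only a convenience sketch and assumes you are still logged in with az login:
# List the region names that Azure reports for the active subscription (read-only check)
$ az account list-locations --query "[].name" -o tsv | sort
Any region you choose at installation time must appear in this list as well as in the supported regions below.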
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 10.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 10.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 10.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 10.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 10.2. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 10.3.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 10.20. Machine types based on 64-bit x86 architecture standardBSFamily standardDADSv5Family standardDASv4Family standardDASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHCSFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 10.3.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 10.21. 
Machine types based on 64-bit ARM architecture standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 10.4. Using the Azure Marketplace offering Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat. To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.13. If required, your VMs are automatically upgraded as part of the installation process. 
Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. If you use the Azure Resource Manager (ARM) template to deploy your worker nodes: Update storageProfile.imageReference by deleting the id parameter and adding the offer , publisher , sku , and version parameters by using the values from your offer. Specify a plan for the virtual machines (VMs). Example 06_workers.json ARM template with an updated storageProfile.imageReference object and a specified plan ... "plan" : { "name": "rh-ocp-worker", "product": "rh-ocp-worker", "publisher": "redhat" }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { ... "storageProfile": { "imageReference": { "offer": "rh-ocp-worker", "publisher": "redhat", "sku": "rh-ocp-worker", "version": "413.92.2023101700" } ... } ... } 10.4.1. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. 
However, you must have an active subscription to access this page. 10.4.2. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent: USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 10.5.
Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 10.5.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 10.5.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a installation. Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VNet to install the cluster under the platform.azure field: networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4 1 Replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Replace <vnet> with the existing virtual network name. 3 Replace <control_plane_subnet> with the existing subnet name to deploy the control plane machines. 4 Replace <compute_subnet> with the existing subnet name to deploy compute machines. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Important Azure Firewall does not work seamlessly with Azure Public Load balancers. Thus, when using Azure Firewall for restricting internet access, the publish field in install-config.yaml should be set to Internal . Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. 10.5.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. 
By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.5.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure. 
Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 10.5.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. 
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exist as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file.
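If you prefer not to copy these values by hand, you can parse them out of the manifest with standard shell tools. The following is a minimal sketch rather than part of the documented procedure; it assumes the attributes appear on single lines in manifests/cluster-infrastructure-02-config.yml , which is the formatting the installation program produces by default.
# Extract the infrastructure ID and resource group name from the manifest
USD export INFRA_ID=`awk '/infrastructureName:/ {print USD2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml`
USD export RESOURCE_GROUP=`awk '/resourceGroupName:/ {print USD2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml`
USD echo "USD{INFRA_ID} USD{RESOURCE_GROUP}"
Both variables should print non-empty values; if either is empty, set it manually as described above.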
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 10.6. Creating the Azure resource group You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role. Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" Note If you want to assign a custom role with all the required permissions to the identity, run the following command: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role <custom_role> \ 1 --scope "USD{RESOURCE_GROUP_ID}" 1 Specifies the custom role name. 10.7. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation.
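If your CLUSTER_NAME value would produce a storage account name that violates these restrictions, you can derive a compliant name before creating the account. The following is a sketch, not part of the official steps; the SA_NAME variable is a hypothetical helper that is not used elsewhere in this procedure, so if you adopt it you must also substitute it for USD{CLUSTER_NAME}sa in the remaining storage commands.
# Lowercase the name, strip characters other than a-z and 0-9, and truncate to 24 characters
USD export SA_NAME=`echo "USD{CLUSTER_NAME}sa" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9' | cut -c1-24`
USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{SA_NAME} --kind Storage --sku Standard_LRS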
Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>."rhel-coreos-extensions"."azure-disk".url'` where: <architecture> Specifies the architecture, valid values include x86_64 or aarch64 . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Copy the local VHD to a blob: USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 10.8. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting that section. 10.9. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. 
Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 10.9.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 10.22. 01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 10.10. Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. 
Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters storageAccount="USD{CLUSTER_NAME}sa" \ 3 --parameters architecture="<architecture>" 4 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of your Azure storage account. 4 Specify the system architecture. Valid values are x64 (default) or Arm64 . 10.10.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 10.23. 02_storage.json ARM template { "USDschema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "architecture": { "type": "string", "metadata": { "description": "The architecture of the Virtual Machines" }, "defaultValue": "x64", "allowedValues": [ "Arm64", "x64" ] }, "baseName": { "type": "string", "minLength": 1, "metadata": { "description": "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "storageAccount": { "type": "string", "metadata": { "description": "The Storage Account name" } }, "vhdBlobURL": { "type": "string", "metadata": { "description": "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables": { "location": "[resourceGroup().location]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName": "[parameters('baseName')]", "imageNameGen2": "[concat(parameters('baseName'), '-gen2')]", "imageRelease": "1.0.0" }, "resources": [ { "apiVersion": "2021-10-01", "type": "Microsoft.Compute/galleries", "name": "[variables('galleryName')]", "location": "[variables('location')]", "resources": [ { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageName')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V1", "identifier": { "offer": "rhcos", "publisher": "RedHat", "sku": "basic" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageName')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] }, { "apiVersion": "2021-10-01", "type": "images", 
"name": "[variables('imageNameGen2')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V2", "identifier": { "offer": "rhcos-gen2", "publisher": "RedHat-gen2", "sku": "gen2" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageNameGen2')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] } ] } ] } 10.11. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 10.11.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 10.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 10.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 10.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 10.12. Creating networking and load balancing components in Azure You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. 
Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The name of the private DNS zone. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record in the public zone for the API public load balancer. The USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the api DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 10.12.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 10.24. 03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : 
"Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip-v4", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : "[variables('masterLoadBalancerName')]" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", 
"properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 10.13. Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 10.13.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 10.25. 
04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "clusterNsgName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), 
'/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 10.14. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's Azure Resource Manager (ARM) template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
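Optionally, before deploying the control plane machines, you can verify that the earlier template deployments completed successfully. This is an extra check rather than a documented step; a minimal sketch with the az CLI:
# Expect the vnet, storage, infra, and bootstrap deployments to report Succeeded
USD az deployment group list -g USD{RESOURCE_GROUP} --query "[].{name:name, state:properties.provisioningState}" -o table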
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the control plane nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 10.14.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 10.26. 05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "privateDNSZoneName" : { "type" : "string", "defaultValue" : "", "metadata" : { "description" : "unused" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : "[length(variables('vmNames'))]" 
}, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 10.15. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. 
Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 10.16. Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's ARM template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 10.16.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 10.27. 
06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the each Node Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" : { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', 
concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 10.17. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 10.18. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 10.19. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. 
The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.20. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. 
If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 10.21. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 10.22. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service | [
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"413.92.2023101700\" } } }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip-v4\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"[variables('masterLoadBalancerName')]\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" 
: \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, 
\"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', 
variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, 
\"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/installing-restricted-networks-azure-user-provisioned |
Chapter 20. Granting sudo access to an IdM user on an IdM client | Chapter 20. Granting sudo access to an IdM user on an IdM client Learn more about granting sudo access to users in Identity Management. 20.1. Sudo access on an IdM client System administrators can grant sudo access to allow non-root users to execute administrative commands that are normally reserved for the root user. When users need to perform such a command, they precede it with sudo. After entering their password, the command is executed as if they were the root user. To execute a sudo command as another user or group, such as a database service account, you can configure a RunAs alias for a sudo rule. If a Red Hat Enterprise Linux (RHEL) 8 host is enrolled as an Identity Management (IdM) client, you can specify sudo rules defining which IdM users can perform which commands on the host in the following ways: Locally in the /etc/sudoers file Centrally in IdM You can create a central sudo rule for an IdM client using the command line (CLI) or the IdM Web UI. In RHEL 8.4 and later, you can also configure password-less authentication for sudo using the Generic Security Service Application Programming Interface (GSSAPI), the native way for UNIX-based operating systems to access and authenticate Kerberos services. You can use the pam_sss_gss.so Pluggable Authentication Module (PAM) to invoke GSSAPI authentication via the SSSD service, allowing users to authenticate to the sudo command with a valid Kerberos ticket. Additional resources Managing sudo access 20.2. Granting sudo access to an IdM user on an IdM client using the CLI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. For example, complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as the IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line. No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Retrieve a Kerberos ticket as the IdM admin. Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named idm_user_reboot: Add the /usr/sbin/reboot command to the idm_user_reboot rule: Apply the idm_user_reboot rule to the IdM idmclient host: Add the idm_user account to the idm_user_reboot rule: Optional: Define the validity of the idm_user_reboot rule: To define the time at which a sudo rule starts to be valid, use the ipa sudorule-mod sudo_rule_name command with the --setattr sudonotbefore=DATE option. The DATE value must follow the yyyymmddHHMMSSZ format, with seconds specified explicitly. For example, to set the start of the validity of the idm_user_reboot rule to 31 December 2025 12:34:00, enter: To define the time at which a sudo rule stops being valid, use the --setattr sudonotafter=DATE option. For example, to set the end of the idm_user_reboot rule validity to 31 December 2026 12:34:00, enter: Note Propagating the changes from the server to the client can take a few minutes.
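The procedure above maps to ipa commands along the following lines. This is a sketch rather than a verbatim transcript: command output is omitted, and the client's fully qualified name is assumed to be idmclient.idm.example.com, matching the Web UI example later in this chapter.
# Authenticate as the IdM administrative user
$ kinit admin
# Register the command, create the rule, and attach the command, host, and user to it
$ ipa sudocmd-add /usr/sbin/reboot
$ ipa sudorule-add idm_user_reboot
$ ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot'
$ ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com
$ ipa sudorule-add-user idm_user_reboot --users idm_user
# Optional validity window, in yyyymmddHHMMSSZ format
$ ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z
$ ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z
# On idmclient, idm_user can then list and use the new rule
$ sudo -l
$ sudo /usr/sbin/reboot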
Verification Log in to the idmclient host as the idm_user account. Display which sudo commands the idm_user account is allowed to run. Reboot the machine using sudo. Enter the password for idm_user when prompted: 20.3. Granting sudo access to an AD user on an IdM client using the CLI Identity Management (IdM) system administrators can use IdM user groups to set access permissions, host-based access control, sudo rules, and other controls on IdM users. IdM user groups grant and restrict access to IdM domain resources. You can add both Active Directory (AD) users and AD groups to IdM user groups. To do that: Add the AD users or groups to a non-POSIX external IdM group. Add the non-POSIX external IdM group to an IdM POSIX group. You can then manage the privileges of the AD users by managing the privileges of the POSIX group. For example, you can grant sudo access for a specific command to an IdM POSIX user group on a specific IdM host. Note It is also possible to add AD user groups as members of IdM external groups. This might make it easier to define policies for Windows users by keeping the user and group management within a single AD realm. Important Do not use ID overrides of AD users for sudo rules in IdM. ID overrides of AD users represent only POSIX attributes of AD users, not AD users themselves. You can add ID overrides as group members. However, you can only use this functionality to manage IdM resources in the IdM API. The possibility to add ID overrides as group members is not extended to POSIX environments and you therefore cannot use it for membership in sudo or host-based access control (HBAC) rules. Follow this procedure to create the ad_users_reboot sudo rule to grant the [email protected] AD user the permission to run the /usr/sbin/reboot command, which is normally reserved for the root user, on the idmclient IdM host. [email protected] is a member of the ad_users_external non-POSIX group, which is, in turn, a member of the ad_users POSIX group. Prerequisites You have obtained the IdM admin Kerberos ticket-granting ticket (TGT). A cross-forest trust exists between the IdM domain and the ad-domain.com AD domain. No local administrator account is present on the idmclient host: the administrator user is not listed in the local /etc/passwd file. Procedure Create the ad_users group that contains the ad_users_external group with the [email protected] member: Optional: Create or select a corresponding group in the AD domain to use to manage AD users in the IdM realm. You can use multiple AD groups and add them to different groups on the IdM side. Create the ad_users_external group and indicate that it contains members from outside the IdM domain by adding the --external option: Note Ensure that the external group that you specify here is an AD security group with a global or universal group scope as defined in the Active Directory security groups document. For example, the Domain users or Domain admins AD security groups cannot be used because their group scope is domain local. Create the ad_users group: Add the [email protected] AD user to ad_users_external as an external member: The AD user must be identified by a fully-qualified name, such as DOMAIN\user_name or user_name@DOMAIN. The AD identity is then mapped to the AD SID for the user. The same applies to adding AD groups.
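The group-management steps described so far correspond to ipa commands along these lines. This is a sketch: the --desc values are illustrative, output and interactive member prompts are omitted, and the remaining steps of the procedure continue below.
# Create the external (non-POSIX) group and the POSIX group
$ ipa group-add ad_users_external --desc='AD users external map' --external
$ ipa group-add ad_users --desc='AD users'
# Add the AD user as an external member, identified by its fully-qualified name
$ ipa group-add-member ad_users_external --external '[email protected]'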
Add ad_users_external to ad_users as a member: Grant the members of ad_users the permission to run /usr/sbin/reboot on the idmclient host: Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named ad_users_reboot : Add the /usr/sbin/reboot command to the ad_users_reboot rule: Apply the ad_users_reboot rule to the IdM idmclient host: Add the ad_users group to the ad_users_reboot rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as [email protected] , an indirect member of the ad_users group: Optional: Display the sudo commands that [email protected] is allowed to execute: Reboot the machine using sudo . Enter the password for [email protected] when prompted: Additional resources Active Directory users and Identity Management groups Include users and groups from a trusted Active Directory domain into SUDO rules 20.4. Granting sudo access to an IdM user on an IdM client using the IdM Web UI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. Complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the command line, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Add the /usr/sbin/reboot command to the IdM database of sudo commands: Navigate to Policy Sudo Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command you want the user to be able to perform using sudo : /usr/sbin/reboot . Figure 20.1. Adding IdM sudo command Click Add . Use the new sudo command entry to create a sudo rule to allow idm_user to reboot the idmclient machine: Navigate to Policy Sudo Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: idm_user_reboot . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "idm_user_reboot" dialog box. In the Add users into sudo rule "idm_user_reboot" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "idm_user_reboot" dialog box. In the Add hosts into sudo rule "idm_user_reboot" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box. 
In the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box in the Available column, check the /usr/sbin/reboot checkbox, and move it to the Prospective column. Click Add to return to the idm_sudo_reboot page. Figure 20.2. Adding IdM sudo rule Click Save in the top left corner. The new rule is enabled by default. Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If the sudo rule is configured correctly, the machine reboots. 20.5. Creating a sudo rule on the CLI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule on the command line called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed in the /opt/third-party-app/bin/report directory. You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Create a sudo rule named run_third-party-app_report : Use the --users= <user> option to specify the RunAs user for the sudorule-add-runasuser command: The user (or group specified with the --groups=* option) can be external to IdM, such as a local service account or an Active Directory user. Do not add a % prefix for group names. Add the /opt/third-party-app/bin/report command to the run_third-party-app_report rule: Apply the run_third-party-app_report rule to the IdM idmclient host: Add the idm_user account to the run_third-party-app_report rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 20.6. Creating a sudo rule in the IdM WebUI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule in the IdM WebUI called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. 
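This example runs the command as a RunAs user. If the command must instead run as a group, the CLI equivalent is a RunAs group, for example (the group name here is only illustrative):
ipa sudorule-add-runasgroup run_third-party-app_report --groups=thirdpartyapp_admins
The Web UI exposes the same setting in the RunAs Groups subsection of the As Whom section described below.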
Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed in the /opt/third-party-app/bin/report directory. You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Navigate to Policy Sudo Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command: /opt/third-party-app/bin/report . Click Add . Use the new sudo command entry to create the new sudo rule: Navigate to Policy Sudo Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: run_third-party-app_report . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "run_third-party-app_report" dialog box. In the Add users into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "run_third-party-app_report" dialog box. In the Add hosts into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box. In the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box in the Available column, check the /opt/third-party-app/bin/report checkbox, and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Specify the RunAs user: In the As Whom section, check the Specified Users and Groups radio button. In the RunAs Users subsection, click Add to open the Add RunAs users into sudo rule "run_third-party-app_report" dialog box. In the Add RunAs users into sudo rule "run_third-party-app_report" dialog box, enter the thirdpartyapp service account in the External box and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Click Save in the top left corner. The new rule is enabled by default. Figure 20.3. Details of the sudo rule Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 20.7. 
Enabling GSSAPI authentication for sudo on an IdM client Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. With this configuration, IdM users can authenticate to the sudo command with their Kerberos ticket. Prerequisites You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host. The idmclient host is running RHEL 8.4 or later. You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure Open the /etc/sssd/sssd.conf configuration file. Add the following entry to the [domain/ <domain_name> ] section. Save and close the /etc/sssd/sssd.conf file. Restart the SSSD service to load the configuration changes. On RHEL 8.8 or later: Optional: Determine if you have selected the sssd authselect profile: If the sssd authselect profile is selected, enable GSSAPI authentication: If the sssd authselect profile is not selected, select it and enable GSSAPI authentication: On RHEL 8.7 or earlier: Open the /etc/pam.d/sudo PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file. Save and close the /etc/pam.d/sudo file. Verification Log into the host as the idm_user account. Verify that you have a ticket-granting ticket as the idm_user account. Optional: If you do not have Kerberos credentials for the idm_user account, delete your current Kerberos credentials and request the correct ones. Reboot the machine using sudo , without specifying a password. Additional resources The GSSAPI entry in the IdM terminology listing Granting sudo access to an IdM user on an IdM client using IdM Web UI Granting sudo access to an IdM user on an IdM client using the CLI pam_sss_gss (8) and sssd.conf (5) man pages on your system 20.8. Enabling GSSAPI authentication and enforcing Kerberos authentication indicators for sudo on an IdM client Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. Additionally, only users who have logged in with a smart card will authenticate to those commands with their Kerberos ticket. Note You can use this procedure as a template to configure GSSAPI authentication with SSSD for other PAM-aware services, and further restrict access to only those users that have a specific authentication indicator attached to their Kerberos ticket. Prerequisites You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host. You have configured smart card authentication for the idmclient host. The idmclient host is running RHEL 8.4 or later. You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure Open the /etc/sssd/sssd.conf configuration file. Add the following entries to the [domain/ <domain_name> ] section. Save and close the /etc/sssd/sssd.conf file. Restart the SSSD service to load the configuration changes. 
On RHEL 8.8 or later: Determine if you have selected the sssd authselect profile: Optional: Select the sssd authselect profile: Enable GSSAPI authentication: Configure the system to authenticate only users with smart cards: On RHEL 8.7 or earlier: Open the /etc/pam.d/sudo PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file. Save and close the /etc/pam.d/sudo file. Open the /etc/pam.d/sudo-i PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo-i file. Save and close the /etc/pam.d/sudo-i file. Verification Log into the host as the idm_user account and authenticate with a smart card. Verify that you have a ticket-granting ticket as the smart card user. Display which sudo rules the idm_user account is allowed to perform. Reboot the machine using sudo , without specifying a password. Additional resources SSSD options controlling GSSAPI authentication for PAM services The GSSAPI entry in the IdM terminology listing Configuring Identity Management for smart card authentication Kerberos authentication indicators Granting sudo access to an IdM user on an IdM client using IdM Web UI Granting sudo access to an IdM user on an IdM client using the CLI . pam_sss_gss (8) and sssd.conf (5) man pages on your system 20.9. SSSD options controlling GSSAPI authentication for PAM services You can use the following options for the /etc/sssd/sssd.conf configuration file to adjust the GSSAPI configuration within the SSSD service. pam_gssapi_services GSSAPI authentication with SSSD is disabled by default. You can use this option to specify a comma-separated list of PAM services that are allowed to try GSSAPI authentication using the pam_sss_gss.so PAM module. To explicitly disable GSSAPI authentication, set this option to - . pam_gssapi_indicators_map This option only applies to Identity Management (IdM) domains. Use this option to list Kerberos authentication indicators that are required to grant PAM access to a service. Pairs must be in the format <PAM_service> :_<required_authentication_indicator>_ . Valid authentication indicators are: otp for two-factor authentication radius for RADIUS authentication pkinit for PKINIT, smart card, or certificate authentication hardened for hardened passwords pam_gssapi_check_upn This option is enabled and set to true by default. If this option is enabled, the SSSD service requires that the user name matches the Kerberos credentials. If false , the pam_sss_gss.so PAM module authenticates every user that is able to obtain the required service ticket. Examples The following options enable Kerberos authentication for the sudo and sudo-i services, requires that sudo users authenticated with a one-time password, and user names must match the Kerberos principal. Because these settings are in the [pam] section, they apply to all domains: You can also set these options in individual [domain] sections to overwrite any global values in the [pam] section. The following options apply different GSSAPI settings to each domain: For the idm.example.com domain Enable GSSAPI authentication for the sudo and sudo -i services. Require certificate or smart card authentication authenticators for the sudo command. Require one-time password authentication authenticators for the sudo -i command. Enforce matching user names and Kerberos principals. For the ad.example.com domain Enable GSSAPI authentication only for the sudo service. Do not enforce matching user names and principals. 
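To switch GSSAPI authentication off entirely for a domain, the same option accepts a dash; a minimal example:
[domain/ad.example.com]
pam_gssapi_services = -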
Additional resources Kerberos authentication indicators 20.10. Troubleshooting GSSAPI authentication for sudo If you are unable to authenticate to the sudo service with a Kerberos ticket from IdM, use the following scenarios to troubleshoot your configuration. Prerequisites You have enabled GSSAPI authentication for the sudo service. See Enabling GSSAPI authentication for sudo on an IdM client . You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure If you see the following error, the Kerberos service might not able to resolve the correct realm for the service ticket based on the host name: In this situation, add the hostname directly to [domain_realm] section in the /etc/krb5.conf Kerberos configuration file: If you see the following error, you do not have any Kerberos credentials: In this situation, retrieve Kerberos credentials with the kinit utility or authenticate with SSSD: If you see either of the following errors in the /var/log/sssd/sssd_pam.log log file, the Kerberos credentials do not match the username of the user currently logged in: In this situation, verify that you authenticated with SSSD, or consider disabling the pam_gssapi_check_upn option in the /etc/sssd/sssd.conf file: For additional troubleshooting, you can enable debugging output for the pam_sss_gss.so PAM module. Add the debug option at the end of all pam_sss_gss.so entries in PAM files, such as /etc/pam.d/sudo and /etc/pam.d/sudo-i : Try to authenticate with the pam_sss_gss.so module and review the console output. In this example, the user did not have any Kerberos credentials. 20.11. Using an Ansible playbook to ensure sudo access for an IdM user on an IdM client In Identity Management (IdM), you can ensure sudo access to a specific command is granted to an IdM user account on a specific IdM host. Complete this procedure to ensure a sudo rule named idm_user_reboot exists. The rule grants idm_user the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You have ensured the presence of a user account for idm_user in IdM and unlocked the account by creating a password for the user . For details on adding a new IdM user using the command line, see link: Adding users using the command line . No local idm_user account exists on idmclient . The idm_user user is not listed in the /etc/passwd file on idmclient . Procedure Create an inventory file, for example inventory.file , and define ipaservers in it: Add one or more sudo commands: Create an ensure-reboot-sudocmd-is-present.yml Ansible playbook that ensures the presence of the /usr/sbin/reboot command in the IdM database of sudo commands. 
To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudocmd/ensure-sudocmd-is-present.yml file: Run the playbook: Create a sudo rule that references the commands: Create an ensure-sudorule-for-idmuser-on-idmclient-is-present.yml Ansible playbook that uses the sudo command entry to ensure the presence of a sudo rule. The sudo rule allows idm_user to reboot the idmclient machine. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudorule/ensure-sudorule-is-present.yml file: Run the playbook: Verification Test that the sudo rule whose presence you have ensured on the IdM server works on idmclient by verifying that idm_user can reboot idmclient using sudo . Note that it can take a few minutes for the changes made on the server to take effect on the client. Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If sudo is configured correctly, the machine reboots. Additional resources See the README-sudocmd.md , README-sudocmdgroup.md , and README-sudorule.md files in the /usr/share/doc/ansible-freeipa/ directory. | [
"kinit admin",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add idm_user_reboot --------------------------------- Added Sudo Rule \"idm_user_reboot\" --------------------------------- Rule name: idm_user_reboot Enabled: TRUE",
"ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot' Rule name: idm_user_reboot Enabled: TRUE Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com Rule name: idm_user_reboot Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user idm_user_reboot --users idm_user Rule name: idm_user_reboot Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z",
"ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idm_user on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for idm_user:",
"ipa group-add --desc='AD users external map' ad_users_external --external ------------------------------- Added group \"ad_users_external\" ------------------------------- Group name: ad_users_external Description: AD users external map",
"ipa group-add --desc='AD users' ad_users ---------------------- Added group \"ad_users\" ---------------------- Group name: ad_users Description: AD users GID: 129600004",
"ipa group-add-member ad_users_external --external \"[email protected]\" [member user]: [member group]: Group name: ad_users_external Description: AD users external map External member: S-1-5-21-3655990580-1375374850-1633065477-513 ------------------------- Number of members added 1 -------------------------",
"ipa group-add-member ad_users --groups ad_users_external Group name: ad_users Description: AD users GID: 129600004 Member groups: ad_users_external ------------------------- Number of members added 1 -------------------------",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add ad_users_reboot --------------------------------- Added Sudo Rule \"ad_users_reboot\" --------------------------------- Rule name: ad_users_reboot Enabled: True",
"ipa sudorule-add-allow-command ad_users_reboot --sudocmds '/usr/sbin/reboot' Rule name: ad_users_reboot Enabled: True Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host ad_users_reboot --hosts idmclient.idm.example.com Rule name: ad_users_reboot Enabled: True Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user ad_users_reboot --groups ad_users Rule name: ad_users_reboot Enabled: TRUE User Groups: ad_users Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ssh [email protected]@ipaclient Password:",
"[[email protected]@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[[email protected]@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for [email protected]:",
"sudo /usr/sbin/reboot [sudo] password for idm_user:",
"kinit admin",
"ipa sudocmd-add /opt/third-party-app/bin/report ---------------------------------------------------- Added Sudo Command \"/opt/third-party-app/bin/report\" ---------------------------------------------------- Sudo Command: /opt/third-party-app/bin/report",
"ipa sudorule-add run_third-party-app_report -------------------------------------------- Added Sudo Rule \"run_third-party-app_report\" -------------------------------------------- Rule name: run_third-party-app_report Enabled: TRUE",
"ipa sudorule-add-runasuser run_third-party-app_report --users= thirdpartyapp Rule name: run_third-party-app_report Enabled: TRUE RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-allow-command run_third-party-app_report --sudocmds '/opt/third-party-app/bin/report' Rule name: run_third-party-app_report Enabled: TRUE Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host run_third-party-app_report --hosts idmclient.idm.example.com Rule name: run_third-party-app_report Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user run_third-party-app_report --users idm_user Rule name: run_third-party-app_report Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect enable-feature with-gssapi",
"authselect select sssd with-gssapi",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"ssh -l [email protected] localhost [email protected]'s password:",
"[idmuser@idmclient ~]USD klist Ticket cache: KCM:1366201107 Default principal: [email protected] Valid starting Expires Service principal 01/08/2021 09:11:48 01/08/2021 19:11:48 krbtgt/[email protected] renew until 01/15/2021 09:11:44",
"[idm_user@idmclient ~]USD kdestroy -A [idm_user@idmclient ~]USD kinit [email protected] Password for [email protected] :",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit, sudo-i:pkinit",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect select sssd",
"authselect enable-feature with-gssapi",
"authselect with-smartcard-required",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"ssh -l [email protected] localhost PIN for smart_card",
"[idm_user@idmclient ~]USD klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 02/15/2021 16:29:48 02/16/2021 02:29:48 krbtgt/[email protected] renew until 02/22/2021 16:29:44",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idmuser on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[pam] pam_gssapi_services = sudo , sudo-i pam_gssapi_indicators_map = sudo:otp pam_gssapi_check_upn = true",
"[domain/ idm.example.com ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit , sudo-i:otp pam_gssapi_check_upn = true [domain/ ad.example.com ] pam_gssapi_services = sudo pam_gssapi_check_upn = false",
"Server not found in Kerberos database",
"[idm-user@idm-client ~]USD cat /etc/krb5.conf [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM server.example.com = EXAMPLE.COM",
"No Kerberos credentials available",
"[idm-user@idm-client ~]USD kinit [email protected] Password for [email protected] :",
"User with UPN [ <UPN> ] was not found. UPN [ <UPN> ] does not match target user [ <username> ].",
"[idm-user@idm-client ~]USD cat /etc/sssd/sssd.conf pam_gssapi_check_upn = false",
"cat /etc/pam.d/sudo #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include system-auth account include system-auth password include system-auth session include system-auth",
"cat /etc/pam.d/sudo-i #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"[idm-user@idm-client ~]USD sudo ls -l /etc/sssd/sssd.conf pam_sss_gss: Initializing GSSAPI authentication with SSSD pam_sss_gss: Switching euid from 0 to 1366201107 pam_sss_gss: Trying to establish security context pam_sss_gss: SSSD User name: [email protected] pam_sss_gss: User domain: idm.example.com pam_sss_gss: User principal: pam_sss_gss: Target name: [email protected] pam_sss_gss: Using ccache: KCM: pam_sss_gss: Acquiring credentials, principal name will be derived pam_sss_gss: Unable to read credentials from [KCM:] [maj:0xd0000, min:0x96c73ac3] pam_sss_gss: GSSAPI: Unspecified GSS failure. Minor code may provide more information pam_sss_gss: GSSAPI: No credentials cache found pam_sss_gss: Switching euid from 1366200907 to 0 pam_sss_gss: System error [5]: Input/output error",
"[ipaservers] server.idm.example.com",
"--- - name: Playbook to manage sudo command hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure sudo command is present - ipasudocmd: ipaadmin_password: \"{{ ipaadmin_password }}\" name: /usr/sbin/reboot state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-reboot-sudocmd-is-present.yml",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure a sudorule is present granting idm_user the permission to run /usr/sbin/reboot on idmclient - ipasudorule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user_reboot description: A test sudo rule. allow_sudocmd: /usr/sbin/reboot host: idmclient.idm.example.com user: idm_user state: present",
"ansible-playbook -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-sudorule-for-idmuser-on-idmclient-is-present.yml",
"sudo /usr/sbin/reboot [sudo] password for idm_user:"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/granting-sudo-access-to-an-IdM-user-on-an-IdM-client_using-ansible-to-install-and-manage-idm |
Chapter 1. Postinstallation configuration overview | Chapter 1. Postinstallation configuration overview After installing OpenShift Container Platform, a cluster administrator can configure and customize the following components: Machine Bare metal Cluster Node Network Storage Users Alerts and notifications 1.1. Configuration tasks to perform after installation Cluster administrators can perform the following postinstallation configuration tasks: Configure operating system features : Machine Config Operator (MCO) manages MachineConfig objects. By using MCO, you can perform the following tasks on an OpenShift Container Platform cluster: Configure nodes by using MachineConfig objects Configure MCO-related custom resources Configure bare metal nodes : The Bare Metal Operator (BMO) implements a Kubernetes API for managing bare metal hosts. It maintains an inventory of available bare metal hosts as instances of the BareMetalHost Custom Resource Definition (CRD). The Bare Metal Operator can: Inspect the host's hardware details and report them on the corresponding BareMetalHost. This includes information about CPUs, RAM, disks, NICs, and more. Inspect the host's firmware and configure BIOS settings. Provision hosts with a desired image. Clean a host's disk contents before or after provisioning. Configure cluster features : As a cluster administrator, you can modify the configuration resources of the major features of an OpenShift Container Platform cluster. These features include: Image registry Networking configuration Image build behavior Identity provider The etcd configuration Machine set creation to handle the workloads Cloud provider credential management Configure cluster components to be private : By default, the installation program provisions OpenShift Container Platform by using a publicly accessible DNS and endpoints. If you want your cluster to be accessible only from within an internal network, configure the following components to be private: DNS Ingress Controller API server Perform node operations : By default, OpenShift Container Platform uses Red Hat Enterprise Linux CoreOS (RHCOS) compute machines. As a cluster administrator, you can perform the following operations with the machines in your OpenShift Container Platform cluster: Add and remove compute machines Add and remove taints and tolerations to the nodes Configure the maximum number of pods per node Enable Device Manager Configure network : After installing OpenShift Container Platform, you can configure the following: Ingress cluster traffic Node port service range Network policy Enabling the cluster-wide proxy Configure storage : By default, containers operate using ephemeral storage or transient local storage. The ephemeral storage has a lifetime limitation. TO store the data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods: Dynamic provisioning : You can dynamically provision storage on demand by defining and creating storage classes that control different levels of storage, including storage access. Static provisioning : You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options. Configure users : OAuth access tokens allow users to authenticate themselves to the API. 
As a cluster administrator, you can configure OAuth to perform the following tasks: Specify an identity provider Use role-based access control to define and supply permissions to users Install an Operator from OperatorHub Manage alerts and notifications : By default, firing alerts are displayed on the Alerting UI of the web console. You can also configure OpenShift Container Platform to send alert notifications to external systems. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/post-installation_configuration/post-install-configuration-overview |
4.124. ksh | 4.124. ksh 4.124.1. RHBA-2012:1428 - ksh bug fix update An updated ksh package that fixes one bug is now available for Red Hat Enterprise Linux 6 Extended Update Support. KSH-93 is the most recent version of the KornShell by David Korn of AT&T Bell Laboratories. KornShell is a shell programming language which is also compatible with sh, the original Bourne Shell. Bug Fix BZ# 863947 Previously, ksh did not allocate the correct amount of memory for its data structures containing information about file descriptors. When running a task that used file descriptors extensively, ksh terminated unexpectedly with a segmentation fault. With this update, the proper amount of memory is allocated and ksh no longer crashes if file descriptors are used extensively. All users of ksh are advised to upgrade to this updated package, which fixes this bug. 4.124.2. RHBA-2011:1647 - ksh bug fix update An updated ksh package that fixes various bugs is now available for Red Hat Enterprise Linux 6. KSH-93 is the most recent version of the KornShell by David Korn of AT&T Bell Laboratories. KornShell is a shell programming language which is also compatible with sh, the original Bourne Shell. Bug Fixes BZ# 702016 Previously, ksh did not always wait for a pipeline to complete when the pipefail option was used. Consequently, a failed exit status was erroneously reported even when the pipeline had not failed. With this update, the code has been improved and the pipefail option now functions as expected. BZ# 702013 , BZ# 728900 When running a ksh script the exit code of a child process was not preserved. Consequently, when a script asked for such an exit code, the wrong value was reported. With this update, an upstream patch has been applied which fixes the problem. BZ# 702015 File name completion used after an environment variable failed and ksh reported a "bad substitution" error. With this update, an upstream patch has been applied which fixes the problem. BZ# 702011 In POSIX functions, a function defined without using the, "function", keyword, the value of the variable "USD0" was changed to the name of the function instead of keeping the original value, the name of the caller function. With this update an upstream patch has been applied to correct the code and ksh keeps the name of the caller function in "USD0" as expected. BZ# 701890 Previously, when the ksh built-in "kill" command was called with a very large, non-existent PID value, it was interpreted as " -1". The "-1" argument to the kill command is for terminating all processes. Consequently, all processes owned by the user were killed. With this update a patch has been applied and ksh now checks for a valid process ID. BZ##683734 If the IFS variable was unset inside a function used in a script, the memory being used was erroneously freed. Consequently, ksh would terminate unexpectedly. With this update, an upstream patch has been applied which still allows the IFS variable to be unset, but no longer frees the memory. Thus the problem is fixed, and ksh no longer crashes in the scenario described. BZ# 702014 Previously, ksh treated an array declaration as a definition. Consequently, the array contained one element after the declaration. This bug has been fixed, and now an array is correctly reported as empty after a declaration. BZ# 742244 Previously, when using ksh, ksh became unresponsive when pipes were used in a "eval" argument. With this update an upstream patch has been applied and the ksh no longer hangs in the scenario described. 
BZ# 743842 ksh could return the exit code of the process to have used the same PID number, when PID numbers were being reused after many hundreds of iterations of a script. With this update the code has been fixed and the error no longer occurs in the scenario described. All users of ksh are advised to upgrade to this updated package, which fixes these bugs. 4.124.3. RHBA-2012:0004 - ksh bug fix update An updated ksh package that fixes one bug is now available for Red Hat Enterprise Linux 6. KSH-93 is the most recent version of the KornShell by David Korn of AT&T Bell Laboratories. KornShell is a shell programming language which is also compatible with sh, the original Bourne Shell. Bug Fix BZ# 768917 When exiting a subshell after a command substitution, ksh could prematurely exit without any error. With this update, ksh no longer terminates under these circumstances and all subsequent commands are processed correctly. All users of ksh are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/ksh |
3.4. Listing Logical Networks | 3.4. Listing Logical Networks This Ruby example lists the logical networks. # Get the reference to the root of the services tree: system_service = connection.system_service # Get the reference to the service that manages the # collection of networks: nws_service = system_service.networks_service # Retrieve the list of networks and for each one # print its name: nws = nws_service.list nws.each do |nw| puts nw.name end In an environment with only the default management network, the example outputs: For more information, see NetworksService:list . | [
"Get the reference to the root of the services tree: system_service = connection.system_service Get the reference to the service that manages the collection of networks: nws_service = system_service.networks_service Retrieve the list of clusters and for each one print its name: nws = nws_service.list nws.each do |nw| puts nw.name end",
"ovirtmgmt"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/listing_logical_networks |
Chapter 4. Uninstalling OpenShift Data Foundation | Chapter 4. Uninstalling OpenShift Data Foundation 4.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/uninstalling_openshift_data_foundation |
Chapter 3. Migrating from an embedded PostgreSQL 10 database to an external PostgreSQL 10 database | Chapter 3. Migrating from an embedded PostgreSQL 10 database to an external PostgreSQL 10 database Important Before scaling system.appSpec.replicas to 1 , the database should be upgraded to the supported version, which is currently PostgreSQL 13. See Red Hat 3scale API Management Supported Configurations This documentation is about migrating from an embedded PostgreSQL 10 database to an external PostgreSQL 10 database. To upgrade from an external PostgreSQL 10 database to an external PostgreSQL 13 database, you must following the official PostgreSQL documentation . Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. The process to move from an embedded PostgreSQL database to and external PostgreSQL database should happen with the same DB version. In this migration guide, it should be PostgreSQL 10. You should use external databases for production environments. If you are using PostgreSQL as your system-database , use the supported version for external database installation with 3scale. Important These steps are general guidelines. Exact steps may vary depending on your operating system, version of PostgreSQL, and specific requirements of your database. Read the PostgreSQL documentation and release notes carefully before upgrading. Test this procedure in a non-production environment before applying it to a production deployment. This process disrupts the provision of the service until the procedure finishes. Due to this disruption, be sure to have a maintenance window. Procedure Use APIManager customer resource (CR) to scale down the system-app DeploymentConfig (DC): apiVersion: apps.3scale.net/v1alpha1 kind: APIManager metadata: name: <apimanager_sample> spec: system: appSpec: replicas: 0 wildcardDomain: <example.com> Verify that the pods are scaled down: USD oc get deploymentconfig system-app -o jsonpath='{.status.availableReplicas}{"\n"}' 0 Wait for all the 3scale pods to have the status of Terminated before proceeding with the PostgreSQL migration. Make a backup of the existing PostgreSQL database, including all data, configurations, and user accounts: USD DB_USER=USD(oc get secret system-database -o jsonpath="{.data.DB_USER}" | base64 --decode) USD DATABASE_NAME=USD(oc get secret system-database -o jsonpath="{.data.URL}" | base64 --decode | cut -d '/' -f4) Important Do not pipe to stdout . Binary files get corrupted. Dump with custom format: USD oc rsh USD(oc get pods -l 'deploymentConfig=system-postgresql' -o json | jq -r '.items[0].metadata.name') bash -c "pg_dump -U USDDB_USER -F c USDDATABASE_NAME -f /tmp/<backupfilename>.backup" Download the backup: USD oc cp USD(oc get pods -l 'deploymentConfig=system-postgresql' -o json | jq -r '.items[0].metadata.name'):/tmp/<backupfilename>.backup <backupfilename>.backup Install the same version of PostgreSQL 10 that you deployed on 3scale in your target external system. Download the installation package from the PostgreSQL website following the installation instructions. 
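Once the packages are installed, it is worth confirming that the target host runs the same major version before restoring, for example by running psql --version (and postgres --version on the database server); both should report a 10.x release matching the embedded database.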
Copy and restore the backup you made of the existing PostgreSQL database, including all data, configurations, and user accounts to the target external system. Create a new database in PostgreSQL: USD createdb -U <username> <databasename> Import the data from the backup file into the new PostgreSQL database. Restore with custom format: USD pg_restore [--host <databasehostname>] -U <username> -d <databasename> --verbose -F c <backupfilename>.backup Verify that the data was successfully imported into the new PostgreSQL database by connecting to the database and running queries: postgresql://<username>:<password>@<databasehostname>/<databasename> Update system-database secret: USD oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: system-database stringData: DB_PASSWORD: <password> DB_USER: <username> URL: "postgresql://<username>:<password>@<databasehostname>:<databaseport>/<databasename>" type: Opaque EOF Update APImanager CR to enable external database and scale up system: USD oc patch apimanager <apimanager_sample> --type=merge --patch '{"spec": {"system": {"database": null, "appSpec": {"replicas": 1}}, "externalComponents": {"system": {"database": true}}}}' Remove local postgresql deployment: USD oc delete service system-postgresql USD oc delete deploymentconfig system-postgresql USD oc delete pvc postgresql-data Verify that the pods are scaled up: USD oc wait --for=condition=available apimanager/<apimanager_sample> --timeout=-1s Additional resources PostgreSQL documentation PostgreSQL Downloads | [
"apiVersion: apps.3scale.net/v1alpha1 kind: APIManager metadata: name: <apimanager_sample> spec: system: appSpec: replicas: 0 wildcardDomain: <example.com>",
"oc get deploymentconfig system-app -o jsonpath='{.status.availableReplicas}{\"\\n\"}' 0",
"DB_USER=USD(oc get secret system-database -o jsonpath=\"{.data.DB_USER}\" | base64 --decode) DATABASE_NAME=USD(oc get secret system-database -o jsonpath=\"{.data.URL}\" | base64 --decode | cut -d '/' -f4)",
"oc rsh USD(oc get pods -l 'deploymentConfig=system-postgresql' -o json | jq -r '.items[0].metadata.name') bash -c \"pg_dump -U USDDB_USER -F c USDDATABASE_NAME -f /tmp/<backupfilename>.backup\"",
"oc cp USD(oc get pods -l 'deploymentConfig=system-postgresql' -o json | jq -r '.items[0].metadata.name'):/tmp/<backupfilename>.backup <backupfilename>.backup",
"createdb -U <username> <databasename>",
"pg_restore [--host <databasehostname>] -U <username> -d <databasename> --verbose -F c <backupfilename>.backup",
"postgresql://<username>:<password>@<databasehostname>/<databasename>",
"oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: system-database stringData: DB_PASSWORD: <password> DB_USER: <username> URL: \"postgresql://<username>:<password>@<databasehostname>:<databaseport>/<databasename>\" type: Opaque EOF",
"oc patch apimanager <apimanager_sample> --type=merge --patch '{\"spec\": {\"system\": {\"database\": null, \"appSpec\": {\"replicas\": 1}}, \"externalComponents\": {\"system\": {\"database\": true}}}}'",
"oc delete service system-postgresql oc delete deploymentconfig system-postgresql oc delete pvc postgresql-data",
"oc wait --for=condition=available apimanager/<apimanager_sample> --timeout=-1s"
] | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/migrating_red_hat_3scale_api_management/migrating-embedded-postgresql-to-external-postgresql |
probe::signal.syskill | probe::signal.syskill Name probe::signal.syskill - Sending kill signal to a process Synopsis Values name Name of the probe point sig_name A string representation of the signal sig The specific signal sent to the process pid_name The name of the signal recipient sig_pid The PID of the process receiving the signal | [
"signal.syskill"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-syskill |
Deploying OpenShift Data Foundation on any platform | Deploying OpenShift Data Foundation on any platform Red Hat OpenShift Data Foundation 4.16 Instructions on deploying OpenShift Data Foundation on any platform including virtualized and cloud environments. Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on any platform. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_any_platform/index |
Chapter 5. KVM Paravirtualized (virtio) Drivers | Chapter 5. KVM Paravirtualized (virtio) Drivers Paravirtualized drivers enhance the performance of guests, decreasing guest I/O latency and increasing throughput almost to bare-metal levels. It is recommended to use the paravirtualized drivers for fully virtualized guests running I/O-heavy tasks and applications. Virtio drivers are KVM's paravirtualized device drivers, available for guest virtual machines running on KVM hosts. These drivers are included in the virtio package. The virtio package supports block (storage) devices and network interface controllers. Note PCI devices are limited by the virtualized system architecture. See Chapter 16, Guest Virtual Machine Device Configuration for additional limitations when using assigned devices. 5.1. Using KVM virtio Drivers for Existing Storage Devices You can modify an existing hard disk device attached to the guest to use the virtio driver instead of the virtualized IDE driver. The example shown in this section edits libvirt configuration files. Note that the guest virtual machine does not need to be shut down to perform these steps, however the change will not be applied until the guest is completely shut down and rebooted. Procedure 5.1. Using KVM virtio drivers for existing devices Ensure that you have installed the appropriate driver ( viostor ), before continuing with this procedure. Run the virsh edit guestname command as root to edit the XML configuration file for your device. For example, virsh edit guest1 . The configuration files are located in the /etc/libvirt/qemu/ directory. Below is a file-based block device using the virtualized IDE driver. This is a typical entry for a virtual machine not using the virtio drivers. Change the entry to use the virtio device by modifying the bus= entry to virtio . Note that if the disk was previously IDE, it has a target similar to hda , hdb , or hdc . When changing to bus=virtio the target needs to be changed to vda , vdb , or vdc accordingly. Remove the address tag inside the disk tags. This must be done for this procedure to work. Libvirt will regenerate the address tag appropriately the time the virtual machine is started. Alternatively, virt-manager , virsh attach-disk or virsh attach-interface can add a new device using the virtio drivers. See the libvirt website for more details on using Virtio: http://www.linux-kvm.org/page/Virtio | [
"<disk type='file' device='disk'> <source file='/var/lib/libvirt/images/disk1.img'/> <target dev='hda' bus='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk>",
"<disk type='file' device='disk'> <source file='/var/lib/libvirt/images/disk1.img'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-kvm_para_virtualized_virtio_drivers |
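The procedure above mentions that virsh attach-disk can add a new device that already uses the virtio drivers. As a hedged illustration, the commands below attach a second image to a guest as a virtio disk and then confirm the bus in the generated XML; the guest name, image path, and target device are placeholders, not values from the original procedure.
# Attach a new image to the guest as a virtio block device (placeholders: guest1, disk2.img, vdb).
virsh attach-disk guest1 /var/lib/libvirt/images/disk2.img vdb --targetbus virtio --persistent
# Verify that the new disk entry uses bus='virtio'.
virsh dumpxml guest1 | grep -A3 'disk2.img'
With --persistent the device is recorded in the stored domain XML as well, so the change is kept after the full shutdown and restart that the procedure requires.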
Chapter 10. Configuring Attribute Encryption | Chapter 10. Configuring Attribute Encryption The Directory Server offers a number of mechanisms to secure access to sensitive data, such as access control rules to prevent unauthorized users from reading certain entries or attributes within entries and TLS to protect data from eavesdropping and tampering on untrusted networks. However, if a copy of the server's database files should fall into the hands of an unauthorized person, they could potentially extract sensitive information from those files. Because information in a database is stored in plain text, some sensitive information, such as government identification numbers or passwords, may not be protected enough by standard access control measures. For highly sensitive information, this potential for information loss could present a significant security risk. In order to remove that security risk, Directory Server allows portions of its database to be encrypted. Once encrypted, the data are safe even in the event that an attacker has a copy of the server's database files. Database encryption allows attributes to be encrypted in the database. Both encryption and the encryption cipher are configurable per attribute per back end. When configured, every instance of a particular attribute, even index data, is encrypted for every entry stored in that database. An additional benefit of attribute encryption is that encrypted values can only be sent to clients with a Security Strength Factor (SSF) greater than 1. Note There is one exception to encrypted data: any value which is used as the RDN for an entry is not encrypted within the entry DN. For example, if the uid attribute is encrypted, the value is encrypted in the entry but is displayed in the DN: That would allow someone to discover the encrypted value. Any attribute used within the entry DN cannot be effectively encrypted, since it will always be displayed in the DN. Be aware of what attributes are used to build the DN and design the attribute encryption model accordingly. Indexed attributes may be encrypted, and attribute encryption is fully compatible with eq and pres indexing. The contents of the index files that are normally derived from attribute values are also encrypted to prevent an attacker from recovering part or all of the encrypted data from an analysis of the indexes. Since the server pre-encrypts all index keys before looking up an index for an encrypted attribute, there is some effect on server performance for searches that make use of an encrypted index, but the effect is not serious enough to make using an index no longer worthwhile. 10.1. Encryption Keys In order to use attribute encryption, the server must be configured for TLS and have TLS enabled because attribute encryption uses the server's TLS encryption key and the same PIN input methods as TLS. The PIN must either be entered manually upon server startup or a PIN file must be used. Randomly generated symmetric cipher keys are used to encrypt and decrypt attribute data. A separate key is used for each configured cipher. These keys are wrapped using the public key from the server's TLS certificate, and the resulting wrapped key is stored within the server's configuration files. The effective strength of the attribute encryption is never higher than the strength of the server's TLS key used for wrapping. Without access to the server's private key, it is not possible to recover the symmetric keys from the wrapped copies.
Warning There is no mechanism for recovering a lost key. Therefore, it is especially important to back up the server's certificate database safely. If the server's certificate were lost, it would not be possible to decrypt any encrypted data stored in its database. Warning If the TLS certificate is expiring and needs to be renewed, export the encrypted back end instance before the renewal. Update the certificate, then reimport the exported LDIF file. | [
"dn: uid=jsmith1234 ,ou=People,dc=example,dc=com uid:: Sf04P9nJWGU1qiW9JJCGRg=="
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/creating_and_maintaining_databases-database_encryption |
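As a sketch of how the encryption described in the chapter above is commonly switched on for a single attribute, the following ldapmodify call adds an encrypted-attribute entry under a back end's configuration. Everything in it is an assumption to verify against your own Directory Server 11 configuration reference rather than something stated in the text: the userRoot back end name, the telephoneNumber attribute, the bind DN, and the exact configuration DN and object class.
# Assumed example: enable AES encryption for telephoneNumber in the userRoot back end.
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com <<EOF
dn: cn=telephoneNumber,cn=encrypted attributes,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: nsAttributeEncryption
cn: telephoneNumber
nsEncryptionAlgorithm: AES
EOF
Keep in mind that values stored before encryption was enabled are typically only encrypted after an export and reimport of the database.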
Chapter 2. Configuring acceptors and connectors in network connections | Chapter 2. Configuring acceptors and connectors in network connections There are two types of connections used in AMQ Broker: network connections and in-VM connections. Network connections are used when the two parties are located in different virtual machines, whether on the same server or physically remote. An in-VM connection is used when the client, whether an application or a server, resides on the same virtual machine as the broker. Network connections use Netty . Netty is a high-performance, low-level network library that enables network connections to be configured in several different ways; using Java IO or NIO, TCP sockets, SSL/TLS, or tunneling over HTTP or HTTPS. Netty also allows for a single port to be used for all messaging protocols. A broker will automatically detect which protocol is being used and direct the incoming message to the appropriate handler for further processing. The URI of a network connection determines its type. For example, specifying vm in the URI creates an in-VM connection: <acceptor name="in-vm-example">vm://0</acceptor> Alternatively, specifying tcp in the URI creates a network connection. For example: <acceptor name="network-example">tcp://localhost:61617</acceptor> The sections that follow describe two important configuration elements that are required for network connections and in-VM connections; acceptors and connectors . These sections show how to configure acceptors and connectors for TCP, HTTP, and SSL/TLS network connections, as well as in-VM connections. 2.1. About acceptors Acceptors define how connections are made to the broker. Each acceptor defines the port and protocols that a client can use to make a connection. A simple acceptor configuration is shown below. <acceptors> <acceptor name="example-acceptor">tcp://localhost:61617</acceptor> </acceptors> Each acceptor element that you define in the broker configuration is contained within a single acceptors element. There is no upper limit to the number of acceptors that you can define for a broker. By default, AMQ Broker includes an acceptor for each supported messaging protocol, as shown below: <configuration ...> <core ...> ... <acceptors> ... <!-- Acceptor for every supported protocol --> <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> ... </core> </configuration> 2.2. Configuring acceptors The following example shows how to configure an acceptor. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. 
In the acceptors element, add a new acceptor element. Specify a protocol, and port on the broker. For example: <acceptors> <acceptor name="example-acceptor">tcp://localhost:61617</acceptor> </acceptors> The preceding example defines an acceptor for the TCP protocol. The broker listens on port 61617 for client connections that are using TCP. Append key-value pairs to the URI defined for the acceptor. Use a semicolon ( ; ) to separate multiple key-value pairs. For example: <acceptor name="example-acceptor">tcp://localhost:61617?sslEnabled=true;key-store-path= </path/to/key_store> </acceptor> The configuration now defines an acceptor that uses TLS/SSL and defines the path to the required key store. Additional resources For details on the available configuration options for acceptors and connectors, see Appendix A, Acceptor and Connector Configuration Parameters . 2.3. About connectors While acceptors define how a broker accepts connections, connectors are used by clients to define how they can connect to a broker. A connector is configured on a broker when the broker itself acts as a client. For example: When the broker is bridged to another broker When the broker takes part in a cluster A simple connector configuration is shown below. <connectors> <connector name="example-connector">tcp://localhost:61617</connector> </connectors> 2.4. Configuring connectors The following example shows how to configure a connector. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. In the connectors element, add a new connector element. Specify a protocol, and port on the broker. For example: <connectors> <connector name="example-connector">tcp://localhost:61617</connector> </connectors> The preceding example defines a connector for the TCP protocol. Clients can use the connector configuration to connect to the broker on port 61617 using the TCP protocol. The broker itself can also use this connector for outgoing connections. Append key-value pairs to the URI defined for the connector. Use a semicolon ( ; ) to separate multiple key-value pairs. For example: <connector name="example-connector">tcp://localhost:61616?tcpNoDelay=true</connector> The configuration now defines a connector that sets the value of the tcpNoDelay property to true . Setting the value of this property to true turns off Nagle's algorithm for the connection. Nagle's algorithm is an algorithm used to improve the efficiency of TCP connections by delaying transmission of small data packets and consolidating these into large packets. Additional resources For details on the available configuration options for acceptors and connectors, see Appendix A, Acceptor and Connector Configuration Parameters . To learn how to configure a broker connector in the AMQ Core Protocol JMS client, see Configuring a broker connector in the AMQ Core Protocol JMS documentation. 2.5. Configuring a TCP connection AMQ Broker uses Netty to provide basic, unencrypted, TCP-based connectivity that can be configured to use blocking Java IO or the newer, non-blocking Java NIO. Java NIO is preferred for better scalability with many concurrent connections. However, using the old IO can sometimes give you better latency than NIO when you are less worried about supporting many thousands of concurrent connections. If you are running connections across an untrusted network, you should be aware that a TCP network connection is unencrypted. 
You might want to consider using an SSL or HTTPS configuration to encrypt messages sent over this connection if security is a priority. See Section 5.1, "Securing connections" for more details. When using a TCP connection, all connections are initiated by the client. The broker does not initiate any connections to the client. This works well with firewall policies that force connections to be initiated from one direction. For TCP connections, the host and the port of the connector URI define the address used for the connection. The following example shows how to configure a TCP connection. Prerequisites You should be familiar with configuring acceptors and connectors. For more information, see: Section 2.2, "Configuring acceptors" Section 2.4, "Configuring connectors" Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a new acceptor or modify an existing one. In the connection URI, specify tcp as the protocol. Include both an IP address or host name and a port on the broker. For example: <acceptors> <acceptor name="tcp-acceptor">tcp://10.10.10.1:61617</acceptor> ... </acceptors> Based on the preceding example, the broker accepts TCP communications from clients connecting to port 61617 at the IP address 10.10.10.1 . (Optional) You can configure a connector in a similar way. For example: <connectors> <connector name="tcp-connector">tcp://10.10.10.2:61617</connector> ... </connectors> The connector in the preceding example is referenced by a client, or even the broker itself, when making a TCP connection to the specified IP and port, 10.10.10.2:61617 . Additional resources For details on the available configuration options for TCP connections, see Appendix A, Acceptor and Connector Configuration Parameters . 2.6. Configuring an HTTP connection HTTP connections tunnel packets over the HTTP protocol and are useful in scenarios where firewalls allow only HTTP traffic. AMQ Broker automatically detects if HTTP is being used, so configuring a network connection for HTTP is the same as configuring a connection for TCP. Prerequisites You should be familiar with configuring acceptors and connectors. For more information, see: Section 2.2, "Configuring acceptors" Section 2.4, "Configuring connectors" Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a new acceptor or modify an existing one. In the connection URI, specify tcp as the protocol. Include both an IP address or host name and a port on the broker. For example: <acceptors> <acceptor name="http-acceptor">tcp://10.10.10.1:80</acceptor> ... </acceptors> Based on the preceding example, the broker accepts HTTP communications from clients connecting to port 80 at the IP address 10.10.10.1 . The broker automatically detects that the HTTP protocol is in use and communicates with the client accordingly. (Optional) You can configure a connector in a similar way. For example: <connectors> <connector name="http-connector">tcp://10.10.10.2:80</connector> ... </connectors> Using the connector shown in the preceding example, a broker creates an outbound HTTP connection on port 80 at the IP address 10.10.10.2 . Additional resources An HTTP connection uses the same configuration parameters as TCP, but it also has some of its own. For details on all of the available configuration options for HTTP connections, see Appendix A, Acceptor and Connector Configuration Parameters . 
For a full working example that shows how to use HTTP, see the http-transport example that is located in the <install_dir> /examples/features/standard/ directory of your broker installation. 2.7. Configuring secure network connections You can secure network connections using TLS/SSL. For more information, see Section 5.1, "Securing connections" . 2.8. Configuring an in-VM connection You can use an in-VM connection when multiple brokers are co-located on the same virtual machine, for example, as part of a high availability (HA) configuration. In-VM connections can also be used by local clients running in the same JVM as the broker. Prerequisites You should be familiar with configuring acceptors and connectors. For more information, see: Section 2.2, "Configuring acceptors" Section 2.4, "Configuring connectors" Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a new acceptor or modify an existing one. In the connection URI, specify vm as the protocol. For example: <acceptors> <acceptor name="in-vm-acceptor">vm://0</acceptor> ... </acceptors> Based on the acceptor in the preceding example, the broker accepts connections from a broker with an ID of 0 . The other broker must be running on the same virtual machine. (Optional) You can configure a connector in a similar way. For example: <connectors> <connector name="in-vm-connector">vm://0</connector> ... </connectors> The connector in the preceding example defines how a client can establish an in-VM connection to a broker with an ID of 0 that is running on the same virtual machine as the client. The client can be an application or another broker. | [
"<acceptor name=\"in-vm-example\">vm://0</acceptor>",
"<acceptor name=\"network-example\">tcp://localhost:61617</acceptor>",
"<acceptors> <acceptor name=\"example-acceptor\">tcp://localhost:61617</acceptor> </acceptors>",
"<configuration ...> <core ...> <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name=\"artemis\">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name=\"hornetq\">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name=\"mqtt\">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> </core> </configuration>",
"<acceptors> <acceptor name=\"example-acceptor\">tcp://localhost:61617</acceptor> </acceptors>",
"<acceptor name=\"example-acceptor\">tcp://localhost:61617?sslEnabled=true;key-store-path= </path/to/key_store> </acceptor>",
"<connectors> <connector name=\"example-connector\">tcp://localhost:61617</connector> </connectors>",
"<connectors> <connector name=\"example-connector\">tcp://localhost:61617</connector> </connectors>",
"<connector name=\"example-connector\">tcp://localhost:61616?tcpNoDelay=true</connector>",
"<acceptors> <acceptor name=\"tcp-acceptor\">tcp://10.10.10.1:61617</acceptor> </acceptors>",
"<connectors> <connector name=\"tcp-connector\">tcp://10.10.10.2:61617</connector> </connectors>",
"<acceptors> <acceptor name=\"http-acceptor\">tcp://10.10.10.1:80</acceptor> </acceptors>",
"<connectors> <connector name=\"http-connector\">tcp://10.10.10.2:80</connector> </connectors>",
"<acceptors> <acceptor name=\"in-vm-acceptor\">vm://0</acceptor> </acceptors>",
"<connectors> <connector name=\"in-vm-connector\">vm://0</connector> </connectors>"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/configuring_amq_broker/assembly-br-configuring-acceptors-and-connectors-network-connections_configuring |
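Pulling together parameters that already appear in the examples above, a single acceptor URI can combine a protocol restriction with TLS by chaining key-value pairs with semicolons. The port, keystore path, and the choice of AMQP below are illustrative assumptions, not values required by the chapter.
<!-- Hypothetical acceptor that accepts only AMQP clients over TLS on port 5671. -->
<acceptors>
    <acceptor name="secure-amqp-acceptor">tcp://0.0.0.0:5671?protocols=AMQP;sslEnabled=true;key-store-path=/path/to/key_store</acceptor>
</acceptors>
A connector pointing at the same host and port can be defined in the same way if the broker itself needs to open outgoing connections to this acceptor.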
Chapter 11. Security | Chapter 11. Security TLS 1.2 support added to all system components With the addition of TLS 1.2 support to the GnuTLS component, Red Hat Enterprise Linux 6 offers complete support for TLS 1.2 in the shipped security libraries: OpenSSL , NSS , and GnuTLS . Several modern standards such as PCI-DSS v3.1 recommend the latest TLS protocol, which is currently TLS 1.2. This addition allows you to use Red Hat Enterprise Linux 6 with future revisions of security standards, which may require TLS 1.2 support. For more information about the cryptographic changes in Red Hat Enterprise Linux 6, see this article on the Red Hat Customer Portal: https://access.redhat.com/blogs/766093/posts/2787271 . (BZ#1339222) OpenSCAP 1.2.13 is NIST certified OpenSCAP 1.2.13 has been certified by the National Institute of Standards and Technology's (NIST) Security Content Automation Protocol (SCAP) 1.2 in the Authenticated Configuration Scanner category with the Common Vulnerabilities and Exposure (CVE) option. OpenSCAP provides a library that can parse and evaluate each component of the SCAP standard. This makes creating new SCAP tools convenient. Also, OpenSCAP offers a multi-purpose tool designed to format content into documents or scan a system based on this content. (BZ#1364207) vsftpd now uses TLS 1.2 by default Users of the Very Secure File Transfer Protocol (FTP) daemon ( vsftpd ) can select a specific version of the TLS protocol, up to 1.2. TLS 1.2 has been enabled by default to bring the security of vsftpd to the same level as the vsftpd package in Red Hat Enterprise Linux 7. New default ciphers specific to TLS 1.2 have been added: ECDHE-RSA-AES256-GCM-SHA384 and ECDHE-ECDSA-AES256-GCM-SHA384 . These changes do not break existing configurations. (BZ#1350724) auditd now supports incremental_async The audit daemon now supports a new flush technique called incremental_async . This new mode significantly improves the audit daemon's logging performance while maintaining short flush intervals for security. (BZ#1369249) scap-security-guide now supports ComputeNode The scap-security-guide project now supports scanning of the ComputeNode variant of Red Hat Enterprise Linux, and the scap-security-guide package is also distributed in the relevant channel. (BZ# 1311491 ) rsyslog7 now enables TLS 1.2 With this update, the rsyslog7 multi-threaded syslog daemon explicitly enables TLS 1.2 in the GnuTLS component. (BZ#1323199) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/new_features_security
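For the incremental_async flush technique described in the auditd note above, the setting is made in auditd.conf together with a flush frequency. Only the flush keyword and the incremental_async value come from the release note; the freq value and the restart step below are illustrative assumptions.
# /etc/audit/auditd.conf (excerpt)
flush = incremental_async
freq = 50    # number of records written before an explicit flush; value chosen arbitrarily
# Apply the change on Red Hat Enterprise Linux 6:
service auditd restart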
Chapter 12. Managing persistent volume claims | Chapter 12. Managing persistent volume claims Important Expanding PVCs is not supported for PVCs backed by OpenShift Data Foundation. 12.1. Configuring application pods to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage StorageClasses to view default storage classes. Procedure Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project for the application pod. Click Create Persistent Volume Claim . Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name , for example, myclaim . Select the required Access Mode . Note The Access Mode , Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce ( RWO ), select the required Volume mode . The default volume mode is Filesystem . Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods . Create a new application pod. Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod. For example: For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs . Search for the required deployment config associated with the application pod. Click on its Action menu (...) Edit Deployment Config . Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod and click Save . For example: Verify that the new configuration is being used. Click Workloads Pods . Set the Project for the application pod. Verify that the application pod appears with a status of Running . Click the application pod name to view pod details. Scroll down to Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim . 12.2. Viewing Persistent Volume Claim request status Use this procedure to view the status of a PVC request. Prerequisites Administrator access to OpenShift Data Foundation. Procedure Log in to OpenShift Web Console. Click Storage Persistent Volume Claims Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details. 12.3. Reviewing Persistent Volume Claim request events Use this procedure to review and address Persistent Volume Claim (PVC) request events. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Storage systems tab, select the storage system and then click Overview Block and File . Locate the Inventory card to see the number of PVCs with errors. Click Storage Persistent Volume Claims Search for the required PVC using the Filter textbox. 
Click on the PVC name and navigate to Events. Address the events as required or as directed. 12.4. Dynamic provisioning 12.4.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning or both provisioning types. 12.4.2. Dynamic provisioning in OpenShift Data Foundation Red Hat OpenShift Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block . Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem . Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem . Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS, NFS, and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node. The choice of which driver (RBD or CephFS) to use is determined by the entry in the storageclass.yaml file. 12.4.3.
Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources (storage type, provisioner plug-in name, notes):
OpenStack Cinder: kubernetes.io/cinder
AWS Elastic Block Store (EBS): kubernetes.io/aws-ebs. For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster.
AWS Elastic File System (EFS): Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in.
Azure Disk: kubernetes.io/azure-disk
Azure File: kubernetes.io/azure-file. The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys.
GCE Persistent Disk (gcePD): kubernetes.io/gce-pd. In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists.
VMware vSphere: kubernetes.io/vsphere-volume
Red Hat Virtualization: csi.ovirt.org
Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. | [
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/managing-persistent-volume-claims_osp |
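The console steps in the chapter above map directly onto a PVC manifest that can be applied with the oc client. The storage class name, namespace, and 10Gi size below are assumptions that are not given in the text, so substitute the values that apply to your cluster; only the claim name myclaim, the access mode, and the volume mode echo the procedure itself.
# pvc.yaml - hypothetical PVC equivalent of the console procedure
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce          # RWO, as discussed for RBD-backed volumes
  volumeMode: Filesystem     # the default volume mode in the procedure
  resources:
    requests:
      storage: 10Gi          # size is an assumption
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed OpenShift Data Foundation storage class
Create it in the application's project with oc create -f pvc.yaml -n <project> and wait for the claim to reach the Bound status, exactly as in the console flow.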
Chapter 9. IPI certification tests | Chapter 9. IPI certification tests The IPI test validates whether the Host Under Test (HUT) can be controlled, accessed, deployed, and rebooted remotely by using the Red Hat OpenShift Container Platform ironic service and the HUT's bare metal management controller (BMC). The test runs in a container accessible by the node that is running the ironic service. The following RHEL and RHOCP combinations are supported: RHEL 9.2 or 9.4 with RHOCP 4.13, 4.14, or 4.15 RHEL 8 with RHOCP 4.12 The test plan consists of the following tests: 9.1. Self check test The self-check test confirms that all required software packages for certification are installed and unaltered, ensuring the test environment is ready for certification. Certification packages must not be modified for testing or any other purpose. Success Criteria The test environment includes all necessary certification packages and the packages have not been modified. 9.2. IPI test The IPI test automates power management of the server from the OpenShift console through the ironic service to the BMC. The test runs the following subtest: 9.2.1. Check and update power state subtest The check_update_power_state subtest first checks if the HUT is powered on, and then restarts the HUT. The subtest monitors the status of the HUT node every 15 seconds, for a maximum of 15 minutes. Success Criteria The HUT restarts successfully in less than 15 minutes. | null | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openshift_container_platform_hardware_bare_metal_certification_policy_guide/assembly-ipi-tests_rhosp-bm-pol-assisted-installer-tests |
Chapter 1. Architecture | Chapter 1. Architecture Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization) combines compute, storage, networking, and management capabilities in one deployment. RHHI for Virtualization is deployed across a number of physical machines to create a discrete cluster or pod using Red Hat Gluster Storage 3.5 and Red Hat Virtualization 4.4. The dominant use case for this deployment is in remote office branch office (ROBO) environments, where a remote office synchronizes data to a central data center on a regular basis, but does not require connectivity to the central data center to function. The following diagram shows the basic architecture of a single cluster, deployed across three physical machines. 1.1. Understanding VDO As of Red Hat Hyperconverged Infrastructure for Virtualization 1.6, you can configure a Virtual Data Optimizer (VDO) layer to provide data reduction and deduplication for your storage. VDO is supported only when enabled on new installations at deployment time, and cannot be enabled on deployments upgraded from earlier versions of RHHI for Virtualization. VDO performs following types of data reduction to reduce the space required by data. Deduplication Eliminates zero and duplicate data blocks. VDO finds duplicated data using the UDS (Universal Deduplication Service) Kernel Module. Instead of writing the duplicated data, VDO records it as a reference to the original block. The logical block address is mapped to the physical block address by VDO. Compression Reduces the size of the data by packing non-duplicate blocks together into fixed length (4 KB) blocks before writing to disk. This helps to speed up the performance for reading data from storage. At best, data can be reduced to 15% of its original size. Because reducing data has additional processing costs, enabling compression and deduplication reduces write performance. As a result, VDO is not recommended for performance sensitive workloads. Red Hat strongly recommends that you test and verify that your workload achieves the required level of performance with VDO enabled before deploying VDO in production, especially if you are using it in combination with other technology that reduces performance, such as disk encryption. If you plan to use RAID hardware in the layer below VDO, Red Hat strongly recommends using SSD/NVMe disks to avoid performance issues. If there is no use of the RAID hardware layer below VDO, spinning disks can be used. | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/architecture |
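To check whether the deduplication and compression described above are paying off on a deployed volume, the VDO command-line tools report usage and savings. These are general VDO utilities rather than anything specific to RHHI for Virtualization, and the volume name below is a placeholder.
# Show physical usage and the space-savings percentage for all VDO volumes on the host.
vdostats --human-readable
# Show detailed status for one volume (placeholder name vdo_sdb).
vdo status --name vdo_sdb
Reviewing the savings figure during the workload testing that the section recommends is a practical way to confirm that enabling VDO is worth its write-performance cost.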
Installing on Alibaba | Installing on Alibaba OpenShift Container Platform 4.15 Installing OpenShift Container Platform on Alibaba Cloud Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_alibaba/index |
Chapter 54. orchestration | Chapter 54. orchestration This chapter describes the commands under the orchestration command. 54.1. orchestration build info Retrieve build information. Usage: Table 54.1. Command arguments Value Summary -h, --help Show this help message and exit Table 54.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 54.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 54.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 54.2. orchestration resource type list List resource types. Usage: Table 54.6. Command arguments Value Summary -h, --help Show this help message and exit --filter <key=value> Filter parameters to apply on returned resource types. This can be specified multiple times. It can be any of name, version or support_status --long Show resource types with corresponding description of each resource type. Table 54.7. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 54.8. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 54.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.10. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 54.3. orchestration resource type show Show details and optionally generate a template for a resource type. Usage: Table 54.11. Positional arguments Value Summary <resource-type> Resource type to show details for Table 54.12. Command arguments Value Summary -h, --help Show this help message and exit --template-type <template-type> Optional template type to generate, hot or cfn --long Show resource type with corresponding description. Table 54.13. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 54.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.15. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 54.16. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 54.4. orchestration service list List the Heat engines. Usage: Table 54.17. Command arguments Value Summary -h, --help Show this help message and exit Table 54.18. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 54.19. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 54.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 54.5. orchestration template function list List the available functions. Usage: Table 54.22. Positional arguments Value Summary <template-version> Template version to get the functions for Table 54.23. Command arguments Value Summary -h, --help Show this help message and exit --with_conditions Show condition functions for template. Table 54.24. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 54.25. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 54.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 54.6. orchestration template validate Validate a template Usage: Table 54.28. Command arguments Value Summary -h, --help Show this help message and exit -e <environment>, --environment <environment> Path to the environment. can be specified multiple times --show-nested Resolve parameters from nested templates as well --parameter <key=value> Parameter values used to create the stack. this can be specified multiple times --ignore-errors <error1,error2,... 
> List of heat errors to ignore -t <template>, --template <template> Path to the template Table 54.29. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 54.30. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.31. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 54.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 54.7. orchestration template version list List the available template versions. Usage: Table 54.33. Command arguments Value Summary -h, --help Show this help message and exit Table 54.34. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 54.35. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 54.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.37. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack orchestration build info [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]",
"openstack orchestration resource type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--filter <key=value>] [--long]",
"openstack orchestration resource type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--template-type <template-type>] [--long] <resource-type>",
"openstack orchestration service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]",
"openstack orchestration template function list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--with_conditions] <template-version>",
"openstack orchestration template validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-e <environment>] [--show-nested] [--parameter <key=value>] [--ignore-errors <error1,error2,...>] -t <template>",
"openstack orchestration template version list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/orchestration |
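As a short usage illustration for the synopses listed above, the commands below exercise three of the subcommands with only the options shown in their tables; the template file name, the parameter, and the OS::Nova::Server resource type are placeholder choices rather than values from the reference.
# Validate a local template, overriding one parameter (placeholder values).
openstack orchestration template validate -t my_stack.yaml --parameter flavor=m1.small
# Generate a HOT-format template for a resource type.
openstack orchestration resource type show --template-type hot OS::Nova::Server
# List the Heat engines and their status.
openstack orchestration service list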
Chapter 8. Command Reference | Chapter 8. Command Reference Review manual pages for Data Grid CLI commands. Tip Use help command to access manual pages directly from your CLI session. For example, to view the manual page for the get command do the following: 8.1. ADD(1) 8.1.1. NAME add - increments and decrements counters with arbitrary values. 8.1.2. SYNOPSIS add ['OPTIONS'] ['COUNTER_NAME'] 8.1.3. OPTIONS --delta ='nnn' Sets a delta to increment or decrement the counter value. Defaults to 1 . -q, --quiet ='[true|false]' Hides return values for strong counters. The default is false . 8.1.4. EXAMPLES add --delta=10 cnt_a Increments the value of cnt_a by 10 . add --delta=-5 cnt_a Decrements the value of cnt_a by 5 . 8.1.5. SEE ALSO cas(1), reset(1) 8.2. ALIAS(1) 8.2.1. NAME alias - creates or displays aliases. 8.2.2. SYNOPSIS alias ['ALIAS-NAME'='COMMAND'] 8.2.3. EXAMPLES alias q=quit Creates q as an alias for the quit command. alias Lists all defined aliases. 8.2.4. SEE ALSO config(1), unalias(1) 8.3. ALTER(1) 8.3.1. NAME alter - modifies the configuration of caches on Data Grid Server. 8.3.2. SYNOPSIS alter cache ['OPTIONS'] CACHE_NAME You can modify a cache with the alter command only if the changes are compatible with the existing configuration. For example you cannot use a replicated cache configuration to modify a distributed cache. Likewise if you create a cache configuration with a specific attribute, you cannot modify the configuration to use a different attribute instead. For example, attempting to modify cache configuration by specifying a value for the max-count attribute results in invalid configuration if the max-size is already set. 8.3.3. ALTER CACHE OPTIONS -f, --file ='FILE' Specifies a configuration file in XML, JSON or YAML format that modifies an existing configuration. Mutually exclusive with the --attribute option. --attribute ='ATTRIBUTE' Specifies an attribute to modify in an existing configuration. Press the tab key to display a list of attributes. Must be used in combination with the --value option. Mutually exclusive with the --file option. --value ='VALUE' Specifies the new value for a configuration attribute. Must be used in combination with the --attribute option. 8.3.4. EXAMPLES alter cache mycache --file=/path/to/mycache.json Modifies the configuration of a cache named mycache with the mycache.json file. alter cache mycache --attribute=clustering.remote-timeout --value=5000 Modifies the configuration of a cache named mycache so that the clustering.remote-timeout attribute has a value of '5000'. 8.3.5. SEE ALSO create(1), drop(1) 8.4. AVAILABILITY(1) 8.4.1. NAME availability - manage availability of clustered caches in network partitions. 8.4.2. SYNOPSIS availability ['OPTIONS'] ['CACHE_NAME'] 8.4.3. OPTIONS --mode ='[AVAILABLE|DEGRADED_MODE]' Sets cache availability to AVAILABLE or DEGRADED_MODE when using either the DENY_READ_WRITES or ALLOW_READS partition handling strategy. AVAILABLE makes caches available to all nodes in a network partition. DEGRADED_MODE prevents read and write operations on caches when network partitions occur. 8.4.4. EXAMPLES availability cache1 Gets the current availability of the cache 'cache1'. availability --mode=AVAILABLE cache1 Sets the availability of the cache 'cache1' to AVAILABLE. 8.5. BACKUP(1) 8.5.1. NAME backup - manage container backup creation and restoration. 8.5.2. 
SYNOPSIS backup create ['OPTIONS'] backup delete ['OPTIONS'] BACKUP_NAME backup get ['OPTIONS'] BACKUP_NAME backup ls backup restore ['OPTIONS'] BACKUP_PATH 8.5.3. BACKUP CREATE OPTIONS -d, --dir ='PATH' Specifies a directory on the server to create and store the backup archive. -n, --name ='NAME' Defines a name for the backup archive. --caches ='cache1,cache2,... ' Lists caches to back up. Use '*' to back up all caches. --templates ='template1,template2,... ' Lists cache templates to back up. Use '*' to back up all templates. --counters ='counter1,counter2,... ' Lists of counters to back up. Use '*' to back up all counters. --proto-schemas ='schema1,schema2,... ' Lists Protobuf schemas to back up. Use '*' to back up all schemas. --tasks ='task1,task2,... ' Lists server tasks to back up. Use '*' to back up all tasks. 8.5.4. BACKUP GET OPTIONS --no-content Does not download content. The command returns only when the backup operation is complete. 8.5.5. BACKUP RESTORE OPTIONS -u, --upload Defines the path to a local backup archive that is uploaded to the server. -n, --name ='NAME' Defines a name for the restore request. --caches ='cache1,cache2,... ' Lists caches to restore. Use '*' to restore all caches from the backup archive. --templates ='template1,template2,... ' Lists cache templates to restore. Use '*' to restore all templates from the backup archive. --counters ='counter1,counter2,... ' Lists counters to restore. Use '*' to restore all counters from the backup archive. --proto-schemas ='schema1,schema2,... ' Lists Protobuf schemas to restore. Use '*' to restore all schemas from the backup archive. --tasks ='task1,task2,... ' Lists server tasks to restore. Use '*' to restore all tasks from the backup archive. 8.5.6. EXAMPLES backup create -n example-backup Initiates a backup of all container content with name example-backup . backup create -d /some/server/dir Initiates a backup of all container content and stores it on the server at path /some/server/dir . backup create --caches=* --templates=* Initiates a backup that contains only cache and cache configuration resources. backup create --proto-schemas=schema1,schema2 Initiates a backup that contains the named schema resources only. backup ls Lists all backups available on the server. backup get example-backup Downloads the example-backup archive from the server. If the backup operation is in progress, the command waits for it to complete. backup restore /some/path/on/the/server Restores all content from a backup archive on the server. backup restore -u /some/local/path Restores all content from a local backup archive that is uploaded to the server. backup restore /some/path/on/the/server --caches=* Restores only cache content from a backup archive on the server. backup restore /some/path/on/the/server --proto-schemas=schema1,schema2 Restores only the named schema resources from a backup archive on the server. backup delete example-backup Deletes the example-backup archive from the server. 8.5.7. SEE ALSO drop(1) 8.6. BENCHMARK(1) 8.6.1. NAME benchmark - runs a performance benchmark against a cache. You can run performance benchmarks for the following HTTP and Hot Rod protocols: http , https , hotrod , and hotrods . You specify the protocol for the benchmark with a URI. If you do not specify a protocol, the benchmark uses the URI of the current CLI connection. Benchmarks for Hot Rod URIs connect to the entire cluster. For HTTP URIs, benchmarks connect to a single node only. Benchmarks test performance against an existing cache. 
Before you run a benchmark, you should create a cache with the capabilities you want to measure. For example, if you want to evaluate the performance of cross-site replication, you should create a cache that has backup locations. If you want to test the performance of persistence, create a cache that uses an appropriate cache store. 8.6.2. SYNOPSIS benchmark ['OPTIONS'] [ uri ] 8.6.3. BENCHMARK OPTIONS -t, --threads ='num' Specifies the number of threads to create. Defaults to 10 . --cache ='cache' Names the cache against which the benchmark is performed. Defaults to benchmark . You must create the cache before running the benchmark if it does not already exist. --key-size ='num' Sets the size, in bytes, of the key. Defaults to 16 bytes. --value-size ='num' Sets the size, in bytes, of the value. Defaults to 1000 bytes. --keyset-size ='num' Defines the size, in bytes, of the test key set. Defaults to 1000 . --verbosity =['SILENT', 'NORMAL', 'EXTRA'] Specifies the verbosity level of the output. Possible values, from least to most verbose, are SILENT , NORMAL , and EXTRA . The default is NORMAL . -c, --count ='num' Specifies how many measurement iterations to perform. Defaults to 5 . --time ='time' Sets the amount of time, in seconds, that each iteration takes. Defaults to 10 . --warmup-count ='num' Specifies how many warmup iterations to perform. Defaults to 5 . --warmup-time ='time' Sets the amount of time, in seconds, that each warmup iteration takes. Defaults to 1 . --mode ='mode' Specifies the benchmark mode. Possible values are Throughput , AverageTime , SampleTime , SingleShotTime , and All . The default is Throughput . --time-unit ='unit' Specifies the time unit for results in the benchmark report. Possible values are NANOSECONDS , MICROSECONDS , MILLISECONDS , and SECONDS . The default is MICROSECONDS . 8.6.4. EXAMPLES benchmark hotrod://localhost:11222 Performs a benchmark test with the Hot Rod protocol. benchmark --value-size=10000 --cache=largecache hotrod://localhost:11222 Performs a benchmark test with the Hot Rod protocol against the largecache cache using test values that are 10000 bytes in size. benchmark --mode=All --threads=20 https://user:password@server:11222 Performs a benchmark test with the HTTPS protocol using 20 threads and includes all modes in the report. 8.7. CACHE(1) 8.7.1. NAME cache - selects the default cache for subsequent commands. 8.7.2. SYNOPSIS cache ['CACHE_NAME'] 8.7.3. EXAMPLE cache mycache Selects mycache and is the same as navigating the resource tree using cd caches/mycache . 8.7.4. SEE ALSO cd(1), clear(1), container(1), get(1), put(1), remove(1) 8.8. CAS(1) 8.8.1. NAME cas - performs 'compare-and-swap' operations on strong counters. 8.8.2. SYNOPSIS cas ['OPTIONS'] ['COUNTER_NAME'] 8.8.3. OPTIONS --expect ='nnn' Specifies the expected value of the counter. --value ='nnn' Sets a new value for the counter. -q, --quiet ='[true|false]' Hides return values. The default is false. 8.8.4. EXAMPLE cas --expect=10 --value=20 cnt_a Sets the value of cnt_a to 20 only if the current value is 10 8.8.5. SEE ALSO add(1), cas(1), reset(1) 8.9. CD(1) 8.9.1. NAME cd - navigates the server resource tree. 8.9.2. DESCRIPTION PATH can be absolute or relative to the current resource. ../ specifies parent resources. 8.9.3. SYNOPSIS cd ['PATH'] 8.9.4. EXAMPLE cd caches Changes to the caches path in the resource tree. 8.9.5. SEE ALSO cache(1), ls(1), container(1) 8.10. CLEARCACHE(1) 8.10.1. NAME clearcache - removes all entries from a cache. 8.10.2.
SYNOPSIS clearcache ['CACHE_NAME'] 8.10.3. EXAMPLES clearcache mycache Removes all entries from mycache . 8.10.4. SEE ALSO cache(1), drop(1), remove(1) 8.11. CONFIG(1) 8.11.1. NAME config - manages CLI configuration properties. 8.11.2. SYNOPSIS config config set 'name' 'value' config get 'name' config convert --outputFormat=[xml|json|yaml] [-o outputFile] [inputFile] 8.11.3. DESCRIPTION Manage (list, set, get) CLI configuration properties and provide configuration conversion between the different formats (XML, JSON, YAML) 8.11.4. COMMAND SYNOPSIS config Lists all configuration properties that are set. config set 'name' ['value'] Sets the value of a specific property. If you do not specify a value, the property is not set. config get 'name' Retrieves the value of a specific property. config reset Resets all properties to their default values. config convert --format=[xml|json|yaml] [-o outputFile] [inputFile] Converts a configuration file to a different format. 8.11.5. COMMON OPTIONS These options apply to all commands: -h, --help Displays a help page for the command or sub-command. 8.11.6. CONVERT OPTIONS The following options apply to the convert command: -f, --format ='xml|json|yaml' Specifies the format for the conversion. -o, --output ='path' Specifies the path to the output file. Uses standard output ( stdout ) if you do not specify a path. 8.11.7. PROPERTIES autoconnect-url Specifies the URL to which the CLI automatically connects on startup. autoexec Specifies the path of a CLI batch file to execute on startup. trustall Specifies whether to trust all server certificates. Values are false (default) and true . truststore Defines the path to a keystore that contains a certificate chain that verifies server identity. truststore-password Specifies a password to access the truststore. keystore Defines a path to the keystore, which contains a certificate. The certificate identifies the client. Use the keystore property when the server requires client certificate authentication. keystore-password Specifies a password to access the keystore. 8.11.8. EXAMPLES config set autoconnect-url http://192.0.2.0:11222 Connects to a server at a custom IP address when you start the CLI. config get autoconnect-url Returns the value for the autoconnect-url configuration property. config set autoexec /path/to/mybatchfile Runs a batch file named "mybatchfile" when you start the CLI. config set trustall true Trusts all server certificates. config set truststore /home/user/my-trust-store.jks Specifies the path of a keystore named "my-trust-store.jks". config set truststore-password secret Sets the keystore password, if required. config convert -f yaml -o infinispan.yaml infinispan.xml Converts the infinispan.xml file to YAML and writes the output to the infinispan.yaml file. config convert -f json Converts the configuration from standard input to JSON, and writes the output to standard output. 8.11.9. SEE ALSO alias(1), unalias(1) 8.12. CONNECT(1) 8.12.1. NAME connect - connects to running Data Grid servers. 8.12.2. DESCRIPTION Defaults to http://localhost:11222 and prompts for credentials if authentication is required. 8.12.3. SYNOPSIS connect ['OPTIONS'] ['SERVER_LOCATION'] 8.12.4. OPTIONS -u, --username ='USERNAME' Specifies a username to authenticate with Data Grid servers. -p, --password ='PASSWORD' Specifies passwords. -t, --truststore ='PATH' Specifies a truststore. -s, --truststore-password ='PASSWORD' Specifies a password for the truststore. 
-k, --keystore ='PATH' Specifies a keystore that contains a client certificate. -w, --keystore-password ='PASSWORD' Specifies a password for the keystore. --hostname-verifier ='REGEX' A regular expression that matches hostnames during a connection to an SSL/TLS-enabled server. --trustall Trusts all certificates. --context-path ='PATH' The context path for the server REST connector. If unspecified, defaults to /rest . 8.12.5. EXAMPLE connect 127.0.0.1:11322 -u test -p changeme Connects to a locally running server using a port offset of 100 and example credentials. 8.12.6. SEE ALSO disconnect(1) 8.13. CONTAINER(1) 8.13.1. NAME container - selects the container for running subsequent commands. 8.13.2. SYNOPSIS container ['CONTAINER_NAME'] 8.13.3. EXAMPLE container default Selects the default container and is the same as navigating the resource tree using cd containers/default . 8.13.4. SEE ALSO cd(1), clear(1), container(1), get(1), put(1), remove(1) 8.14. COUNTER(1) 8.14.1. NAME counter - selects the default counter for subsequent commands. 8.14.2. SYNOPSIS counter ['COUNTER_NAME'] 8.14.3. EXAMPLE counter cnt_a Selects cnt_a and is the same as navigating the resource tree using cd counters/cnt_a . 8.14.4. SEE ALSO add(1), cas(1) 8.15. CREATE(1) 8.15.1. NAME create - creates caches and counters on Data Grid servers. 8.15.2. SYNOPSIS create cache ['OPTIONS'] CACHE_NAME create counter ['OPTIONS'] COUNTER_NAME 8.15.3. CREATE CACHE OPTIONS -f, --file ='FILE' Specifies a configuration file in XML, JSON or YAML format. -t, --template ='TEMPLATE' Specifies a configuration template. Use tab autocompletion to see available templates. -v, --volatile ='[true|false]' Specifies whether the cache is persistent or volatile. The default is false. 8.15.4. CREATE COUNTER OPTIONS -t, --type ='[weak|strong]' Specifies if the counter is weak or strong. -s, --storage ='[PERSISTENT|VOLATILE]' Specifies whether the counter is persistent or volatile. -c, --concurrency-level ='nnn' Sets the concurrency level of the counter. -i, --initial-value ='nnn' Sets the initial value of the counter. -l, --lower-bound ='nnn' Sets the lower bound of a strong counter. -u, --upper-bound ='nnn' Sets the upper bound of a strong counter. 8.15.5. EXAMPLES create cache --template=org.infinispan.DIST_SYNC mycache Creates a cache named mycache from the DIST_SYNC template. create counter --initial-value=3 --storage=PERSISTENT --type=strong cnt_a Creates a strong counter named cnt_a . 8.15.6. SEE ALSO drop(1) 8.16. CREDENTIALS(1) 8.16.1. NAME credentials - manages keystores that contain Data Grid Server credentials 8.16.2. SYNOPSIS credentials ls credentials add 'alias' credentials remove 'alias' credentials mask -i iterations -s salt secret 8.16.3. DESCRIPTION List, create, and remove credentials inside a keystore and mask keystore passwords. By default, commands manage the credentials.pfx keystore in the server configuration directory. 8.16.4. SYNOPSIS credentials ls Lists credential aliases stored in the keystore. Add a credential credentials add 'alias' Adds an alias and corresponding credential to the keystore. Remove a credential credentials remove 'alias' Deletes an alias and corresponding credential from the keystore. credentials mask -i iterations -s salt 'secret' Obscure the keystore password with a mask for additional security. 8.16.5. OPTIONS -h, --help Prints command help. -s, --server-root ='path-to-server-root' Specifies the path to the server root directory. Defaults to server . 
--path ='credentials.pfx' Specifies the path to the credential keystore. Defaults to the server configuration directory, server/conf . -p, --password ='password' Specifies a password for the credential keystore. -t, --type ='PKCS12' Specifies the type of keystore that contains credentials. Supported types are PKCS12 or JCEKS . Defaults to PKCS12 . 8.16.6. CREDENTIALS ADD OPTIONS -c, --credential ='credential' Specifies the credential to store. 8.16.7. CREDENTIALS MASK OPTIONS -i, --iteration ='n' Sets the number of iterations. -s, --salt ='salt' Sets the salt and must be of length 8. 8.16.8. EXAMPLES credentials add dbpassword -c changeme -p "secret1234!" Creates a new default credential keystore, if does not already exist, and adds an alias of "dbpassword" for a password of "changeme". This command also sets "secret1234!" as the password for the credential keystore, which must match the password in the server configuration: <clear-text-credential clear-text="secret1234!"/> credentials ls -p "secret1234!" Lists all aliases in the default credential keystore. credentials add ldappassword -t JCEKS -p "secret1234!" Creates a credential keystore in JCEKS format and adds an alias "ldappassword". This command prompts you to specify the password that corresponds to the alias. credentials mask "secret1234!" -i 100 -s pepper99 Creates a masked representation of the credential "secret1234!" using 100 iterations using the string pepper99 as salt. 8.17. DESCRIBE(1) 8.17.1. NAME describe - displays information about resources. 8.17.2. SYNOPSIS describe ['PATH'] 8.17.3. EXAMPLES describe //containers/default Displays information about the default container. describe //containers/default/caches/mycache Displays information about the mycache cache. describe //containers/default/caches/mycache/k1 Displays information about the k1 key. describe //containers/default/counters/cnt1 Displays information about the cnt1 counter. 8.17.4. SEE ALSO cd(1), ls(1) 8.18. DISCONNECT(1) 8.18.1. NAME disconnect - ends CLI sessions with Data Grid servers. 8.18.2. SYNOPSIS disconnect 8.18.3. EXAMPLE disconnect Ends the current CLI session. 8.18.4. SEE ALSO connect(1) 8.19. DROP(1) 8.19.1. NAME drop - deletes caches and counters. 8.19.2. SYNOPSIS drop cache CACHE_NAME drop counter COUNTER_NAME 8.19.3. EXAMPLES drop cache mycache Deletes the mycache cache. drop counter cnt_a Deletes the cnt_a counter. 8.19.4. SEE ALSO create(1), clearcache(1) 8.20. ENCODING(1) 8.20.1. NAME encoding - displays and sets the encoding for cache entries. 8.20.2. DESCRIPTION Sets a default encoding for put and get operations on a cache. If no argument is specified, the encoding command displays the current encoding. Valid encodings use standard MIME type (IANA media types) naming conventions, such as the following: text/plain application/json application/xml application/octet-stream 8.20.3. SYNOPSIS encoding ['ENCODING'] 8.20.4. EXAMPLE encoding application/json Configures the currently selected cache to encode entries as application/json . 8.20.5. SEE ALSO get(1), put(1) 8.21. GET(1) 8.21.1. NAME get - retrieves entries from a cache. 8.21.2. SYNOPSIS get ['OPTIONS'] KEY 8.21.3. OPTIONS -c, --cache ='NAME' Specifies the cache from which to retrieve entries. Defaults to the currently selected cache. 8.21.4. EXAMPLE get hello -c mycache Retrieves the value of the key named hello from mycache . 8.21.5. SEE ALSO query(1), put(1) 8.22. HELP(1) 8.22.1. NAME help - prints manual pages for commands. 8.22.2. SYNOPSIS help ['COMMAND'] 8.22.3. 
EXAMPLE help get Prints the manual page for the get command. 8.22.4. SEE ALSO version(1) 8.23. INDEX(1) 8.23.1. NAME index - manages cache indexes. 8.23.2. SYNOPSIS index reindex 'cache-name' index clear 'cache-name' index update-schema 'cache-name' index stats 'cache-name' index clear-stats 'cache-name' 8.23.3. EXAMPLES index reindex mycache Reindexes a cache. index clear mycache Clears a cache index. index update-schema mycache Updates the index schema for a cache. index stats mycache Shows indexing and search statistics for a cache. index clear-stats mycache Clears indexing and search statistics for a cache. 8.23.4. SEE ALSO query(1) 8.24. INSTALL(1) 8.24.1. NAME install - download and install artifacts for Data Grid Server. 8.24.2. DESCRIPTION Download and install artifacts to the server/lib directory. You can specify the download location for artifacts as Maven artifact coordinates, a URL, or a local file path. When downloading Maven artifacts, an optional Maven settings.xml file determines the location of the remote and local repositories as well as credentials and proxy configuration. If you download artifacts as zip , tar.gz , or tgz archives, the content is extracted. 8.24.3. SYNOPSIS install 'artifact-1[[|algorithm]|checksum]' ['artifact-2[[|algorithm]|checksum]'... ] 8.24.4. ARTIFACT NAMES Artifact names can be any of the following: Maven coordinates using the groupId:artifactId:version format, for example org.postgresql:postgresql:42.3.1 . HTTP, HTTPS, or FTP URLs Local paths 8.24.5. CHECKSUM VALIDATION You can validate the checksum of an artifact after download. The algorithm defaults to SHA-256 but it can also be MD-5 , SHA-1 , SHA-256 , SHA-384 , or `SHA-512'. 8.24.6. PATCH LIST OPTIONS --server-home ='path/to/server' Sets the path of the server installation. --server-root ='server' Sets the server root directory relative to the server home. *--maven-settings='USDHOME/.m2/maven-settings.xml' Sets the path of a Maven settings.xml file and uses the default location, if not specified. -o, --overwrite Forces overwriting of artifacts in the server/lib directory. By default artifacts are not overwritten, which causes the installation to fail if an artifact already exists. -v, --verbose Shows verbose information about artifact downloads. -f, --force Forces download of remote artifacts, even if they are already present locally. -r, --retries=num The number of retries in case the downloaded artifacts do not match the supplied checksums. --clean Deletes all the contents of the server/lib directory before downloading artifacts. 8.24.7. EXAMPLES install -o org.postgresql:postgresql:42.3.1 Installs the PostgreSQL JDBC driver JAR and overwrites if it already exists. install https://example.org/artifact.zip Downloads the artifact.zip and extracts the contents. install https://example.org/artifact.zip|52d73f9b3611610ebbb4dca7c2ac1171218eb09891c1faba10f5f54c1d2acc13 Downloads the artifact.zip , verifies its SHA-256 checksum, and extracts the contents. install https://example.org/artifact.zip|MD5|2b48d1871ee26f969d8481db94e103c2 Downloads the artifact.zip , verifies its MD-5 checksum, and extracts the contents. 8.25. LOGGING(1) 8.25.1. NAME logging - inspects and manipulates the Data Grid server runtime logging configuration. 8.25.2. SYNOPSIS logging list-loggers logging list-appenders logging set ['OPTIONS'] [ LOGGER_NAME ] logging remove LOGGER_NAME 8.25.3. LOGGING SET OPTIONS -l, --level ='OFF|TRACE|DEBUG|INFO|WARN|ERROR|ALL' Specifies the logging level for the specific logger. 
-a, --appender ='APPENDER' Specifies an appenders to set on the specific logger. The option can be repeated for multiple appenders. Note calling logging set without a logger name will modify the root logger. 8.25.4. EXAMPLES logging list-loggers Lists all available loggers logging set --level=DEBUG --appenders=FILE org.infinispan Sets the log level for the org.infinispan logger to DEBUG and configures it to use the FILE appender. 8.26. LS(1) 8.26.1. NAME ls - lists resources for the current path or a given path. 8.26.2. SYNOPSIS ls ['PATH'] 8.26.3. OPTIONS -f, --format ='[NAMES|VALUES|FULL]' This option currently only applies when listing caches. NAMES : only show the keys VALUES : show the keys and values FULL : show keys, values and metadata -l This option only applies when listing caches. Shortcut for -f FULL . -p, --pretty-print ='[TABLE|CSV|JSON]' Prints the output using one of the following layouts: TABLE : tabular format. The column sizes are determined by the terminal width. This is the default. CSV : comma-separated values. JSON : JSON format. -m, --max-items ='num' This option only applies when listing caches. The maximum number of items to show. Defaults to -1 (unlimited). 8.26.4. EXAMPLES ls caches Lists the available caches. ls ../ Lists parent resources. ls -l --pretty-print=CSV /containers/default/caches/mycache > mycache.csv Lists the content of a cache, including keys, values and metadata and redirects the contents to a file. 8.26.5. SEE ALSO cd(1) 8.27. MIGRATE(1) 8.27.1. NAME migrate - migrates data from one version of Data Grid to another. 8.27.2. SYNOPSIS migrate cluster connect migrate cluster synchronize migrate cluster disconnect migrate cluster source-connection 8.27.3. DESCRIPTION Use the migrate command to migrate data from one version of Data Grid to another. 8.27.4. COMMAND SYNOPSIS Migrate clusters migrate cluster connect Connects the target cluster to the source cluster. migrate cluster synchronize Synchronize data between the source cluster and the target cluster. migrate cluster disconnect Disconnects the target cluster from the source cluster. migrate cluster source-connection Gets connection configuration of the target cluster. The command will print "Not Found" if the connections hasn't been established. 8.27.5. COMMON OPTIONS These options apply to all commands: -h, --help Displays a help page for the command or sub-command. CLUSTER CONNECT OPTIONS -c, --cache ='name' The name of the cache to disconnect from the source. 8.27.6. CLUSTER CONNECTION OPTIONS -c, --cache ='name' The name of the cache to obtain the connection configuration. 8.28. PATCH(1) 8.28.1. NAME patch - manages server patches. 8.28.2. DESCRIPTION List, describe, install, rollback, and create server patches. Patches are zip archive files that contain artifacts to upgrade servers and resolve issues or add new features. Patches can apply target versions to multiple server installations with different versions. 8.28.3. SYNOPSIS patch ls patch install 'patch-file' patch describe 'patch-file' patch rollback patch create 'patch-file' 'target-server' 'source-server-1' ['source-server-2'... ] 8.28.4. PATCH LIST OPTIONS --server ='path/to/server' Sets the path to a target server outside the current server home directory. -v, --verbose Shows the content of each installed patch, including information about individual files. 8.28.5. PATCH INSTALL OPTIONS --dry-run Shows the operations that the patch peforms without applying any changes. 
--server ='path/to/server' Sets the path to a target server outside the current server home directory. 8.28.6. PATCH DESCRIBE OPTIONS -v, --verbose Shows the content of the patch, including information about individual files 8.28.7. PATCH ROLLBACK OPTIONS --dry-run Shows the operations that the patch peforms without applying any changes. --server ='path/to/server' Sets the path to a target server outside the current server home directory. 8.28.8. PATCH CREATE OPTIONS -q, --qualifier ='name' Specifies a descriptive qualifier string for the patch; for example, 'one-off for issue nnnn'. 8.28.9. EXAMPLES patch ls Lists the patches currently installed on a server in order of installation. patch install mypatch.zip Installs "mypatch.zip" on a server in the current directory. patch install mypatch.zip --server=/path/to/server/home Installs "mypatch.zip" on a server in a different directory. patch describe mypatch.zip Displays the target version and list of source versions for "mypatch.zip". patch create mypatch.zip 'target-server' 'source-server-1' ['source-server-2'... ] Creates a patch file named "mypatch.zip" that uses the version of the target server and applies to the source server versions. patch rollback Rolls back the last patch that was applied to a server and restores the version. 8.29. PUT(1) 8.29.1. NAME put - adds or updates cache entries. 8.29.2. DESCRIPTION Creates entries for new keys. Replaces values for existing keys. 8.29.3. SYNOPSIS put ['OPTIONS'] KEY [ VALUE ] 8.29.4. OPTIONS -c, --cache ='NAME' Specifies the name of the cache. Defaults to the currently selected cache. -e, --encoding ='ENCODING' Sets the media type for the value. -f, --file ='FILE' Specifies a file that contains the value for the entry. -l, --ttl ='TTL' Sets the number of seconds before the entry is automatically deleted (time-to-live). Defaults to the value for lifespan in the cache configuration if 0 or not specified. If you set a negative value, the entry is never deleted. -i, --max-idle ='MAXIDLE' Sets the number of seconds that the entry can be idle. If a read or write operation does not occur for an entry after the maximum idle time elapses, the entry is automatically deleted. Defaults to the value for maxIdle in the cache configuration if 0 or not specified. If you set a negative value, the entry is never deleted. -a, --if-absent =[true|false] Puts an entry only if it does not exist. 8.29.5. EXAMPLES put -c mycache hello world Adds the hello key with a value of world to the mycache cache. put -c mycache -f myfile -i 500 hola Adds the hola key with the value from the contents of myfile . Also sets a maximum idle of 500 seconds. 8.29.6. SEE ALSO get(1), remove(1) 8.30. QUERY(1) 8.30.1. NAME query - performs Ickle queries to match entries in remote caches. 8.30.2. SYNOPSIS query ['OPTIONS'] QUERY_STRING 8.30.3. OPTIONS -c, --cache ='NAME' Specifies the cache to query. Defaults to the currently selected cache. --max-results ='MAX_RESULTS' Sets the maximum number of results to return. The default is 10 . -o, --offset ='OFFSET' Specifies the index of the first result to return. The default is 0 . 8.30.4. EXAMPLES query "from org.infinispan.example.Person p where p.gender = 'MALE'" Queries values in a remote cache to find entries from a Protobuf Person entity where the gender datatype is MALE . 8.30.5. SEE ALSO index(1) schema(1) 8.31. QUIT(1) 8.31.1. NAME quit - exits the command line interface. 8.31.2. SYNOPSIS quit exit and bye are command aliases. 8.31.3. EXAMPLE quit Ends the CLI session. 
exit Ends the CLI session. bye Ends the CLI session. 8.31.4. SEE ALSO disconnect(1), shutdown(1) 8.32. REBALANCE(1) 8.32.1. NAME rebalance - manages automatic rebalancing for caches 8.32.2. SYNOPSIS rebalance enable ['PATH'] rebalance disable ['PATH'] 8.32.3. EXAMPLES rebalance enable Enables automatic rebalancing in the current context. Running this command in the root context enables rebalancing for all caches. rebalance enable caches/mycache Enables automatic rebalancing for the cache named mycache . rebalance disable Disables automatic rebalancing in the current context. Running this command in the root context disables rebalancing for all caches. rebalance disable caches/mycache Disables automatic rebalancing for the cache named mycache . 8.33. REMOVE(1) 8.33.1. NAME remove - deletes entries from a cache. 8.33.2. SYNOPSIS remove KEY ['OPTIONS'] 8.33.3. OPTIONS --cache ='NAME' Specifies the cache from which to remove entries. Defaults to the currently selected cache. 8.33.4. EXAMPLE remove --cache=mycache hola Deletes the hola entry from the mycache cache. 8.33.5. SEE ALSO cache(1), drop(1), clearcache(1) 8.34. RESET(1) 8.34.1. NAME reset - restores the initial values of counters. 8.34.2. SYNOPSIS reset ['COUNTER_NAME'] 8.34.3. EXAMPLE reset cnt_a Resets the cnt_a counter. 8.34.4. SEE ALSO add(1), cas(1), drop(1) 8.35. SCHEMA(1) 8.35.1. NAME schema - manipulates Protobuf schemas. 8.35.2. SYNOPSIS schema ls schema upload --file=/path/to/schema.proto schema.proto schema remove schema.proto schema get schema.proto 8.35.3. DESCRIPTION Manage schemas with the ls , upload , get , remove subcommands. 8.35.4. COMMAND SYNOPSIS schema ls Lists the schemas installed in the server. schema upload --file='/path/to/schema.proto' 'schema.proto' Uploads a ProtoBuf schema file to the server. schema get 'schema.proto' Shows the content of the specified schema. schema remove 'schema.proto' Removes the specified schema from the server. 8.35.5. UPLOAD OPTIONS -f, --file ='FILE' Uploads a file as a protobuf schema with the given name. 8.35.6. EXAMPLE schema upload --file=person.proto person.proto Registers a person.proto Protobuf schema. 8.35.7. SEE ALSO query(1) 8.36. SERVER(1) 8.36.1. NAME server - server configuration and state management. 8.36.2. DESCRIPTION The server command describes and manages server endpoint connectors and datasources and retrieves aggregated diagnostic reports about both the server and host. Reports provide details about CPU, memory, open files, network sockets and routing, threads, in addition to configuration and log files. 8.36.3. SYNOPSIS server report server heap-dump [--live] server connector ls server connector describe 'connector-name' server connector start 'connector-name' server connector stop 'connector-name' server connector ipfilter ls 'connector-name' server connector ipfilter set 'connector-name' --rules='[ACCEPT|REJECT]/cidr',... server connector ipfilter clear 'connector-name' server datasource ls server datasource test 'datasource-name' 8.36.4. SERVER CONNECTOR IPFILTER OPTIONS --rules ='[ACCEPT|REJECT]/cidr',... One or more IP filtering rules. 8.36.5. EXAMPLES server report Obtains a server report, including information about network, threads, memory, etc. server heap-dump Generates a JVM heap dump in the server data directory, returning the name of the generated file. server connector ls Lists all available connectors on the server. 
server connector describe endpoint-default Shows information about the specified connector, including host, port, local and global connections, IP filtering rules. server connector stop my-hotrod-connector Stops a connector dropping all established connections across the cluster. This command will be refused if attempting to stop the connector which is handling the request. server connector start my-hotrod-connector Starts a connector so that it can accept connections across the cluster. server connector ipfilter ls my-hotrod-connector Lists all IP filtering rules active on a connector across the cluster. server connector ipfilter set my-hotrod-connector --rules=ACCEPT/192.168.0.0/16,REJECT/10.0.0.0/8 Sets IP filtering rules on a connector across the cluster. Replaces all existing rules. This command will be refused if one of the rejection rules matches the address of the connection on which it is invoked. server connector ipfilter clear my-hotrod-connector Removes all IP filtering rules on a connector across the cluster. server datasource ls Lists all available datasources on the server. server datasource test my-datasource Performs a test connection on the datasource. 8.37. SHUTDOWN(1) 8.37.1. NAME shutdown - stops server instances and clusters. 8.37.2. SYNOPSIS shutdown server ['SERVERS'] shutdown cluster shutdown container 8.37.3. EXAMPLES shutdown server Stops the server to which the CLI is connected. shutdown server my_server01 Stops the server with hostname my_server01 . shutdown cluster Stops all nodes in the cluster after storing cluster state and persisting entries if there is a cache store. shutdown container Stops the data container without terminating the server process. Stores cluster state and persists entries if there is a cache store. Server instances remain running with active endpoints and clustering. REST calls to container resources will result in a 503 Service Unavailable response. The shutdown container command is intended for environments, such as Kubernetes, that automate resource lifecycle management. For self-managed environments you should use the shutdown server or shutdown cluster commands to stop servers. 8.37.4. SEE ALSO connect(1), disconnect(1), quit(1) 8.38. SITE(1) 8.38.1. NAME site - manages backup locations and performs cross-site replication operations. 8.38.2. SYNOPSIS site status ['OPTIONS'] site bring-online ['OPTIONS'] site take-offline ['OPTIONS'] site push-site-state ['OPTIONS'] site cancel-push-state ['OPTIONS'] site cancel-receive-state ['OPTIONS'] site push-site-status ['OPTIONS'] site state-transfer-mode get|set ['OPTIONS'] site name site view site is-relay-node site relay-nodes 8.38.3. OPTIONS -c, --cache ='CACHE_NAME' Specifies a cache. -a, --all-caches Applies the command to all caches. -s, --site ='SITE_NAME' Specifies a backup location. 8.38.4. STATE TRANSFER MODE OPTIONS --mode ='MODE' Sets the state transfer mode. Values are MANUAL (default) or AUTO . 8.38.5. EXAMPLES site status --cache=mycache Returns the status of all backup locations for mycache . site status --all-caches Returns the status of each backup location for all caches with backups. site status --cache=mycache --site=NYC Returns the status of NYC for mycache . site bring-online --cache=mycache --site=NYC Brings the site NYC online for mycache . site take-offline --cache=mycache --site=NYC Takes the site NYC offline for mycache . site push-site-state --cache=mycache --site=NYC Backs up caches to remote backup locations. 
site push-site-status --cache=mycache Displays the status of the operation to backup mycache . site cancel-push-state --cache=mycache --site=NYC Cancels the operation to backup mycache to NYC . site cancel-receive-state --cache=mycache --site=NYC Cancels the operation to receive state from NYC . site clear-push-state-status --cache=myCache Clears the status of the push state operation for mycache . site state-transfer-mode get --cache=myCache --site=NYC Retrieves the state transfer mode for mycache to NYC . site state-transfer-mode set --cache=myCache --site=NYC --mode=AUTO Configures automatic state transfer for mycache to NYC . site name Returns the name of the local site. If cross-site replication is not configured, the name of the local site is always "local". site view Returns a list of names for all sites or an empty list ("[]") if cross-site replication is not configured. site is-relay-node Returns true if the node handles RELAY messages between clusters. site relay-nodes Returns a list of relay nodes by their logical names. 8.39. STATS(1) 8.39.1. NAME stats - displays statistics about resources. 8.39.2. SYNOPSIS stats ['PATH'] 8.39.3. EXAMPLES stats //containers/default Displays statistics about the default container. stats //containers/default/caches/mycache Displays statistics about the mycache cache. 8.39.4. SEE ALSO cd(1), ls(1), describe(1) 8.40. TASK(1) 8.40.1. NAME task - executes and uploads server-side tasks and scripts 8.40.2. SYNOPSIS task upload --file='script' 'TASK_NAME' task exec ['TASK_NAME'] 8.40.3. EXAMPLES task upload --file=hello.js hello Uploads a script from a hello.js file and names it hello . task exec @@cache@names Runs a task that returns available cache names. task exec hello -Pgreetee=world Runs a script named hello and specifies the greetee parameter with a value of world . 8.40.4. OPTIONS -P, --parameters ='PARAMETERS' Passes parameter values to tasks and scripts. -f, --file ='FILE' Uploads script files with the given names. 8.40.5. SEE ALSO ls(1) 8.41. UNALIAS(1) 8.41.1. NAME unalias - deletes aliases. 8.41.2. SYNOPSIS unalias 'ALIAS-NAME' 8.41.3. EXAMPLES unalias q Deletes the q alias. 8.41.4. SEE ALSO config(1), alias(1) 8.42. USER(1) 8.42.1. NAME user - manages Data Grid users in property security realms. 8.42.2. SYNOPSIS user ls user create 'username' user describe 'username' user remove 'username' user password 'username' user groups 'username' user encrypt-all user roles ls 'principal' user roles grant --roles='role1'[,'role2'... ] 'principal' user roles deny --roles='role1'[,'role2'... ] 'principal' user roles create --permissions='perm1'[,'perm2'... ] 'role' user roles remove 'role' 8.42.3. DESCRIPTION Manage users in property realms with the ls , create , describe , remove , password , groups and encrypt-all subcommands. List and modify principal to role mappings with the roles subcommand when using the cluster role mapper for authorization. 8.42.4. COMMAND SYNOPSIS user ls Lists the users or groups which are present in the property file. user create 'username' Creates a user after prompting for a password. user describe 'username' Describes a user, including its username, realm and any groups it belongs to. user remove 'username' Removes the specified user from the property file. user password 'username' Changes the password for a user. user groups 'username' Sets the groups to which a user belongs. user encrypt-all Encrypt all passwords in a plain-text user property file. 
user roles ls 'principal' Lists all roles of the specified principal (user or group). user roles grant --roles='role1'[,'role2'... ] 'principal' Grants one or more roles to a principal. user roles deny --roles='role1'[,'role2'... ] 'principal' Denies one or more roles to a principal. user roles create --permissions='perm1'[,'perm2'... ] 'role' Creates a new role with the specified permissions. user roles remove 'role' Deletes an existing role. 8.42.5. COMMON OPTIONS These options apply to all commands: -h, --help Displays a help page for the command or sub-command. -s, --server-root ='path-to-server-root' The path to the server root. Defaults to server . -f, --users-file ='users.properties' The name of the property file which contains the user passwords. Defaults to users.properties . -w, --groups-file ='groups.properties' The name of the property file which contains the user to groups mapping. Defaults to groups.properties . 8.42.6. USER CREATE/MODIFY OPTIONS -a, --algorithms Specifies the algorithms used to hash the password. -g, --groups ='group1,group2,... ' Specifies the groups to which the user belongs. -p, --password ='password' Specifies the user's password. -r, --realm ='realm' Specifies the realm name. --plain-text Whether passwords should be stored in plain-text (not recommended). 8.42.7. USER LS OPTIONS --groups Shows a list of groups instead of the users. 8.42.8. USER ENCRYPT-ALL OPTIONS -a, --algorithms Specifies the algorithms used to hash the password. 8.42.9. USER ROLES OPTIONS -p, --permissions Specifies one or more of the following permissions: LIFECYCLE , READ , WRITE , EXEC , LISTEN , BULK_READ , BULK_WRITE , ADMIN , CREATE , MONITOR , ALL , ALL_READ , ALL_WRITE 8.43. VERSION(1) 8.43.1. NAME version - displays the server version and CLI version. 8.43.2. SYNOPSIS version 8.43.3. EXAMPLE version Returns the version for the server and the CLI. 8.43.4. SEE ALSO help(1) | [
"help get",
"*-c, --cache*='name':: The name of the cache to connect to the source. *-f, --file*='FILE':: Specifies a configuration file in JSON format, containing a single 'remote-store' element. CLUSTER SYNCHRONIZE OPTIONS --------------------------- *-c, --cache*='name':: The name of the cache to synchronize. *-b, --read-batch*='num':: The amount of entries to process in a batch. Defaults to 10000. *-t, --threads*='num':: The number of threads to use. Defaults to the number of cores on the server. CLUSTER DISCONNECT OPTIONS"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_the_data_grid_command_line_interface/command_reference |
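As a brief illustration that ties together several of the commands documented in this reference, the following session connects to a server, creates a cache from a template, stores and retrieves an entry, checks statistics, and exits. It is a sketch only: the address and credentials mirror the examples shown above, and the cache name is an assumption.

connect 127.0.0.1:11222 -u test -p changeme
create cache --template=org.infinispan.DIST_SYNC mycache
cache mycache
put hello world
get hello
stats //containers/default/caches/mycache
quit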
Chapter 49. Deprecated Functionality in Red Hat Enterprise Linux 7 | Chapter 49. Deprecated Functionality in Red Hat Enterprise Linux 7 nautilus-open-terminal replaced with gnome-terminal-nautilus Since Red Hat Enterprise Linux 7.3, the nautilus-open-terminal package has been deprecated and replaced with the gnome-terminal-nautilus package. This package provides a Nautilus extension that adds the Open in Terminal option to the right-click context menu in Nautilus. nautilus-open-terminal is replaced by gnome-terminal-nautilus during the system upgrade. sslwrap() removed from Python The sslwrap() function has been removed from Python 2.7 . After the 466 Python Enhancement Proposal was implemented, using this function resulted in a segmentation fault. The removal is consistent with upstream. Red Hat recommends using the ssl.SSLContext class and the ssl.SSLContext.wrap_socket() function instead. Most applications can simply use the ssl.create_default_context() function, which creates a context with secure default settings. The default context uses the system's default trust store, too. Symbols from libraries linked as dependencies no longer resolved by ld Previously, the ld linker resolved any symbols present in any linked library, even if some libraries were linked only implicitly as dependencies of other libraries. This allowed developers to use symbols from the implicitly linked libraries in application code and omit explicitly specifying these libraries for linking. For security reasons, ld has been changed to not resolve references to symbols in libraries linked implicitly as dependencies. As a result, linking with ld fails when application code attempts to use symbols from libraries not declared for linking and linked only implicitly as dependencies. To use symbols from libraries linked as dependencies, developers must explicitly link against these libraries as well. To restore the behavior of ld , use the -copy-dt-needed-entries command-line option. (BZ# 1292230 ) Windows guest virtual machine support limited As of Red Hat Enterprise Linux 7, Windows guest virtual machines are supported only under specific subscription programs, such as Advanced Mission Critical (AMC). libnetlink is deprecated The libnetlink library contained in the iproute-devel package has been deprecated. The user should use the libnl and libmnl libraries instead. S3 and S4 power management states for KVM are deprecated Native KVM support for the S3 (suspend to RAM) and S4 (suspend to disk) power management states has been discontinued. This feature was previously available as a Technology Preview. The Certificate Server plug-in udnPwdDirAuth is discontinued The udnPwdDirAuth authentication plug-in for the Red Hat Certificate Server has been removed in Red Hat Enterprise Linux 7.3. Profiles using the plug-in are no longer supported. Certificates created with a profile using the udnPwdDirAuth plug-in are still valid if they have been approved. Red Hat Access plug-in for IdM is discontinued The Red Hat Access plug-in for Identity Management (IdM) has been removed in Red Hat Enterprise Linux 7.3. During the update, the redhat-access-plugin-ipa package is automatically uninstalled. Features previously provided by the plug-in, such as Knowledgebase access and support case engagement, are still available through the Red Hat Customer Portal. Red Hat recommends to explore alternatives, such as the redhat-support-tool tool. 
The Ipsilon identity provider service for federated single sign-on The ipsilon packages were introduced as Technology Preview in Red Hat Enterprise Linux 7.2. Ipsilon links authentication providers and applications or utilities to allow for single sign-on (SSO). Red Hat does not plan to upgrade Ipsilon from Technology Preview to a fully supported feature. The ipsilon packages will be removed from Red Hat Enterprise Linux in a future minor release. Red Hat has released Red Hat Single Sign-On as a web SSO solution based on the Keycloak community project. Red Hat Single Sign-On provides greater capabilities than Ipsilon and is designated as the standard web SSO solution across the Red Hat product portfolio. For details, see Chapter 1, Overview . Deprecated Device Drivers 3w-9xxx 3w-sas mptbase mptctl mptsas mptscsih mptspi qla3xxx The following controllers from the megaraid_sas driver have been deprecated: Dell PERC5, PCI ID 0x15 SAS1078R, PCI ID 0x60 SAS1078DE, PCI ID 0x7C SAS1064R, PCI ID 0x411 VERDE_ZCR, PCI ID 0x413 SAS1078GEN2, PCI ID 0x78 The following Ethernet adapter controlled by the be2net driver has been deprecated: TIGERSHARK NIC, PCI ID 0x0700 The following controllers from the be2iscsi driver have been deprecated: Emulex OneConnect 10Gb iSCSI Initiator (generic), PCI ID 0x212 OCe10101, OCm10101, OCe10102, OCm10102 BE2 adapter family, PCI ID 0x702 OCe10100 BE2 adapter family, PCI ID 0x703 The following Emulex boards from the lpfc driver have been deprecated: BladeEngine 2 (BE2) Devices TIGERSHARK FCOE, PCI ID 0x0704 Fibre Channel (FC) Devices FIREFLY, PCI ID 0x1ae5 PROTEUS_VF, PCI ID 0xe100 BALIUS, PCI ID 0xe131 PROTEUS_PF, PCI ID 0xe180 RFLY, PCI ID 0xf095 PFLY, PCI ID 0xf098 LP101, PCI ID 0xf0a1 TFLY, PCI ID 0xf0a5 BSMB, PCI ID 0xf0d1 BMID, PCI ID 0xf0d5 ZSMB, PCI ID 0xf0e1 ZMID, PCI ID 0xf0e5 NEPTUNE, PCI ID 0xf0f5 NEPTUNE_SCSP, PCI ID 0xf0f6 NEPTUNE_DCSP, PCI ID 0xf0f7 FALCON, PCI ID 0xf180 SUPERFLY, PCI ID 0xf700 DRAGONFLY, PCI ID 0xf800 CENTAUR, PCI ID 0xf900 PEGASUS, PCI ID 0xf980 THOR, PCI ID 0xfa00 VIPER, PCI ID 0xfb00 LP10000S, PCI ID 0xfc00 LP11000S, PCI ID 0xfc10 LPE11000S, PCI ID 0xfc20 PROTEUS_S, PCI ID 0xfc50 HELIOS, PCI ID 0xfd00 HELIOS_SCSP, PCI ID 0xfd11 HELIOS_DCSP, PCI ID 0xfd12 ZEPHYR, PCI ID 0xfe00 HORNET, PCI ID 0xfe05 ZEPHYR_SCSP, PCI ID 0xfe11 ZEPHYR_DCSP, PCI ID 0xfe12 To check the PCI IDs of the hardware on your system, run the lspci -nn command. Note that other controllers from the mentioned drivers that are not listed here remain unchanged. Containers using the libvirt-lxc tooling have been deprecated The following libvirt-lxc packages are deprecated since Red Hat Enterprise Linux 7.1: libvirt-daemon-driver-lxc libvirt-daemon-lxc libvirt-login-shell Future development on the Linux containers framework is now based on the docker command-line interface. libvirt-lxc tooling may be removed in a future release of Red Hat Enterprise Linux (including Red Hat Enterprise Linux 7) and should not be relied upon for developing custom container management applications. For more information, see the Red Hat KnowledgeBase article . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/chap-red_hat_enterprise_linux-7.3_release_notes-deprecated_functionality_in_rhel7 |
Chapter 1. Insights for RHEL malware detection service overview | Chapter 1. Insights for RHEL malware detection service overview The Red Hat Insights for Red Hat Enterprise Linux malware detection service is a monitoring and assessment tool that scans RHEL systems for the presence of malware. The malware detection service incorporates YARA pattern-matching software and malware detection signatures. Signatures are provided in partnership with the IBM X-Force threat intelligence team working closely with the Red Hat threat intelligence team. In the malware detection service UI, User Access-authorized administrators and viewers can See the list of signatures against which their RHEL systems are scanned. See aggregate results for all RHEL systems with malware detection enabled in the Insights client. See results for individual systems. Know when a system shows evidence of the presence of malware. These features give security threat assessors and IT incident-response teams valuable information to prepare a response. The malware detection service does not recommend resolutions to resolve or remediate malware incidents. The strategy to take in addressing a malware threat depends on a lot of criteria and considerations specific to each system and organization. Your organization's security incident response team is best qualified to design and implement an effective mitigation and remediation strategy for each circumstance. 1.1. YARA malware signatures YARA signature detection is the cornerstone of the Insights for Red Hat Enterprise Linux malware detection service. YARA signatures are descriptions of malware types expressed as patterns. Each description consists of a set of strings and a boolean expression that define a rule. When one or more of the conditions in a signature exist on a scanned RHEL system, YARA records a hit on that system. 1.2. IBM X-Force Threat Intelligence signatures The Insights for Red Hat Enterprise Linux malware detection service includes predefined signatures developed by the IBM X-Force Threat Intelligence team to expose malware running on RHEL systems. Signatures compiled by the X-Force threat intelligence team are identifiable in the malware detection service by the XFTI - prefix, for example, XFTI_FritzFrog . | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_reporting_malware_signatures_on_rhel_systems_with_fedramp/malware-detection-overview |
Chapter 3. Ceph Object Gateway and the Swift API | Chapter 3. Ceph Object Gateway and the Swift API As a developer, you can use a RESTful application programing interface (API) that is compatible with the Swift API data access model. You can manage the buckets and objects stored in Red Hat Ceph Storage cluster through the Ceph Object Gateway. The following table describes the support status for current Swift functional features: Table 3.1. Features Feature Status Remarks Authentication Supported Get Account Metadata Supported No custom metadata Swift ACLs Supported Supports a subset of Swift ACLs List Containers Supported List Container's Objects Supported Create Container Supported Delete Container Supported Get Container Metadata Supported Add/Update Container Metadata Supported Delete Container Metadata Supported Get Object Supported Create/Update an Object Supported Create Large Object Supported Delete Object Supported Copy Object Supported Get Object Metadata Supported Add/Update Object Metadata Supported Temp URL Operations Supported CORS Not Supported Expiring Objects Supported Object Versioning Not Supported Static Website Not Supported 3.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.2. Swift API limitations Important The following limitations should be used with caution. There are implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team. Maximum object size when using Swift API: 5GB Maximum metadata size when using Swift API: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes. 3.3. Create a Swift user To test the Swift interface, create a Swift subuser. Creating a Swift user is a two step process. The first step is to create the user. The second step is to create the secret key. Note In a multi-site deployment, always create a user on a host in the master zone of the master zone group. Prerequisites Installation of the Ceph Object Gateway. Root-level access to the Ceph Object Gateway node. Procedure Create the Swift user: Syntax Replace NAME with the Swift user name, for example: Example Create the secret key: Syntax Replace NAME with the Swift user name, for example: Example 3.4. Swift authenticating a user To authenticate a user, make a request containing an X-Auth-User and a X-Auth-Key in the header. Syntax Example Response Note You can retrieve data about Ceph's Swift-compatible service by executing GET requests using the X-Storage-Url value during authentication. Additional Resources See the Red Hat Ceph Storage Developer Guide for Swift request headers. See the Red Hat Ceph Storage Developer Guide for Swift response headers. 3.5. Swift container operations As a developer, you can perform container operations with the Swift application programing interface (API) through the Ceph Object Gateway. You can list, create, update, and delete containers. You can also add or update the container's metadata. 3.5.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.5.2. Swift container operations A container is a mechanism for storing data objects. An account can have many containers, but container names must be unique. This API enables a client to create a container, set access controls and metadata, retrieve a container's contents, and delete a container. 
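All of these operations follow the same basic pattern: authenticate as described in section 3.4, then send requests to the returned storage URL with the returned token. The following minimal sketch, written in Python with the third-party requests library (the library choice is an assumption, not a requirement), authenticates with the example host and subuser key used earlier in this chapter and lists the account's containers.

import requests

# Example values from this chapter; replace them with your gateway and subuser key.
AUTH_URL = "http://swift.example.com/auth"
USER = "testuser:swift"
KEY = "13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA"

# Authenticate and capture the storage URL and token from the response headers.
# The gateway returns the token as X-Storage-Token and typically also as X-Auth-Token.
resp = requests.get(AUTH_URL, headers={"X-Auth-User": USER, "X-Auth-Key": KEY})
resp.raise_for_status()
storage_url = resp.headers["X-Storage-Url"]
token = resp.headers.get("X-Auth-Token") or resp.headers["X-Storage-Token"]

# List the account's containers; the token is sent as X-Auth-Token on every request.
containers = requests.get(storage_url, headers={"X-Auth-Token": token},
                          params={"format": "json"})
containers.raise_for_status()
for container in containers.json():
    print(container["name"], container["bytes"])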
Since this API makes requests related to information in a particular user's account, all requests in this API must be authenticated unless a container's access control is deliberately made publicly accessible, that is, allows anonymous requests. Note The Amazon S3 API uses the term 'bucket' to describe a data container. When you hear someone refer to a 'bucket' within the Swift API, the term 'bucket' might be construed as the equivalent of the term 'container.' One facet of object storage is that it does not support hierarchical paths or directories. Instead, it supports one level consisting of one or more containers, where each container might have objects. The RADOS Gateway's Swift-compatible API supports the notion of 'pseudo-hierarchical containers', which is a means of using object naming to emulate a container, or directory hierarchy without actually implementing one in the storage system. You can name objects with pseudo-hierarchical names, for example, photos/buildings/empire-state.jpg, but container names cannot contain a forward slash ( / ) character. Important When uploading large objects to versioned Swift containers, use the --leave-segments option with the python-swiftclient utility. Not using --leave-segments overwrites the manifest file. Consequently, an existing object is overwritten, which leads to data loss. 3.5.3. Swift update a container's Access Control List (ACL) When a user creates a container, the user has read and write access to the container by default. To allow other users to read a container's contents or write to a container, you must specifically enable the user. You can also specify * in the X-Container-Read or X-Container-Write settings, which effectively enables all users to either read from or write to the container. Setting * makes the container public. That is it enables anonymous users to either read from or write to the container. Syntax Table 3.2. Request Headers Name Description Type Required X-Container-Read The user IDs with read permissions for the container. Comma-separated string values of user IDs. No X-Container-Write The user IDs with write permissions for the container. Comma-separated string values of user IDs. No 3.5.4. Swift list containers A GET request that specifies the API version and the account will return a list of containers for a particular user account. Since the request returns a particular user's containers, the request requires an authentication token. The request cannot be made anonymously. Syntax Table 3.3. Request Parameters Name Description Type Required Valid Values limit Limits the number of results to the specified value. Integer No N/A format Defines the format of the result. String No json or xml marker Returns a list of results greater than the marker value. String No N/A The response contains a list of containers, or returns with an HTTP 204 response code Table 3.4. Response Entities Name Description Type account A list for account information. Container container The list of containers. Container name The name of a container. String bytes The size of the container. Integer 3.5.5. Swift list a container's objects To list the objects within a container, make a GET request with the with the API version, account, and the name of the container. You can specify query parameters to filter the full list, or leave out the parameters to return a list of the first 10,000 object names stored in the container. Syntax Table 3.5. Parameters Name Description Type Valid Values Required format Defines the format of the result. 
String json or xml No prefix Limits the result set to objects beginning with the specified prefix. String N/A No marker Returns a list of results greater than the marker value. String N/A No limit Limits the number of results to the specified value. Integer 0 - 10,000 No delimiter The delimiter between the prefix and the rest of the object name. String N/A No path The pseudo-hierarchical path of the objects. String N/A No Table 3.6. Response Entities Name Description Type container The container. Container object An object within the container. Container name The name of an object within the container. String hash A hash code of the object's contents. String last_modified The last time the object's contents were modified. Date content_type The type of content within the object. String 3.5.6. Swift create a container To create a new container, make a PUT request with the API version, account, and the name of the new container. The container name must be unique, must not contain a forward-slash (/) character, and should be less than 256 bytes. You can include access control headers and metadata headers in the request. You can also include a storage policy identifying a key for a set of placement pools. For example, execute radosgw-admin zone get to see a list of available keys under placement_pools . A storage policy enables you to specify a special set of pools for the container, for example, SSD-based storage. The operation is idempotent. If you make a request to create a container that already exists, it will return with a HTTP 202 return code, but will not create another container. Syntax Table 3.7. Headers Name Description Type Required X-Container-Read The user IDs with read permissions for the container. Comma-separated string values of user IDs. No X-Container-Write The user IDs with write permissions for the container. Comma-separated string values of user IDs. No X-Container-Meta- KEY A user-defined meta data key that takes an arbitrary string value. String No X-Storage-Policy The key that identifies the storage policy under placement_pools for the Ceph Object Gateway. Execute radosgw-admin zone get for available keys. String No If a container with the same name already exists, and the user is the container owner then the operation will succeed. Otherwise the operation will fail. Table 3.8. HTTP Response Name Description Status Code 409 The container already exists under a different user's ownership. BucketAlreadyExists 3.5.7. Swift delete a container To delete a container, make a DELETE request with the API version, account, and the name of the container. The container must be empty. If you'd like to check if the container is empty, execute a HEAD request against the container. Once you've successfully removed the container, you'll be able to reuse the container name. Syntax Table 3.9. HTTP Response Name Description Status Code 204 The container was removed. NoContent 3.5.8. Swift add or update the container metadata To add metadata to a container, make a POST request with the API version, account, and container name. You must have write permissions on the container to add or update metadata. Syntax Table 3.10. Request Headers Name Description Type Required X-Container-Meta- KEY A user-defined meta data key that takes an arbitrary string value. String No 3.6. Swift object operations As a developer, you can perform object operations with the Swift application programing interface (API) through the Ceph Object Gateway. You can list, create, update, and delete objects. 
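For a quick end-to-end illustration of these operations, the following sketch creates a container, writes an object, reads it back, and deletes it using the python-swiftclient library mentioned earlier in this chapter. The endpoint, credentials, container name, and object name are assumptions chosen for the example, not required values.

from swiftclient import client as swift

# Assumed endpoint and the example subuser credentials from this chapter.
conn = swift.Connection(
    authurl="http://swift.example.com/auth",
    user="testuser:swift",
    key="13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA",
)

conn.put_container("my-container")                      # create the container (idempotent)
conn.put_object("my-container", "hello.txt",
                contents=b"hello world",
                content_type="text/plain")              # create or update an object
headers, body = conn.get_object("my-container", "hello.txt")
print(headers.get("content-type"), body)                # retrieve the object and its headers
conn.delete_object("my-container", "hello.txt")         # delete the object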
You can also add or update the object's metadata. 3.6.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.6.2. Swift object operations An object is a container for storing data and metadata. A container might have many objects, but the object names must be unique. This API enables a client to create an object, set access controls and metadata, retrieve an object's data and metadata, and delete an object. Since this API makes requests related to information in a particular user's account, all requests in this API must be authenticated. Unless the container or object's access control is deliberately made publicly accessible, that is, allows anonymous requests. 3.6.3. Swift get an object To retrieve an object, make a GET request with the API version, account, container and object name. You must have read permissions on the container to retrieve an object within it. Syntax Table 3.11. Request Headers Name Description Type Required range To retrieve a subset of an object's contents, you can specify a byte range. Date No If-Modified-Since Only copies if modified since the date/time of the source object's last_modified attribute. Date No If-Unmodified-Since Only copies if not modified since the date/time of the source object's last_modified attribute. Date No Copy-If-Match Copies only if the ETag in the request matches the source object's ETag. ETag. No Copy-If-None-Match Copies only if the ETag in the request does not match the source object's ETag. ETag. No Table 3.12. Response Headers Name Description Content-Range The range of the subset of object contents. Returned only if the range header field was specified in the request. 3.6.4. Swift create or update an object To create a new object, make a PUT request with the API version, account, container name and the name of the new object. You must have write permission on the container to create or update an object. The object name must be unique within the container. The PUT request is not idempotent, so if you do not use a unique name, the request will update the object. However, you can use pseudo-hierarchical syntax in the object name to distinguish it from another object of the same name if it is under a different pseudo-hierarchical directory. You can include access control headers and metadata headers in the request. Syntax Table 3.13. Request Headers Name Description Type Required Valid Values ETag An MD5 hash of the object's contents. Recommended. String No N/A Content-Type The type of content the object contains. String No N/A Transfer-Encoding Indicates whether the object is part of a larger aggregate object. String No chunked 3.6.5. Swift delete an object To delete an object, make a DELETE request with the API version, account, container and object name. You must have write permissions on the container to delete an object within it. Once you've successfully deleted the object, you will be able to reuse the object name. Syntax 3.6.6. Swift copy an object Copying an object allows you to make a server-side copy of an object, so that you do not have to download it and upload it under another container. To copy the contents of one object to another object, you can make either a PUT request or a COPY request with the API version, account, and the container name. For a PUT request, use the destination container and object name in the request, and the source container and object in the request header. 
For a COPY request, use the source container and object in the request, and the destination container and object in the request header. You must have write permission on the container to copy an object. The destination object name must be unique within the container. The request is not idempotent, so if you do not use a unique name, the request will update the destination object. You can use pseudo-hierarchical syntax in the object name to distinguish the destination object from the source object of the same name if it is under a different pseudo-hierarchical directory. You can include access control headers and metadata headers in the request. Syntax or alternatively: Syntax Table 3.14. Request Headers Name Description Type Required X-Copy-From Used with a PUT request to define the source container/object path. String Yes, if using PUT Destination Used with a COPY request to define the destination container/object path. String Yes, if using COPY If-Modified-Since Only copies if modified since the date/time of the source object's last_modified attribute. Date No If-Unmodified-Since Only copies if not modified since the date/time of the source object's last_modified attribute. Date No Copy-If-Match Copies only if the ETag in the request matches the source object's ETag. ETag. No Copy-If-None-Match Copies only if the ETag in the request does not match the source object's ETag. ETag. No 3.6.7. Swift get object metadata To retrieve an object's metadata, make a HEAD request with the API version, account, container and object name. You must have read permissions on the container to retrieve metadata from an object within the container. This request returns the same header information as the request for the object itself, but it does not return the object's data. Syntax 3.6.8. Swift add or update object metadata To add metadata to an object, make a POST request with the API version, account, container and object name. You must have write permissions on the parent container to add or update metadata. Syntax Table 3.15. Request Headers Name Description Type Required X-Object-Meta-KEY A user-defined metadata key that takes an arbitrary string value. String No 3.7. Swift temporary URL operations The Swift endpoint of radosgw supports temp URL functionality, which allows temporary access, for example GET requests to objects, without the need to share credentials. To use this functionality, first set the value of X-Account-Meta-Temp-URL-Key and, optionally, X-Account-Meta-Temp-URL-Key-2 . The temp URL functionality relies on an HMAC-SHA1 signature against these secret keys. 3.7.1. Swift get temporary URL objects A temporary URL uses a cryptographic HMAC-SHA1 signature, which includes the following elements: The value of the request method, for example "GET" The expiry time, in the format of seconds since the epoch, that is, Unix time The request path starting from "v1" onwards The above items are normalized with newlines appended between them, and an HMAC is generated using the SHA-1 hashing algorithm against one of the temp URL keys posted earlier. A sample Python script to demonstrate the above is given below: Example Example Output 3.7.2. Swift POST temporary URL keys A POST request to the Swift account with the required key sets the secret temp URL key for the account, against which temporary URL access can be provided. Up to two keys are supported, and signatures are checked against both keys, if present, so that keys can be rotated without invalidating the temporary URLs. A short illustrative sketch is shown below, followed by the formal syntax and request headers.
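The sketch below is illustrative only and is not the sample script referenced in section 3.7.1. The key value, object path, and the host and path prefix of the final URL are assumptions that depend on how the gateway is deployed.

import hmac
from hashlib import sha1
from time import time

# Assumed values for illustration; none of these are required names.
key = b"secret-temp-url-key"            # the value posted in X-Account-Meta-Temp-URL-Key
method = "GET"
expires = int(time()) + 600             # expiry as seconds since the epoch (valid 10 minutes)
path = "/v1/my-container/hello.txt"     # the request path starting from "v1" onwards

# Normalize the elements with newlines and sign them with HMAC-SHA1.
hmac_body = "\n".join([method, str(expires), path]).encode()
signature = hmac.new(key, hmac_body, sha1).hexdigest()

# Append the signature and expiry to the object URL; host and prefix are assumptions.
print(f"https://swift.example.com/swift{path}"
      f"?temp_url_sig={signature}&temp_url_expires={expires}")

The resulting URL can be shared with a client that has no credentials and remains valid until the expiry time passes.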
Syntax Table 3.16. Request Headers Name Description Type Required X-Account-Meta-Temp-URL-Key A user-defined key that takes an arbitrary string value. String Yes X-Account-Meta-Temp-URL-Key-2 A user-defined key that takes an arbitrary string value. String No 3.8. Swift multi-tenancy container operations When a client application accesses containers, it always operates with credentials of a particular user. In a Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every container operation has an implicit tenant in its context if no tenant is specified explicitly. Thus, multi-tenancy is completely backward compatible with previous releases, as long as the referred containers and the referring user belong to the same tenant. Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used. A colon character separates tenant and container, thus a sample URL would be: Example By contrast, in a create_container() method, simply separate the tenant and container in the container method itself: Example 3.9. Additional Resources See the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for details on multi-tenancy. See Appendix D for Swift request headers. See Appendix E for Swift response headers. | [
"radosgw-admin subuser create --uid= NAME --subuser= NAME :swift --access=full",
"radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin key create --subuser= NAME :swift --key-type=swift --gen-secret",
"radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"GET /auth HTTP/1.1 Host: swift.example.com X-Auth-User: johndoe X-Auth-Key: R7UUOLFDI2ZI9PRCQ53K",
"HTTP/1.1 204 No Content Date: Mon, 16 Jul 2012 11:05:33 GMT Server: swift X-Storage-Url: https://swift.example.com X-Storage-Token: UOlCCC8TahFKlWuv9DB09TWHF0nDjpPElha0kAa Content-Length: 0 Content-Type: text/plain; charset=UTF-8",
"POST / AP_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN X-Container-Read: * X-Container-Write: UID1 , UID2 , UID3",
"GET / API_VERSION / ACCOUNT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"GET / AP_VERSION / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"PUT / AP_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN X-Container-Read: COMMA_SEPARATED_UIDS X-Container-Write: COMMA_SEPARATED_UIDS X-Container-Meta- KEY : VALUE X-Storage-Policy: PLACEMENT_POOLS_KEY",
"DELETE / AP_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"POST / AP_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN X-Container-Meta-Color: red X-Container-Meta-Taste: salty",
"GET / AP_VERSION / ACCOUNT / TENANT : CONTAINER / OBJECT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"PUT / AP_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"DELETE / API_VERSION / ACCOUNT / TENANT : CONTAINER / OBJECT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"PUT / AP_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 X-Copy-From: TENANT : SOURCE_CONTAINER / SOURCE_OBJECT Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"COPY / AP_VERSION / ACCOUNT / TENANT : SOURCE_CONTAINER / SOURCE_OBJECT HTTP/1.1 Destination: TENANT : DEST_CONTAINER / DEST_OBJECT",
"HEAD / AP_VERSION / ACCOUNT / TENANT : CONTAINER / OBJECT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"POST / AP_VERSION / ACCOUNT / TENANT : CONTAINER / OBJECT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"import hmac from hashlib import sha1 from time import time method = 'GET' host = 'https://objectstore.example.com' duration_in_seconds = 300 # Duration for which the url is valid expires = int(time() + duration_in_seconds) path = '/v1/your-bucket/your-object' key = 'secret' hmac_body = '%s\\n%s\\n%s' % (method, expires, path) hmac_body = hmac.new(key, hmac_body, sha1).hexdigest() sig = hmac.new(key, hmac_body, sha1).hexdigest() rest_uri = \"{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}\".format( host=host, path=path, sig=sig, expires=expires) print rest_uri",
"https://objectstore.example.com/v1/your-bucket/your-object?temp_url_sig=ff4657876227fc6025f04fcf1e82818266d022c6&temp_url_expires=1423200992",
"POST / API_VERSION / ACCOUNT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"https://rgw.domain.com/tenant:container",
"create_container(\"tenant:container\")"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/developer_guide/ceph-object-gateway-and-the-swift-api |
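The object operations above can also be exercised programmatically. The following is a minimal sketch using the Python requests library; it is not one of this guide's own examples, and the account URL, auth token, container, and object names are placeholders. Substitute the X-Storage-Url and X-Auth-Token values returned by your own authentication request.

import requests

# Placeholder account endpoint and token (use your own X-Storage-Url and X-Auth-Token)
base = "https://swift.example.com/v1/AUTH_testuser"
headers = {"X-Auth-Token": "REPLACE_WITH_AUTH_TOKEN"}
container = "my-container"
obj = "hello.txt"

# Create or update an object (PUT), as in "Swift create or update an object"
requests.put(f"{base}/{container}/{obj}", headers=headers, data=b"hello world")

# Add or update object metadata (POST with an X-Object-Meta-* header)
requests.post(f"{base}/{container}/{obj}", headers={**headers, "X-Object-Meta-Color": "red"})

# Retrieve only the metadata (HEAD)
meta = requests.head(f"{base}/{container}/{obj}", headers=headers)
print(meta.headers.get("X-Object-Meta-Color"))

# Retrieve the object, restricted to a subset of its contents with the range header (GET)
resp = requests.get(f"{base}/{container}/{obj}", headers={**headers, "Range": "bytes=0-4"})
print(resp.status_code, resp.content)

# Delete the object (DELETE)
requests.delete(f"{base}/{container}/{obj}", headers=headers)

Because PUT doubles as the update path, re-running the sketch simply overwrites the object rather than failing.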
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/operational_measurements/proc_providing-feedback-on-red-hat-documentation
Chapter 1. Getting started with Security by using Basic authentication and Jakarta Persistence | Chapter 1. Getting started with Security by using Basic authentication and Jakarta Persistence Get started with Quarkus Security by securing your Quarkus application endpoints with the built-in Quarkus Basic authentication and the Jakarta Persistence identity provider, enabling role-based access control. The Jakarta Persistence IdentityProvider verifies and converts a Basic authentication user name and password pair to a SecurityIdentity instance, which is used to authorize access requests, making your Quarkus application secure. For more information about Jakarta Persistence, see the Quarkus Security with Jakarta Persistence guide. This tutorial prepares you to implement more advanced security mechanisms in Quarkus, for example, how to use the OpenID Connect (OIDC) authentication mechanism. 1.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.8.6 or later Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) 1.2. Building your application This tutorial gives detailed steps for creating an application with endpoints that illustrate various authorization policies: Endpoint Description /api/public Accessible without authentication, this endpoint allows anonymous access. /api/admin Secured with role-based access control (RBAC), this endpoint is accessible only to users with the admin role. Access is controlled declaratively by using the @RolesAllowed annotation. /api/users/me Also secured by RBAC, this endpoint is accessible only to users with the user role. It returns the caller's username as a string. Tip To examine the completed example, download the archive or clone the Git repository: git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.15 You can find the solution in the security-jpa-quickstart directory . 1.3. Create and verify the Maven project For Quarkus Security to be able to map your security source to Jakarta Persistence entities, ensure that the Maven project in this tutorial includes the quarkus-security-jpa extension. Note Hibernate ORM with Panache is used to store your user identities, but you can also use Hibernate ORM with the quarkus-security-jpa extension. You must also add your preferred database connector library. The instructions in this example tutorial use a PostgreSQL database for the identity store. 1.3.1. Create the Maven project You can create a new Maven project with the Security Jakarta Persistence extension or add the extension to an existing Maven project. You can use either Hibernate ORM or Hibernate Reactive. 1.3.1.1. Creating new Maven project To create a new Maven project with the Jakarta Persistence extension, complete one of the following steps: To create the Maven project with Hibernate ORM, use the following command: Using the Quarkus CLI: quarkus create app org.acme:security-jpa-quickstart \ --extension='security-jpa,jdbc-postgresql,rest,hibernate-orm-panache' \ --no-code cd security-jpa-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. 
Using Maven: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-jpa-quickstart \ -Dextensions='security-jpa,jdbc-postgresql,rest,hibernate-orm-panache' \ -DnoCode cd security-jpa-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. "-DprojectArtifactId=security-jpa-quickstart" 1.3.1.2. Adding Jakarta Persistence extension to existing project To add the Jakarta Persistence extension to an existing Maven project, complete one of the following steps: To add the Security Jakarta Persistence extension to an existing Maven project with Hibernate ORM, run the following command from your project base directory: Using the Quarkus CLI: quarkus extension add security-jpa Using Maven: ./mvnw quarkus:add-extension -Dextensions='security-jpa' Using Gradle: ./gradlew addExtension --extensions='security-jpa' 1.3.2. Verify the quarkus-security-jpa dependency After you have run either of the preceding commands to create the Maven project, verify that the quarkus-security-jpa dependency was added to your project build XML file. To verify the quarkus-security-jpa extension, check for the following configuration: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-security-jpa</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-security-jpa") 1.4. Write the application Secure the API endpoint to determine who can access the application by using one of the following approaches: Implement the /api/public endpoint to allow all users access to the application. Add a regular Jakarta REST resource to your Java source code, as shown in the following code snippet: package org.acme.security.jpa; import jakarta.annotation.security.PermitAll; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/api/public") public class PublicResource { @GET @PermitAll @Produces(MediaType.TEXT_PLAIN) public String publicResource() { return "public"; } } Implement an /api/admin endpoint that can only be accessed by users who have the admin role. The source code for the /api/admin endpoint is similar, but instead, you use a @RolesAllowed annotation to ensure that only users granted the admin role can access the endpoint. Add a Jakarta REST resource with the following @RolesAllowed annotation: package org.acme.security.jpa; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/api/admin") public class AdminResource { @GET @RolesAllowed("admin") @Produces(MediaType.TEXT_PLAIN) public String adminResource() { return "admin"; } } Implement an /api/users/me endpoint that can only be accessed by users who have the user role. Use SecurityContext to get access to the currently authenticated Principal user and to return their username, all of which is retrieved from the database. 
package org.acme.security.jpa; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.SecurityContext; @Path("/api/users") public class UserResource { @GET @RolesAllowed("user") @Path("/me") public String me(@Context SecurityContext securityContext) { return securityContext.getUserPrincipal().getName(); } } 1.5. Define the user entity You can now describe how you want security information to be stored in the model by adding annotations to the user entity, as outlined in the following code snippet: package org.acme.security.jpa; import jakarta.persistence.Entity; import jakarta.persistence.Table; import io.quarkus.hibernate.orm.panache.PanacheEntity; import io.quarkus.elytron.security.common.BcryptUtil; import io.quarkus.security.jpa.Password; import io.quarkus.security.jpa.Roles; import io.quarkus.security.jpa.UserDefinition; import io.quarkus.security.jpa.Username; @Entity @Table(name = "test_user") @UserDefinition 1 public class User extends PanacheEntity { @Username 2 public String username; @Password 3 public String password; @Roles 4 public String role; /** * Adds a new user to the database * @param username the username * @param password the unencrypted password (it is encrypted with bcrypt) * @param role the comma-separated roles */ public static void add(String username, String password, String role) { 5 User user = new User(); user.username = username; user.password = BcryptUtil.bcryptHash(password); user.role = role; user.persist(); } } The quarkus-security-jpa extension only initializes if a single entity is annotated with @UserDefinition . 1 The @UserDefinition annotation must be present on a single entity, either a regular Hibernate ORM entity or a Hibernate ORM with Panache entity. 2 Indicates the field used for the username. 3 Indicates the field used for the password. By default, it uses bcrypt-hashed passwords. You can configure it to use plain text or custom passwords. 4 Indicates the comma-separated list of roles added to the target principal representation attributes. 5 Allows us to add users while hashing passwords with the proper bcrypt hash. Note Don't forget to set up the Panache and PostgreSQL JDBC driver, please see Setting up and configuring Hibernate ORM with Panache for more information. 1.6. Configure the application Enable the built-in Quarkus Basic authentication mechanism by setting the quarkus.http.auth.basic property to true : quarkus.http.auth.basic=true Note When secure access is required, and no other authentication mechanisms are enabled, the built-in Basic authentication of Quarkus is the fallback authentication mechanism. Therefore, in this tutorial, you do not need to set the property quarkus.http.auth.basic to true . Configure at least one data source in the application.properties file so the quarkus-security-jpa extension can access your database. 
For example: quarkus.http.auth.basic=true quarkus.datasource.db-kind=postgresql quarkus.datasource.username=quarkus quarkus.datasource.password=quarkus quarkus.datasource.jdbc.url=jdbc:postgresql:security_jpa quarkus.hibernate-orm.database.generation=drop-and-create To initialize the database with users and roles, implement the Startup class, as outlined in the following code snippet: package org.acme.security.jpa; import jakarta.enterprise.event.Observes; import jakarta.inject.Singleton; import jakarta.transaction.Transactional; import io.quarkus.runtime.StartupEvent; @Singleton public class Startup { @Transactional public void loadUsers(@Observes StartupEvent evt) { // reset and load all test users User.deleteAll(); User.add("admin", "admin", "admin"); User.add("user", "user", "user"); } } The preceding example demonstrates how the application can be protected and identities provided by the specified database. Important In a production environment, do not store plain text passwords. As a result, the quarkus-security-jpa defaults to using bcrypt-hashed passwords. 1.7. Test your application by using Dev Services for PostgreSQL Complete the integration testing of your application in JVM and native modes by using Dev Services for PostgreSQL before you run your application in production mode. Start by adding the following dependencies to your test project: Using Maven: <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("io.rest-assured:rest-assured") To run your application in dev mode: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev The following properties configuration demonstrates how to enable PostgreSQL testing to run only in production ( prod ) mode. In this scenario, Dev Services for PostgreSQL launches and configures a PostgreSQL test container. %prod.quarkus.datasource.db-kind=postgresql %prod.quarkus.datasource.username=quarkus %prod.quarkus.datasource.password=quarkus %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://localhost/quarkus quarkus.hibernate-orm.database.generation=drop-and-create If you add the %prod. profile prefix, data source properties are not visible to Dev Services for PostgreSQL and are only observed by an application running in production mode. 
To write the integration test, use the following code sample: package org.acme.security.jpa; import static io.restassured.RestAssured.get; import static io.restassured.RestAssured.given; import static org.hamcrest.core.Is.is; import org.apache.http.HttpStatus; import org.junit.jupiter.api.Test; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class JpaSecurityRealmTest { @Test void shouldAccessPublicWhenAnonymous() { get("/api/public") .then() .statusCode(HttpStatus.SC_OK); } @Test void shouldNotAccessAdminWhenAnonymous() { get("/api/admin") .then() .statusCode(HttpStatus.SC_UNAUTHORIZED); } @Test void shouldAccessAdminWhenAdminAuthenticated() { given() .auth().preemptive().basic("admin", "admin") .when() .get("/api/admin") .then() .statusCode(HttpStatus.SC_OK); } @Test void shouldNotAccessUserWhenAdminAuthenticated() { given() .auth().preemptive().basic("admin", "admin") .when() .get("/api/users/me") .then() .statusCode(HttpStatus.SC_FORBIDDEN); } @Test void shouldAccessUserAndGetIdentityWhenUserAuthenticated() { given() .auth().preemptive().basic("user", "user") .when() .get("/api/users/me") .then() .statusCode(HttpStatus.SC_OK) .body(is("user")); } } As you can see in this code sample, you do not need to start the test container from the test code. Note When you start your application in dev mode, Dev Services for PostgreSQL launches a PostgreSQL dev mode container so that you can start developing your application. While developing your application, you can add and run tests individually by using the Continuous Testing feature. Dev Services for PostgreSQL supports testing while you develop by providing a separate PostgreSQL test container that does not conflict with the dev mode container. 1.8. Test your application using Curl or browser To test your application using Curl or the browser, you must first start a PostgreSQL server, then compile and run your application either in JVM or native mode. 1.8.1. Start the PostgreSQL server docker run --rm=true --name security-getting-started -e POSTGRES_USER=quarkus \ -e POSTGRES_PASSWORD=quarkus -e POSTGRES_DB=quarkus \ -p 5432:5432 postgres:14.1 1.8.2. Compile and run the application Compile and run your Quarkus application by using one of the following methods: JVM mode Compile the application: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Run the application: java -jar target/quarkus-app/quarkus-run.jar Native mode Compile the application: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.native.enabled=true Run the application: ./target/security-jpa-quickstart-1.0.0-SNAPSHOT-runner 1.8.3. Access and test the application security with Curl When your application is running, you can access its endpoints by using one of the following Curl commands. Connect to a protected endpoint anonymously: USD curl -i -X GET http://localhost:8080/api/public HTTP/1.1 200 OK Content-Length: 6 Content-Type: text/plain;charset=UTF-8 public Connect to a protected endpoint anonymously: USD curl -i -X GET http://localhost:8080/api/admin HTTP/1.1 401 Unauthorized Content-Length: 14 Content-Type: text/html;charset=UTF-8 WWW-Authenticate: Basic Not authorized Connect to a protected endpoint as an authorized user: USD curl -i -X GET -u admin:admin http://localhost:8080/api/admin HTTP/1.1 200 OK Content-Length: 5 Content-Type: text/plain;charset=UTF-8 admin You can also access the same endpoint URLs by using a browser. 1.8.4. 
Access and test the application security with the browser If you use a browser to connect to a protected resource anonymously, a Basic authentication form displays, prompting you to enter credentials. 1.8.5. Results When you provide the credentials of an authorized user, for example, admin:admin , the Jakarta Persistence security extension authenticates and loads the user's roles. The admin user is authorized to access the protected resources. If a resource is protected with @RolesAllowed("user") , the user admin is not authorized to access the resource because it is not assigned to the "user" role, as shown in the following example: USD curl -i -X GET -u admin:admin http://localhost:8080/api/users/me HTTP/1.1 403 Forbidden Content-Length: 34 Content-Type: text/html;charset=UTF-8 Forbidden Finally, the user named user is authorized, and the security context contains the principal details, for example, the username. USD curl -i -X GET -u user:user http://localhost:8080/api/users/me HTTP/1.1 200 OK Content-Length: 4 Content-Type: text/plain;charset=UTF-8 user 1.9. What's next You have successfully learned how to create and test a secure Quarkus application. This was achieved by integrating the built-in Basic authentication in Quarkus with the Jakarta Persistence identity provider. After completing this tutorial, you can explore more advanced security mechanisms in Quarkus. The following information shows you how to use OpenID Connect for secure single sign-on access to your Quarkus endpoints: OIDC Bearer token authentication OIDC code flow mechanism for protecting web applications 1.10. References Quarkus Security overview Quarkus Security architecture Other supported authentication mechanisms Identity providers OIDC Bearer token authentication OIDC code flow mechanism for protecting web applications Simplified Hibernate ORM with Panache Using Hibernate ORM and Jakarta Persistence | [
"git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.15",
"quarkus create app org.acme:security-jpa-quickstart --extension='security-jpa,jdbc-postgresql,rest,hibernate-orm-panache' --no-code cd security-jpa-quickstart",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create -DprojectGroupId=org.acme -DprojectArtifactId=security-jpa-quickstart -Dextensions='security-jpa,jdbc-postgresql,rest,hibernate-orm-panache' -DnoCode cd security-jpa-quickstart",
"quarkus extension add security-jpa",
"./mvnw quarkus:add-extension -Dextensions='security-jpa'",
"./gradlew addExtension --extensions='security-jpa'",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-security-jpa</artifactId> </dependency>",
"implementation(\"io.quarkus:quarkus-security-jpa\")",
"package org.acme.security.jpa; import jakarta.annotation.security.PermitAll; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/api/public\") public class PublicResource { @GET @PermitAll @Produces(MediaType.TEXT_PLAIN) public String publicResource() { return \"public\"; } }",
"package org.acme.security.jpa; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/api/admin\") public class AdminResource { @GET @RolesAllowed(\"admin\") @Produces(MediaType.TEXT_PLAIN) public String adminResource() { return \"admin\"; } }",
"package org.acme.security.jpa; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.SecurityContext; @Path(\"/api/users\") public class UserResource { @GET @RolesAllowed(\"user\") @Path(\"/me\") public String me(@Context SecurityContext securityContext) { return securityContext.getUserPrincipal().getName(); } }",
"package org.acme.security.jpa; import jakarta.persistence.Entity; import jakarta.persistence.Table; import io.quarkus.hibernate.orm.panache.PanacheEntity; import io.quarkus.elytron.security.common.BcryptUtil; import io.quarkus.security.jpa.Password; import io.quarkus.security.jpa.Roles; import io.quarkus.security.jpa.UserDefinition; import io.quarkus.security.jpa.Username; @Entity @Table(name = \"test_user\") @UserDefinition 1 public class User extends PanacheEntity { @Username 2 public String username; @Password 3 public String password; @Roles 4 public String role; /** * Adds a new user to the database * @param username the username * @param password the unencrypted password (it is encrypted with bcrypt) * @param role the comma-separated roles */ public static void add(String username, String password, String role) { 5 User user = new User(); user.username = username; user.password = BcryptUtil.bcryptHash(password); user.role = role; user.persist(); } }",
"quarkus.http.auth.basic=true quarkus.datasource.db-kind=postgresql quarkus.datasource.username=quarkus quarkus.datasource.password=quarkus quarkus.datasource.jdbc.url=jdbc:postgresql:security_jpa quarkus.hibernate-orm.database.generation=drop-and-create",
"package org.acme.security.jpa; import jakarta.enterprise.event.Observes; import jakarta.inject.Singleton; import jakarta.transaction.Transactional; import io.quarkus.runtime.StartupEvent; @Singleton public class Startup { @Transactional public void loadUsers(@Observes StartupEvent evt) { // reset and load all test users User.deleteAll(); User.add(\"admin\", \"admin\", \"admin\"); User.add(\"user\", \"user\", \"user\"); } }",
"<dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency>",
"testImplementation(\"io.rest-assured:rest-assured\")",
"quarkus dev",
"./mvnw quarkus:dev",
"./gradlew --console=plain quarkusDev",
"%prod.quarkus.datasource.db-kind=postgresql %prod.quarkus.datasource.username=quarkus %prod.quarkus.datasource.password=quarkus %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://localhost/quarkus quarkus.hibernate-orm.database.generation=drop-and-create",
"package org.acme.security.jpa; import static io.restassured.RestAssured.get; import static io.restassured.RestAssured.given; import static org.hamcrest.core.Is.is; import org.apache.http.HttpStatus; import org.junit.jupiter.api.Test; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class JpaSecurityRealmTest { @Test void shouldAccessPublicWhenAnonymous() { get(\"/api/public\") .then() .statusCode(HttpStatus.SC_OK); } @Test void shouldNotAccessAdminWhenAnonymous() { get(\"/api/admin\") .then() .statusCode(HttpStatus.SC_UNAUTHORIZED); } @Test void shouldAccessAdminWhenAdminAuthenticated() { given() .auth().preemptive().basic(\"admin\", \"admin\") .when() .get(\"/api/admin\") .then() .statusCode(HttpStatus.SC_OK); } @Test void shouldNotAccessUserWhenAdminAuthenticated() { given() .auth().preemptive().basic(\"admin\", \"admin\") .when() .get(\"/api/users/me\") .then() .statusCode(HttpStatus.SC_FORBIDDEN); } @Test void shouldAccessUserAndGetIdentityWhenUserAuthenticated() { given() .auth().preemptive().basic(\"user\", \"user\") .when() .get(\"/api/users/me\") .then() .statusCode(HttpStatus.SC_OK) .body(is(\"user\")); } }",
"docker run --rm=true --name security-getting-started -e POSTGRES_USER=quarkus -e POSTGRES_PASSWORD=quarkus -e POSTGRES_DB=quarkus -p 5432:5432 postgres:14.1",
"quarkus build",
"./mvnw install",
"./gradlew build",
"java -jar target/quarkus-app/quarkus-run.jar",
"quarkus build --native",
"./mvnw install -Dnative",
"./gradlew build -Dquarkus.native.enabled=true",
"./target/security-jpa-quickstart-1.0.0-SNAPSHOT-runner",
"curl -i -X GET http://localhost:8080/api/public HTTP/1.1 200 OK Content-Length: 6 Content-Type: text/plain;charset=UTF-8 public",
"curl -i -X GET http://localhost:8080/api/admin HTTP/1.1 401 Unauthorized Content-Length: 14 Content-Type: text/html;charset=UTF-8 WWW-Authenticate: Basic Not authorized",
"curl -i -X GET -u admin:admin http://localhost:8080/api/admin HTTP/1.1 200 OK Content-Length: 5 Content-Type: text/plain;charset=UTF-8 admin",
"curl -i -X GET -u admin:admin http://localhost:8080/api/users/me HTTP/1.1 403 Forbidden Content-Length: 34 Content-Type: text/html;charset=UTF-8 Forbidden",
"curl -i -X GET -u user:user http://localhost:8080/api/users/me HTTP/1.1 200 OK Content-Length: 4 Content-Type: text/plain;charset=UTF-8 user"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/getting_started_with_security/security-getting-started-tutorial |
API overview | API overview OpenShift Container Platform 4.18 Overview content for the OpenShift Container Platform API Red Hat OpenShift Documentation Team | [
"oc debug node/<node>",
"chroot /host",
"systemctl cat kubelet",
"/etc/systemd/system/kubelet.service.d/20-logging.conf [Service] Environment=\"KUBELET_LOG_LEVEL=2\"",
"echo -e \"[Service]\\nEnvironment=\\\"KUBELET_LOG_LEVEL=8\\\"\" > /etc/systemd/system/kubelet.service.d/30-logging.conf",
"systemctl daemon-reload",
"systemctl restart kubelet",
"rm -f /etc/systemd/system/kubelet.service.d/30-logging.conf",
"systemctl daemon-reload",
"systemctl restart kubelet",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-master-kubelet-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - name: kubelet.service enabled: true dropins: - name: 30-logging.conf contents: | [Service] Environment=\"KUBELET_LOG_LEVEL=2\"",
"oc adm node-logs --role master -u kubelet",
"oc adm node-logs --role worker -u kubelet",
"journalctl -b -f -u kubelet.service",
"sudo tail -f /var/log/containers/*",
"- for n in USD(oc get node --no-headers | awk '{print USD1}'); do oc adm node-logs USDn | gzip > USDn.log.gz; done"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/api_overview/index |
8.9. Metadata Procedures | 8.9. Metadata Procedures SYSADMIN.setTableStats Set statistics for the given table. SYSADMIN.setTableStats(TableName in string, Cardinality in integer) SYSADMIN.setColumnStats Set statistics for the given column. SYSADMIN.setColumnStats(TableName in string, ColumnName in string, DistinctCount in integer, NullCount in integer, Max in string, Min in string) All stat values are nullable. Passing a null stat value will leave corresponding metadata value unchanged. SYSADMIN.setProperty Set an extension metadata property for the given record. Extension metadata is typically used by translators. SYSADMIN.setProperty(OldValue return clob, Uid in string, Name in string, Value in clob) Setting a value to null will remove the property. The use of this procedure will not trigger replanning of associated prepared plans. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/metadata_procedures |
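A hedged sketch of calling these procedures from a client follows. It assumes the Data Virtualization server has its PostgreSQL-compatible ODBC transport enabled (port 35432 by default) so that a Python DB-API driver such as psycopg2 can connect; the host, VDB, credentials, table, column, and UID values are illustrative placeholders rather than values defined in this reference.

import psycopg2

# Connect to the VDB over the pg-compatible ODBC transport (placeholder connection details)
conn = psycopg2.connect(host="dv.example.com", port=35432,
                        dbname="MyVDB", user="teiidUser", password="secret")
cur = conn.cursor()

# Set the table cardinality used by the optimizer
cur.execute("EXEC SYSADMIN.setTableStats(tableName => 'Accounts.Customer', cardinality => 250000)")

# Set column statistics; a null stat value would leave the corresponding metadata value unchanged
cur.execute("EXEC SYSADMIN.setColumnStats(tableName => 'Accounts.Customer', columnName => 'id', "
            "distinctCount => 250000, nullCount => 0, max => '999999', min => '1')")

# Attach an extension metadata property to a record identified by its UID (placeholder UID and property name)
cur.execute("EXEC SYSADMIN.setProperty(uid => 'mmuid:abc-123', name => 'custom:note', value => 'reviewed')")

conn.commit()
cur.close()
conn.close()

As noted above, setting a property value to null removes it, and none of these calls triggers replanning of associated prepared plans.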
Release Notes for AMQ Streams 2.5 on RHEL | Release Notes for AMQ Streams 2.5 on RHEL Red Hat Streams for Apache Kafka 2.5 Highlights of what's new and what's changed with this release of AMQ Streams on Red Hat Enterprise Linux | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_rhel/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_microsoft_azure/making-open-source-more-inclusive |
20.27. Host Machine Management | 20.27. Host Machine Management This section contains the commands needed for managing the host system (referred to as a node by the commands). 20.27.1. Displaying Host Information The virsh nodeinfo command displays basic information about the host, including the model number, number of CPUs, type of CPU, and size of the physical memory. The output corresponds to the virNodeInfo structure. Specifically, the "CPU socket(s)" field indicates the number of CPU sockets per NUMA cell. Example 20.54. How to display information about your host machine The following example retrieves information about your host: 20.27.2. Setting NUMA Parameters The virsh numatune command can either set or retrieve the NUMA parameters for a specified guest virtual machine. Within the guest virtual machine's configuration XML file these parameters are nested within the <numatune> element. Without using flags, only the current settings are displayed. The numatune domain command requires a specified guest virtual machine name and can take the following arguments: --mode - The mode can be set to either strict , interleave , or preferred . Running domains cannot have their mode changed while live unless the guest virtual machine was started within strict mode. --nodeset contains a list of NUMA nodes that are used by the host physical machine for running the guest virtual machine. The list contains nodes, each separated by a comma, with a dash - used for node ranges and a caret ^ used for excluding a node. Only one of the three following flags can be used per instance: --config will affect the next boot of a persistent guest virtual machine --live will set the scheduler information of a running guest virtual machine. --current will affect the current state of the guest virtual machine. Example 20.55. How to set the NUMA parameters for the guest virtual machine The following example sets the NUMA mode to strict for nodes 0, 2, and 3 for the running guest1 virtual machine: # virsh numatune guest1 --mode strict --nodeset 0,2-3 --live Running this command will change the running configuration for guest1 to the following configuration in its XML file. 20.27.3. Displaying the Amount of Free Memory in a NUMA Cell The virsh freecell command displays the available amount of memory on the machine within a specified NUMA cell. This command can provide one of three different displays of available memory on the machine depending on the options specified. Example 20.56. How to display memory properties for virtual machines and NUMA cells The following command displays the total amount of available memory in all cells: To display also the amount of available memory in individual cells, use the --all option: To display the amount of individual memory in a specific cell, use the --cellno option: 20.27.4. Displaying a CPU List The virsh nodecpumap command displays the number of CPUs that are available to the host machine, and it also lists how many are currently online. Example 20.57. How to display the number of CPUs that are available to the host The following example displays the number of CPUs available to the host: 20.27.5. Displaying CPU Statistics The virsh nodecpustats [ cpu_number ] [--percent] command displays statistical information about the CPU load status of the host. If a CPU is specified, the statistics are only for the specified CPU. If the percent option is specified, the command displays the percentage of each type of CPU statistics that were recorded over a one (1) second interval.
Example 20.58. How to display statistical information about CPU usage The following example returns general statistics about the host CPUs load: This example displays the statistics for CPU number 2 as percentages: 20.27.6. Managing Devices 20.27.6.1. Attaching and updating a device with virsh For information on attaching storage devices, see Section 13.3.6, "Adding Storage Devices to Guests" . Procedure 20.4. Hot plugging USB devices for use by the guest virtual machine USB devices can be attached to a running virtual machine by hot plugging, or attached while the guest is shut off. The device you want to use in the guest must be attached to the host machine. Locate the USB device you want to attach by running the following command: Create an XML file and give it a logical name ( usb_device.xml , for example). Copy the vendor and product ID number (a hexadecimal number) exactly as was displayed in your search. Add this information to the XML file as shown in Figure 20.2, "USB devices XML snippet" . Remember the name of this file as you will need it in the next step. <hostdev mode='subsystem' type='usb' managed='yes'> <source> <vendor id='0x17ef'/> <product id='0x480f'/> </source> </hostdev> Figure 20.2. USB devices XML snippet Attach the device by running the following command. When you run the command, replace guest1 with the name of your virtual machine and usb_device.xml with the name of your XML file that contains the vendor and product ID of your device, which you created in the previous step. For the change to take effect at the next reboot, use the --config argument. For the change to take effect on the current guest virtual machine, use the --current argument. See the virsh man page for additional arguments. # virsh attach-device guest1 --file usb_device.xml --config Example 20.59. How to hot unplug devices from a guest virtual machine The following example detaches the USB device configured with the usb_device.xml file from the guest1 virtual machine: # virsh detach-device guest1 --file usb_device.xml 20.27.6.2. Attaching interface devices The virsh attach-interface domain type source [<target>] [<mac>] [<script>] [<model>] [<inbound>] [<outbound>] [--config] [--live] [--current] command can take the following arguments: --type - allows you to set the interface type --source - allows you to set the source of the network interface --live - gets its value from running guest virtual machine configuration settings --config - takes effect at next boot --current - gets its value according to the current configuration settings --target - indicates the target device in the guest virtual machine. --mac - use this option to specify the MAC address of the network interface --script - use this option to specify a path to a script file handling a bridge instead of the default one. --model - use this option to specify the model type. --inbound - controls the inbound bandwidth of the interface. Acceptable values are average , peak , and burst . --outbound - controls the outbound bandwidth of the interface. Acceptable values are average , peak , and burst . Note Values for average and peak are expressed in kilobytes per second, while burst is expressed in kilobytes in a single burst at peak speed as described in the Network XML upstream documentation . The type can be either network to indicate a physical network device, or bridge to indicate a bridge to a device. source is the source of the device. To remove the attached device, use the virsh detach-device command. Example 20.60.
How to attach a device to the guest virtual machine The following example attaches the networkw network device to the guest1 virtual machine. The interface model is going to be presented to the guest as virtio : # virsh attach-interface guest1 networkw --model virtio 20.27.6.3. Changing the media of a CDROM The virsh change-media command changes the media of a CDROM to another source or format. The command takes the following arguments. More examples and explanation for these arguments can also be found in the man page. --path - A string containing a fully-qualified path or target of disk device --source - A string containing the source of the media --eject - Ejects the media --insert - Inserts the media --update - Updates the media --current - Can be either or both of --live and --config , which depends on implementation of hypervisor driver --live - Alters the live configuration of running guest virtual machine --config - Alters the persistent configuration, effect observed on boot --force - Forces media to change 20.27.7. Setting and Displaying the Node Memory Parameters The virsh node-memory-tune [shm-pages-to-scan] [shm-sleep-milisecs] [shm-merge-across-nodes] command displays and allows you to set the node memory parameters. The following parameters may be set with this command: --shm-pages-to-scan - sets the number of pages to scan before the kernel samepage merging (KSM) service goes to sleep. --shm-sleep-milisecs - sets the number of miliseconds that KSM will sleep before the scan --shm-merge-across-nodes - specifies if pages from different NUMA nodes can be merged Example 20.61. How to merge memory pages across NUMA nodes The following example merges all of the memory pages from all of the NUMA nodes: # virsh node-memory-tune --shm-merge-across-nodes 1 20.27.8. Listing Devices on a Host The virsh nodedev-list --cap --tree command lists all the devices available on the host that are known to the libvirt service. --cap is used to filter the list by capability types, each separated by a comma, and cannot be used with --tree . Using the argument --tree , puts the output into a tree structure. Example 20.62. How to display the devices available on a host The following example lists devices that are available on a host in a tree format. Note that the list has been truncated: This example lists SCSI devices available on a host: 20.27.9. Creating Devices on Host Machines The virsh nodedev-create file command allows you to create a device on a host physical machine and then assign it to a guest virtual machine. Although libvirt automatically detects which host nodes are available for use, this command allows you to register hardware that libvirt did not detect. The specified file should contain the XML description for the top level <device> description of the host device. For an example of such file, see Example 20.65, "How to retrieve the XML file for a device" . Example 20.63. How to create a device from an XML file In this example, you have already created an XML file for your PCI device and have saved it as scsi_host2.xml . The following command enables you to attach this device to your guests: # virsh nodedev-create scsi_host2.xml 20.27.10. Removing a Device The virsh nodedev-destroy command removes the device from the host. Note that the virsh node device driver does not support persistent configurations, so rebooting the host machine makes the device usable again. Also note that different assignments expect the device to be bound to different back-end driver (vfio, kvm). 
Using the --driver argument allows you to specify the intended back-end driver. Example 20.64. How to remove a device from a host physical machine The following example removes a SCSI device named scsi_host2 from the host machine: # virsh nodedev-destroy scsi_host2 20.27.11. Collect Device Configuration Settings The virsh nodedev-dumpxml device command outputs the XML representation for the specified host device, including information such as the device name, the bus to which the device is connected, the vendor, product ID, capabilities, as well as any information usable by libvirt . The argument device can either be a device name or WWN pair in WWNN, WWPN format (HBA only). Example 20.65. How to retrieve the XML file for a device The following example retrieves the XML file for a SCSI device identified as scsi_host2 . The name was obtained by using the virsh nodedev-list command: # virsh nodedev-dumpxml scsi_host2 <device> <name>scsi_host2</name> <parent>scsi_host1</parent> <capability type='scsi_host'> <capability type='fc_host'> <wwnn>2001001b32a9da5b</wwnn> <wwpn>2101001b32a9da5b</wwpn> </capability> </capability> </device> 20.27.12. Triggering a Reset for a Device The virsh nodedev-reset device command triggers a device reset for the specified device. Running this command is useful prior to transferring a node device between guest virtual machine pass through or the host physical machine. libvirt will do this action automatically, when required, but this command allows an explicit reset when needed. Example 20.66. How to reset a device on a guest virtual machine The following example resets the device on the guest virtual machine named scsi_host2 : # virsh nodedev-reset scsi_host2 | [
"virsh nodeinfo CPU model: x86_64 CPU(s): 4 CPU frequency: 1199 MHz CPU socket(s): 1 Core(s) per socket: 2 Thread(s) per core: 2 NUMA cell(s): 1 Memory size: 3715908 KiB",
"<numatune> <memory mode='strict' nodeset='0,2-3'/> </numatune>",
"virsh freecell Total: 684096 KiB",
"virsh freecell --all 0: 804676 KiB -------------------- Total: 804676 KiB",
"virsh freecell --cellno 0 0: 772496 KiB",
"virsh nodecpumap CPUs present: 4 CPUs online: 1 CPU map: y",
"virsh nodecpustats user: 1056442260000000 system: 401675280000000 idle: 7549613380000000 iowait: 94593570000000",
"virsh nodecpustats 2 --percent usage: 2.0% user: 1.0% system: 1.0% idle: 98.0% iowait: 0.0%",
"lsusb -v idVendor 0x17ef Lenovo idProduct 0x480f Integrated Webcam [R5U877]",
"<hostdev mode='subsystem' type='usb' managed='yes'> <source> <vendor id='0x17ef'/> <product id='0x480f'/> </source> </hostdev>",
"virsh nodedev-list --tree computer | +- net_lo_00_00_00_00_00_00 +- net_macvtap0_52_54_00_12_fe_50 +- net_tun0 +- net_virbr0_nic_52_54_00_03_7d_cb +- pci_0000_00_00_0 +- pci_0000_00_02_0 +- pci_0000_00_16_0 +- pci_0000_00_19_0 | | | +- net_eth0_f0_de_f1_3a_35_4f [...]",
"virsh nodedev-list --cap scsi scsi_0_0_0_0"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-NUMA_node_management |
Chapter 7. Registering the System and Managing Subscriptions | Chapter 7. Registering the System and Managing Subscriptions The subscription service provides a mechanism to handle Red Hat software inventory and allows you to install additional software or update already installed programs to newer versions using the yum package manager. In Red Hat Enterprise Linux 7 the recommended way to register your system and attach subscriptions is to use Red Hat Subscription Management . Note It is also possible to register the system and attach subscriptions after installation during the initial setup process. For detailed information about the initial setup see the Initial Setup chapter in the Installation Guide for Red Hat Enterprise Linux 7. Note that the Initial Setup application is only available on systems installed with the X Window System at the time of installation. 7.1. Registering the System and Attaching Subscriptions Complete the following steps to register your system and attach one or more subscriptions using Red Hat Subscription Management. Note that all subscription-manager commands are supposed to be run as root . Run the following command to register your system. You will be prompted to enter your user name and password. Note that the user name and password are the same as your login credentials for Red Hat Customer Portal. Determine the pool ID of a subscription that you require. To do so, type the following at a shell prompt to display a list of all subscriptions that are available for your system: For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to your subscription. To list subscriptions for all architectures, add the --all option. The pool ID is listed on a line beginning with Pool ID . Attach the appropriate subscription to your system by entering a command as follows: Replace pool_id with the pool ID you determined in the step. To verify the list of subscriptions your system has currently attached, at any time, run: For more details on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see the designated solution article . For comprehensive information about subscriptions, see the Red Hat Subscription Management collection of guides. 7.2. Managing Software Repositories When a system is subscribed to the Red Hat Content Delivery Network, a repository file is created in the /etc/yum.repos.d/ directory. To verify that, use yum to list all enabled repositories: Red Hat Subscription Management also allows you to manually enable or disable software repositories provided by Red Hat. To list all available repositories, use the following command: The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Where version is the Red Hat Enterprise Linux system version ( 6 or 7 ), and variant is the Red Hat Enterprise Linux system variant ( server or workstation ), for example: To enable a repository, enter a command as follows: Replace repository with the name of the repository to enable. Similarly, to disable a repository, use the following command: Section 9.5, "Configuring Yum and Yum Repositories" provides detailed information about managing software repositories using yum . If you want to update the repositories automatically, you can use the yum-cron service. For more information, see Section 9.7, "Automatically Refreshing Package Database and Downloading Updates with Yum-cron" . 7.3. 
Removing Subscriptions To remove a particular subscription, complete the following steps. Determine the serial number of the subscription you want to remove by listing information about already attached subscriptions: The serial number is the number listed as serial . For instance, 744993814251016831 in the example below: Enter a command as follows to remove the selected subscription: Replace serial_number with the serial number you determined in the step. To remove all subscriptions attached to the system, run the following command: 7.4. Additional Resources For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see the resources listed below. Installed Documentation subscription-manager (8) - the manual page for Red Hat Subscription Management provides a complete list of supported options and commands. Related Books Red Hat Subscription Management collection of guides - These guides contain detailed information how to use Red Hat Subscription Management. Installation Guide - see the Initial Setup chapter for detailed information on how to register during the initial setup process. See Also Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Chapter 9, Yum provides information about using the yum packages manager to install and update software. | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool=pool_id",
"subscription-manager list --consumed",
"repolist",
"subscription-manager repos --list",
"rhel- version - variant -rpms rhel- version - variant -debug-rpms rhel- version - variant -source-rpms",
"rhel- 7 - server -rpms rhel- 7 - server -debug-rpms rhel- 7 - server -source-rpms",
"subscription-manager repos --enable repository",
"subscription-manager repos --disable repository",
"subscription-manager list --consumed",
"SKU: ES0113909 Contract: 01234567 Account: 1234567 Serial: 744993814251016831 Pool ID: 8a85f9894bba16dc014bccdd905a5e23 Active: False Quantity Used: 1 Service Level: SELF-SUPPORT Service Type: L1-L3 Status Details: Subscription Type: Standard Starts: 02/27/2015 Ends: 02/27/2016 System Type: Virtual",
"subscription-manager remove --serial=serial_number",
"subscription-manager remove --all"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/chap-subscription_and_support-registering_a_system_and_managing_subscriptions |
Chapter 4. Technology previews | Chapter 4. Technology previews This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https: 4.1. Block Devices (RBD) Mapping RBD images to NBD images The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) and enables Ceph clients to access volumes and images in Kubernetes environments. To use rbd-nbd , install the rbd-nbd package. For details, see the rbd-nbd(7) manual page. 4.2. Object Gateway Object Gateway archive site With this release an archive site is supported as a Technology Preview. The archive site allows you to have a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. Including an archive zone in a multizone configuration allows you to have the flexibility of an S3 object history in only one zone while saving the space that the replicas of the versions S3 objects would consume in the rest of the zones. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/release_notes/technology-previews |
Chapter 15. Using the logging system role | Chapter 15. Using the logging system role As a system administrator, you can use the logging system role to configure a Red Hat Enterprise Linux host as a logging server to collect logs from many client systems. 15.1. Filtering local log messages by using the logging RHEL system role You can use the property-based filter of the logging RHEL system role to filter your local log messages based on various conditions. As a result, you can achieve for example: Log clarity: In a high-traffic environment, logs can grow rapidly. The focus on specific messages, like errors, can help to identify problems faster. Optimized system performance: Excessive amount of logs is usually connected with system performance degradation. Selective logging for only the important events can prevent resource depletion, which enables your systems to run more efficiently. Enhanced security: Efficient filtering through security messages, like system errors and failed logins, helps to capture only the relevant logs. This is important for detecting breaches and meeting compliance standards. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: "!contains" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1] The settings specified in the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: basics option covers inputs from systemd journal or Unix socket. logging_outputs Defines a list of logging output dictionaries. The type: files option supports storing logs in the local files, usually in the /var/log/ directory. The property: msg ; property: contains ; and property_value: error options specify that all logs that contain the error string are stored in the /var/log/errors.log file. The property: msg ; property: !contains ; and property_value: error options specify that all other logs are put in the /var/log/others.log file. You can replace the error value with the string by which you want to filter. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [files_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [files_output0, files_output1] option specifies a list of outputs, to which the logs are sent. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
Run the playbook: Verification On the managed node, test the syntax of the /etc/rsyslog.conf file: On the managed node, verify that the system sends messages that contain the error string to the log: Send a test message: View the /var/log/errors.log log, for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) man pages on your system 15.2. Applying a remote logging solution by using the logging RHEL system role You can use the logging RHEL system role to configure a remote logging solution, where one or more clients take logs from the systemd-journal service and forward them to a remote server. The server receives remote input through the remote_rsyslog and remote_files configurations, and outputs the logs to local files in directories named after the remote host names. As a result, you can cover use cases where you need, for example: Centralized log management: Collecting, accessing, and managing log messages of multiple machines from a single storage point simplifies day-to-day monitoring and troubleshooting tasks. This use case also reduces the need to log in to individual machines to check the log messages. Enhanced security: Storing log messages in one central place increases the chances that they are kept in a secure and tamper-proof environment. Such an environment makes it easier to detect and respond to security incidents and to meet audit requirements. Improved efficiency in log analysis: Correlating log messages from multiple systems is important for fast troubleshooting of complex problems that span multiple machines or services. That way you can quickly analyze and cross-reference events from different sources. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Define the ports in the SELinux policy of the server or client system and open the firewall for those ports. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, see how to modify the SELinux policy on the client and server systems; a sketch of the typical commands follows these prerequisites.
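The following sketch shows one way to allow a non-default rsyslog port through SELinux and firewalld. The port number 30514 is only an example and is not required by the role; the semanage command is provided by the policycoreutils-python-utils package.
# Label a custom TCP port for rsyslog in the SELinux policy
semanage port -a -t syslogd_port_t -p tcp 30514
# Open the same port in firewalld and apply the change
firewall-cmd --permanent --add-port=30514/tcp
firewall-cmd --reload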
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] The settings specified in the first play of the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: remote option covers remote inputs from the other logging system over the network. The udp_ports: [ 601 ] option defines a list of UDP port numbers to monitor. The tcp_ports: [ 601 ] option defines a list of TCP port numbers to monitor. If both udp_ports and tcp_ports are set, udp_ports is used and tcp_ports is dropped. logging_outputs Defines a list of logging output dictionaries. The type: remote_files option stores logs in local files, separated per remote host and the program name that originated the logs. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [remote_udp_input, remote_tcp_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [remote_files_output] option specifies a list of outputs, to which the logs are sent. The settings specified in the second play of the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: basics option covers inputs from systemd journal or Unix socket. logging_outputs Defines a list of logging output dictionaries. The type: forwards option supports sending logs to the remote logging server over the network. The severity: info option refers to log messages of informative importance. The facility: mail option refers to the type of system program that is generating the log message. The target: <host1.example.com> option specifies the hostname of the remote logging server. The udp_port: 601 / tcp_port: 601 options define the UDP/TCP ports on which the remote logging server listens. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [basic_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [forward_output0, forward_output1] option specifies a list of outputs, to which the logs are sent. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
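Before validating and running the playbook, you can optionally confirm that the control node can reach both managed nodes. This is a sketch only; it assumes your Ansible inventory file is named inventory and lists both hosts.
# Ad-hoc connectivity check from the control node
ansible all -i inventory -m ping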
Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On both the client and the server system, test the syntax of the /etc/rsyslog.conf file: Verify that the client system sends messages to the server: On the client system, send a test message: On the server system, view the /var/log/ <host2.example.com> /messages log, for example: Where <host2.example.com> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 15.3. Using the logging RHEL system role with TLS Transport Layer Security (TLS) is a cryptographic protocol designed to allow secure communication over the computer network. You can use the logging RHEL system role to configure a secure transfer of log messages, where one or more clients take logs from the systemd-journal service and transfer them to a remote server while using TLS. Typically, TLS for transferring logs in a remote logging solution is used when sending sensitive data over less trusted or public networks, such as the Internet. Also, by using certificates in TLS you can ensure that the client is forwarding logs to the correct and trusted server. This prevents attacks like "man-in-the-middle". 15.3.1. Configuring client logging with TLS You can use the logging RHEL system role to configure logging on RHEL clients and transfer logs to a remote logging system using TLS encryption. This procedure creates a private key and a certificate. Next, it configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network. Note You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically when the logging_certificates variable is set. In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes are enrolled in an IdM domain. If the logging server you want to configure on the managed node runs RHEL 9.2 or later and FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced . A quick way to check these settings on the server is sketched below.
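The following sketch inspects the FIPS status and the active system-wide cryptographic policy on the logging server. These are optional diagnostic commands related to the prerequisite above, not a required step of the procedure.
# Report whether FIPS mode is enabled on this host
fips-mode-setup --check
# Show the active crypto policy, for example DEFAULT or FIPS
update-crypto-policies --show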
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name] The settings specified in the example playbook include the following: logging_certificates The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate. logging_pki_files Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert , ca_cert_src , cert , cert_src , private_key , private_key_src , and tls . Note If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src , cert_src , and private_key_src , which are used to copy files not created by logging_certificates . ca_cert Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert . Do not use this if using logging_certificates . cert_src Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert . Do not use this if using logging_certificates . private_key_src Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key . Do not use this if using logging_certificates . tls Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file /usr/share/doc/rhel-system-roles/certificate/ directory Requesting certificates using RHEL system roles . 
rsyslog.conf(5) and syslog(3) manual pages 15.3.2. Configuring server logging with TLS You can use the logging RHEL system role to configure logging on RHEL servers and set them to receive logs from a remote logging system using TLS encryption. This procedure creates a private key and a certificate. , it configures TLS on all hosts in the server group in the Ansible inventory. Note You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically. In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes are enrolled in an IdM domain. If the logging server you want to configure on the manage node runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced . Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name] The settings specified in the example playbook include the following: logging_certificates The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate. logging_pki_files Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert , ca_cert_src , cert , cert_src , private_key , private_key_src , and tls . Note If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src , cert_src , and private_key_src , which are used to copy files not created by logging_certificates . ca_cert Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. 
ca_cert_src Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert . Do not use this if using logging_certificates . cert_src Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert . Do not use this if using logging_certificates . private_key_src Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key . Do not use this if using logging_certificates . tls Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory Requesting certificates using RHEL system roles . rsyslog.conf(5) and syslog(3) manual pages 15.4. Using the logging RHEL system roles with RELP Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss. The RELP sender transfers log entries in the form of commands and the receiver acknowledges them once they are processed. To ensure consistency, RELP stores the transaction number to each transferred command for any kind of message recovery. You can consider a remote logging system in between the RELP Client and RELP Server. The RELP Client transfers the logs to the remote logging system and the RELP Server receives all the logs sent by the remote logging system. To achieve that use case, you can use the logging RHEL system role to configure the logging system to reliably send and receive log entries. 15.4.1. Configuring client logging with RELP You can use the logging RHEL system role to configure a transfer of log messages stored locally to the remote logging system with RELP. This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
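As an optional sanity check before or after running the RELP playbooks, you can confirm that the rsyslog RELP support packages are present on the client and the server. The logging role normally installs what it needs, so treat this as a diagnostic sketch only.
# Check that rsyslog and its RELP support packages are installed
rpm -q rsyslog rsyslog-relp librelp
# Inspect the rsyslog drop-in configuration after the role has run; the exact file names generated by the role may differ
ls /etc/rsyslog.d/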
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client] The settings specified in the example playbook include the following: target This is a required parameter that specifies the host name where the remote logging system is running. port Port number the remote logging system is listening. tls Ensures secure transfer of logs over the network. If you do not want a secure wrapper you can set the tls variable to false . By default tls parameter is set to true while working with RELP and requires key/certificates and triplets { ca_cert , cert , private_key } and/or { ca_cert_src , cert_src , private_key_src }. If the { ca_cert_src , cert_src , private_key_src } triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from control node. In this case, the file names are identical to the original ones in the triplet If the { ca_cert , cert , private_key } triplet is set, files are expected to be on the default path before the logging configuration. If both triplets are set, files are transferred from local path from control node to specific path of the managed node. ca_cert Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to the location. cert_src Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to the location. private_key_src Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to the location. pki_authmode Accepts the authentication mode as name or fingerprint . permitted_servers List of servers that will be allowed by the logging client to connect and send logs over TLS. inputs List of logging input dictionary. outputs List of logging output dictionary. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 15.4.2. 
Configuring server logging with RELP You can use the logging RHEL system role to configure a server for receiving log messages from the remote logging system with RELP. This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output The settings specified in the example playbook include the following: port Port number the remote logging system is listening. tls Ensures secure transfer of logs over the network. If you do not want a secure wrapper you can set the tls variable to false . By default tls parameter is set to true while working with RELP and requires key/certificates and triplets { ca_cert , cert , private_key } and/or { ca_cert_src , cert_src , private_key_src }. If the { ca_cert_src , cert_src , private_key_src } triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from control node. In this case, the file names are identical to the original ones in the triplet If the { ca_cert , cert , private_key } triplet is set, files are expected to be on the default path before the logging configuration. If both triplets are set, files are transferred from local path from control node to specific path of the managed node. ca_cert Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to the location. cert_src Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to the location. private_key_src Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to the location. pki_authmode Accepts the authentication mode as name or fingerprint . permitted_clients List of clients that will be allowed by the logging server to connect and send logs over TLS. inputs List of logging input dictionary. outputs List of logging output dictionary. 
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 15.5. Additional resources Preparing a control node and managed nodes to use RHEL system roles Documentation installed with the rhel-system-roles package in /usr/share/ansible/roles/rhel-system-roles.logging/README.html . RHEL system roles ansible-playbook(1) man page on your system | [
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: \"!contains\" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run rsyslogd: End of config validation run. Bye.",
"logger error",
"cat /var/log/errors.log Aug 5 13:48:31 hostname root[6778]: error",
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. Bye.",
"logger test",
"cat /var/log/ <host2.example.com> /messages Aug 5 13:48:31 <host2.example.com> root[6778]: test",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/assembly_using-the-logging-system-role_security-hardening |
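After you apply the server-side playbooks from the logging chapter above, a quick way to confirm that the logging server is listening is sketched below. The port numbers match the examples in that chapter (601 for remote syslog input, 20514 for RELP); adjust them if you used different values.
# Confirm the rsyslog service is active
systemctl status rsyslog --no-pager
# Confirm the expected listeners are open
ss -tulnp | grep -E ':(601|20514)'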
Chapter 1. Introduction to the Red Hat Enterprise Linux software certification | Chapter 1. Introduction to the Red Hat Enterprise Linux software certification The Red Hat Enterprise Linux Software Certification Policy Guide describes the policy overview to certify third-party vendor products running on Red Hat Enterprise Linux (RHEL) 8 and 9. This guide is intended for partners who want to offer their products for use with RHEL in a jointly supported customer environment. A strong working knowledge of RHEL is required. 1.1. Certification and Partner validation Red Hat offers you the ability to certify or validate your products. Red Hat-certified products undergo thorough testing and are collaboratively supported with you. These products meet your standards and Red Hat's criteria, including functionality, interoperability, lifecycle management, security, and support requirements. Partner-validated products are tested and supported by you. Validation allows you to enable and publish your software offerings more quickly. However, by definition, validated workloads do not include the full thoroughness of Red Hat certification. We encourage you to continue efforts toward stabilization, upstream acceptance, Red Hat enablement, and Red Hat certification. Note The validation option is not available for all infrastructure software. Understanding the differences between certification and validation, along with the capabilities, limitations, and achievements of your products, is essential for you and your customers. 1.2. Support responsibilities Red Hat customers receive the best support experience when using components from our robust ecosystem of certified enterprise hardware, software, and cloud partners. Red Hat provides support for Red Hat-certified products and Red Hat software according to the Red Hat Service Level Agreement (SLA). If a certified or validated third-party component is involved in a customer issue, Red Hat collaborates with you to resolve it according to the Third party support policy . Red Hat does not stipulate customer support policies. However, we require your support in assisting customers with diagnosing and resolving issues related to the functionality, interoperability, lifecycle management, and security of your software in conjunction with ours. Being listed as certified or validated in the Red Hat Ecosystem Catalog indicates your commitment to supporting your products and providing reliable solutions for our joint customers, adhering to your policies with Red Hat products. 1.3. Certification prerequisites and process overview 1.3.1. Prerequisites To start your certification journey, you must: Join the Red Hat Partner Connect program. Accept the standard Partner Agreements along with the terms and conditions specific to containerized software. Enter basic information about your company and the products you need to certify. Common information includes a product overview and links to supporting collateral such as product documentation, datasheets, or other relevant resources. Test your product to verify that it behaves as intended on RHEL. Support RHEL as a platform for the product being certified and establish a support relationship with Red Hat. You can do this through the multi-vendor support network of TSANet , or through a custom support agreement. 1.3.2. Process overview The Red Hat certification and partner validation procedures are outlined below. See the Red Hat Software Certification Workflow Guide for details on how to complete each step listed below. 1.3.2.1. 
Certification procedure Complete the prerequisites On Red Hat Connect Create your product Create and associate components for each product component Complete the product listing checklist Complete the certification requirements for each component as appropriate Container Images On Red Hat Connect Complete the component certification checklist for each component Publish your components Publish your product 1.3.2.2. Validation procedure Complete the prerequisites On Red Hat Connect Create your product Complete your Product List details Create a Validation request Complete the product listing checklist Complete the validation checklist Fill in the questionnaire Wait for Red Hat to review and approve the questionnaire On Red Hat Connect Publish your product Additional resources General Program Guide for Partners . 1.4. Test suite lifecycle Use the latest version of the test suite to certify your products. After the release of a new test suite version, Red Hat accepts test results generated with the earlier version of the suite for a period of 90 days. During this period, the Red Hat certification team may require you to run the tests with the latest version of the suite if they consider that it is more suitable for your certification project. Additional resources Red Hat Certification test suite download link 1.5. Red Hat Enterprise Linux versions and architecture A Red Hat Enterprise Linux software certification is architecture-specific and does not carry over to any other architecture. You must certify your product on each version and architecture of RHEL that it supports. The following table shows the RHEL versions, processor architectures, and hypervisor software that you can combine in a certification: RHEL version: RHEL 8, RHEL 9. Architecture: x86_64, ppc64le, s390x, aarch64. Hypervisor: Kernel-based Virtual Machine (KVM), VMware, Hyper-V, Red Hat Virtualization (RHV). Red Hat grants RHEL software certifications on specific RHEL 8 or RHEL 9 minor versions. The certification is valid for subsequent minor releases of RHEL if you follow the compatibility guidelines documented in the Red Hat Enterprise Linux: Application Compatibility Guide. Red Hat recommends that partners retest their products with each new minor version of RHEL. Additional resources Red Hat Enterprise Linux 8: Application Compatibility Guide Red Hat Enterprise Linux 9: Application Compatibility Guide 1.6. Partner's product versions Red Hat grants RHEL software certifications to specific major releases of your product. You should run the certification tests on minor releases of the product to avoid functional regressions, but you do not need to certify the product again. You must certify subsequent major releases of the product either as a new version of the existing product or as a new product entry. It is your responsibility to decide which releases of your product are major and which releases are minor. Additional resources Red Hat Enterprise Linux 8: Application Compatibility Guide Red Hat Enterprise Linux 9: Application Compatibility Guide 1.7. Packaging format Products targeted for certification can use any packaging format provided it does not alter the RHEL platform in a way that impacts its support. Red Hat recommends that you use packaging formats compatible with the platform's native tools, such as containers and RPMs. Any components packaged as containers must follow the requirements established in Container image requirements . 1.8.
Publishing When you complete the Red Hat Enterprise Linux (RHEL) Certification or the partner validation workflow, Red Hat publishes an entry in the Red Hat Ecosystem Catalog . This includes a product entry and relevant information collected during the process. Products with certifications include the associated component data for containers. Products without any certifications do not include component information. Red Hat expects that a RHEL software certification remains listed in the catalog until the end of support life for the certified RHEL version. However, Red Hat reserves the right to remove a catalog entry. 1.9. Catalog entries Red Hat expects that a RHEL software certification remains listed in the catalog until the end of support life for the certified RHEL version. However, Red Hat reserves the right to remove a catalog entry. 1.10. Distribution of certified container images The Red Hat Container Certification program offers the following options for the distribution of certified container images: Red Hat Container Registry : Managed by Red Hat at no cost to partners. This option requires compliance with U.S. export control laws. For more information, see the Export compliance guide . Non-Red Hat Container Registry : For example, your own registry, or any public registry such as Quay.io and Docker.io . | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_enterprise_linux_software_certification_policy_guide/assembly_introduction_isv-pol-guide |
Chapter 69. JmxTransQueryTemplate schema reference | Chapter 69. JmxTransQueryTemplate schema reference Used in: JmxTransSpec Property Property type Description targetMBean string If using wildcards instead of a specific MBean then the data is gathered from multiple MBeans. Otherwise if specifying an MBean then data is gathered from that specified MBean. attributes string array Determine which attributes of the targeted MBean should be included. outputs string array List of the names of output definitions specified in the spec.kafka.jmxTrans.outputDefinitions that have defined where JMX metrics are pushed to, and in which data format. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-JmxTransQueryTemplate-reference |
Chapter 2. OpenShift CLI (oc) | Chapter 2. OpenShift CLI (oc) 2.1. Getting started with the OpenShift CLI 2.1.1. About the OpenShift CLI With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Container Platform projects from a terminal. The OpenShift CLI is ideal in the following situations: Working directly with project source code Scripting OpenShift Container Platform operations Managing projects while restricted by bandwidth resources and the web console is unavailable 2.1.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) either by downloading the binary or by using an RPM. 2.1.2.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2. Installing the OpenShift CLI by using the web console You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a web console. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. 
Download and install the new version of oc . 2.1.2.2.1. Installing the OpenShift CLI on Linux using the web console You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select appropriate oc binary for your Linux platform, and then click Download oc for Linux . Save the file. Unpack the archive. USD tar xvf <file> Move the oc binary to a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2.2. Installing the OpenShift CLI on Windows using the web console You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for Windows platform, and then click Download oc for Windows for x86_64 . Save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 2.1.2.2.3. Installing the OpenShift CLI on macOS using the web console You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for macOS platform, and then click Download oc for Mac for x86_64 . Note For macOS arm64, click Download oc for Mac for ARM 64 . Save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.3. Installing the OpenShift CLI by using an RPM For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI ( oc ) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account. Note It is not supported to install the OpenShift CLI ( oc ) as an RPM for Red Hat Enterprise Linux (RHEL) 9. You must install the OpenShift CLI for RHEL 9 by downloading the binary. Prerequisites Must have root or sudo privileges. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.13. # subscription-manager repos --enable="rhocp-4.13-for-rhel-8-x86_64-rpms" Install the openshift-clients package: # yum install openshift-clients After you install the CLI, it is available using the oc command: USD oc <command> 2.1.2.4. Installing the OpenShift CLI by using Homebrew For macOS, you can install the OpenShift CLI ( oc ) by using the Homebrew package manager. Prerequisites You must have Homebrew ( brew ) installed. Procedure Run the following command to install the openshift-cli package: USD brew install openshift-cli 2.1.3. Logging in to the OpenShift CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. 
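If you already have an API token, for example one copied from the web console with the Copy login command option, a non-interactive login looks like the following sketch. The token and server URL are placeholders, not real values.
# Log in with a token instead of the interactive prompts
oc login --token=sha256~<token> --server=https://api.<cluster_name>.example.com:6443
# Confirm which user you are logged in as
oc whoami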
Prerequisites You must have access to an OpenShift Container Platform cluster. The OpenShift CLI ( oc ) is installed. Note To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY , HTTPS_PROXY and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy. Authentication headers are sent only when using HTTPS transport. Procedure Enter the oc login command and pass in a user name: USD oc login -u user1 When prompted, enter the required information: Example output Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started. 1 Enter the OpenShift Container Platform server URL. 2 Enter whether to use insecure connections. 3 Enter the user's password. Note If you are logged in to the web console, you can generate an oc login command that includes your token and server information. You can use the command to log in to the OpenShift Container Platform CLI without the interactive prompts. To generate the command, select Copy login command from the username drop-down menu at the top right of the web console. You can now create a project or issue other commands for managing your cluster. 2.1.4. Logging in to the OpenShift CLI using a web browser You can log in to the OpenShift CLI ( oc ) with the help of a web browser to access and manage your cluster. This allows users to avoid inserting their access token into the command line. Warning Logging in to the CLI through the web browser runs a server on localhost with HTTP, not HTTPS; use with caution on multi-user workstations. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You must have a browser installed. Procedure Enter the oc login command with the --web flag: USD oc login <cluster_url> --web 1 1 Optionally, you can specify the server URL and callback port. For example, oc login <cluster_url> --web --callback-port 8280 localhost:8443 . The web browser opens automatically. If it does not, click the link in the command output. If you do not specify the OpenShift Container Platform server oc tries to open the web console of the cluster specified in the current oc configuration file. If no oc configuration exists, oc prompts interactively for the server URL. Example output Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session. If more than one identity provider is available, select your choice from the options provided. Enter your username and password into the corresponding browser fields. After you are logged in, the browser displays the text access token received successfully; please return to your terminal . Check the CLI for a login confirmation. Example output Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Note The web console defaults to the profile used in the session. 
To switch between Administrator and Developer profiles, log out of the OpenShift Container Platform web console and clear the cache. You can now create a project or issue other commands for managing your cluster. 2.1.5. Using the OpenShift CLI Review the following sections to learn how to complete common tasks using the CLI. 2.1.5.1. Creating a project Use the oc new-project command to create a new project. USD oc new-project my-project Example output Now using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.2. Creating a new app Use the oc new-app command to create a new application. USD oc new-app https://github.com/sclorg/cakephp-ex Example output --> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php" ... Run 'oc status' to view your app. 2.1.5.3. Viewing pods Use the oc get pods command to view the pods for the current project. Note When you run oc inside a pod and do not specify a namespace, the namespace of the pod is used by default. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none> 2.1.5.4. Viewing pod logs Use the oc logs command to view logs for a particular pod. USD oc logs cakephp-ex-1-deploy Example output --> Scaling cakephp-ex-1 to 1 --> Success 2.1.5.5. Viewing the current project Use the oc project command to view the current project. USD oc project Example output Using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.6. Viewing the status for the current project Use the oc status command to view information about the current project, such as services, deployments, and build configs. USD oc status Example output In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details. 2.1.5.7. Listing supported API resources Use the oc api-resources command to view the list of supported API resources on the server. USD oc api-resources Example output NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap ... 2.1.6. Getting help You can get help with CLI commands and OpenShift Container Platform resources in the following ways: Use oc help to get a list and description of all available CLI commands: Example: Get general help for the CLI USD oc help Example output OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application ... Use the --help flag to get help about a specific CLI command: Example: Get help for the oc create command USD oc create --help Example output Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags] ... 
Use the oc explain command to view the description and fields for a particular resource: Example: View documentation for the Pod resource USD oc explain pods Example output KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources ... 2.1.7. Logging out of the OpenShift CLI You can log out the OpenShift CLI to end your current session. Use the oc logout command. USD oc logout Example output Logged "user1" out on "https://openshift.example.com" This deletes the saved authentication token from the server and removes it from your configuration file. 2.2. Configuring the OpenShift CLI 2.2.1. Enabling tab completion You can enable tab completion for the Bash or Zsh shells. 2.2.1.1. Enabling tab completion for Bash After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. You must have the package bash-completion installed. Procedure Save the Bash completion code to a file: USD oc completion bash > oc_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp oc_bash_completion /etc/bash_completion.d/ You can also save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 2.2.1.2. Enabling tab completion for Zsh After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. Procedure To add tab completion for oc to your .zshrc file, run the following command: USD cat >>~/.zshrc<<EOF if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF Tab completion is enabled when you open a new terminal. 2.3. Usage of oc and kubectl commands The Kubernetes command-line interface (CLI), kubectl , can be used to run commands against a Kubernetes cluster. Because OpenShift Container Platform is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Container Platform , or you can gain extended functionality by using the oc binary. 2.3.1. The oc binary The oc binary offers the same capabilities as the kubectl binary, but it extends to natively support additional OpenShift Container Platform features, including: Full support for OpenShift Container Platform resources Resources such as DeploymentConfig , BuildConfig , Route , ImageStream , and ImageStreamTag objects are specific to OpenShift Container Platform distributions, and build upon standard Kubernetes primitives. Authentication The oc binary offers a built-in login command for authentication and lets you work with projects, which map Kubernetes namespaces to authenticated users. Read Understanding authentication for more information. 
Additional commands The additional command oc new-app , for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default. Important If you installed an earlier version of the oc binary, you cannot use it to complete all of the commands in OpenShift Container Platform 4.13 . If you want the latest features, you must download and install the latest version of the oc binary corresponding to your OpenShift Container Platform server version. Non-security API changes will involve, at minimum, two minor releases (4.1 to 4.2 to 4.3, for example) to allow older oc binaries to update. Using new capabilities might require newer oc binaries. A 4.3 server might have additional capabilities that a 4.2 oc binary cannot use and a 4.3 oc binary might have additional capabilities that are unsupported by a 4.2 server. Table 2.1. Compatibility Matrix (where N is a number greater than or equal to 1): An X.Y oc client used with an X.Y server is fully compatible. An X.Y oc client used with an X.Y+N server might not be able to access server features. An X.Y+N oc client used with an X.Y server might provide options and features that might not be compatible with the accessed server. 2.3.2. The kubectl binary The kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Container Platform users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Container Platform cluster. You can install the supported kubectl binary by following the steps to Install the OpenShift CLI . The kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM. For more information, see the kubectl documentation . 2.4. Managing CLI profiles A CLI configuration file allows you to configure different profiles, or contexts, for use with the CLI tools overview . A context consists of user authentication and OpenShift Container Platform server information associated with a nickname . 2.4.1. About switches between CLI profiles Contexts allow you to easily switch between multiple users across multiple OpenShift Container Platform servers, or clusters, when using CLI operations. Nicknames make managing CLI configurations easier by providing short-hand references to contexts, user credentials, and cluster details. After a user logs in with the oc CLI for the first time, OpenShift Container Platform creates a ~/.kube/config file if one does not already exist.
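For example, a first login similar to the following illustrative command creates that file; the openshift1.example.com:8443 server and the alice user are taken from the configuration example that follows, and the exact command line shown here is a sketch rather than a required form: USD oc login https://openshift1.example.com:8443 -u alice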
As more authentication and connection details are provided to the CLI, either automatically during an oc login operation or by manually configuring CLI profiles, the updated information is stored in the configuration file: CLI config file apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1.example.com:8443/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k 1 The clusters section defines connection details for OpenShift Container Platform clusters, including the address for their master server. In this example, one cluster is nicknamed openshift1.example.com:8443 and another is nicknamed openshift2.example.com:8443 . 2 This contexts section defines two contexts: one nicknamed alice-project/openshift1.example.com:8443/alice , using the alice-project project, openshift1.example.com:8443 cluster, and alice user, and another nicknamed joe-project/openshift1.example.com:8443/alice , using the joe-project project, openshift1.example.com:8443 cluster and alice user. 3 The current-context parameter shows that the joe-project/openshift1.example.com:8443/alice context is currently in use, allowing the alice user to work in the joe-project project on the openshift1.example.com:8443 cluster. 4 The users section defines user credentials. In this example, the user nickname alice/openshift1.example.com:8443 uses an access token. The CLI can support multiple configuration files which are loaded at runtime and merged together along with any override options specified from the command line. After you are logged in, you can use the oc status or oc project command to verify your current working environment: Verify the current working environment USD oc status Example output In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example. List the current project USD oc project Example output Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443". You can run the oc login command again and supply the required information during the interactive process to log in using any other combination of user credentials and cluster details. A context is constructed based on the supplied information if one does not already exist.
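For example, an additional login against the openshift2.example.com:8443 cluster defined in the configuration file above would add a context for that combination of cluster, user, and project. The following commands are an illustrative sketch; the bob user name is a placeholder, and oc config get-contexts simply lists the contexts that result: USD oc login https://openshift2.example.com:8443 -u bob USD oc config get-contexts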
If you are already logged in and want to switch to another project the current user already has access to, use the oc project command and enter the name of the project: USD oc project alice-project Example output Now using project "alice-project" on server "https://openshift1.example.com:8443". At any time, you can use the oc config view command to view your current CLI configuration, as seen in the output. Additional CLI configuration commands are also available for more advanced usage. Note If you have access to administrator credentials but are no longer logged in as the default system user system:admin , you can log back in as this user at any time as long as the credentials are still present in your CLI config file. The following command logs in and switches to the default project: USD oc login -u system:admin -n default 2.4.2. Manual configuration of CLI profiles Note This section covers more advanced usage of CLI configurations. In most situations, you can use the oc login and oc project commands to log in and switch between contexts and projects. If you want to manually configure your CLI config files, you can use the oc config command instead of directly modifying the files. The oc config command includes a number of helpful sub-commands for this purpose: Table 2.2. CLI configuration subcommands Subcommand Usage set-cluster Sets a cluster entry in the CLI config file. If the referenced cluster nickname already exists, the specified information is merged in. USD oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true] set-context Sets a context entry in the CLI config file. If the referenced context nickname already exists, the specified information is merged in. USD oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>] use-context Sets the current context using the specified context nickname. USD oc config use-context <context_nickname> set Sets an individual value in the CLI config file. USD oc config set <property_name> <property_value> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. The <property_value> is the new value being set. unset Unsets individual values in the CLI config file. USD oc config unset <property_name> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. view Displays the merged CLI configuration currently in use. USD oc config view Displays the result of the specified CLI config file. USD oc config view --config=<specific_filename> Example usage Log in as a user that uses an access token. 
This token is used by the alice user: USD oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 View the cluster entry automatically created: USD oc config view Example output apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1-example-com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 Update the current context to have users log in to the desired namespace: USD oc config set-context `oc config current-context` --namespace=<project_name> Examine the current context to confirm that the changes are implemented: USD oc whoami -c All subsequent CLI operations use the new context, unless otherwise specified by overriding CLI options or until the context is switched. 2.4.3. Load and merge rules The following rules describe how the CLI configuration is loaded and merged when you issue CLI operations: CLI config files are retrieved from your workstation, using the following hierarchy and merge rules: If the --config option is set, then only that file is loaded. The flag is set once and no merging takes place. If the USDKUBECONFIG environment variable is set, then it is used. The variable can be a list of paths, and if so the paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. Otherwise, the ~/.kube/config file is used and no merging takes place. The context to use is determined based on the first match in the following flow: The value of the --context option. The current-context value from the CLI config file. An empty value is allowed at this stage. The user and cluster to use are determined. At this point, you may or may not have a context; they are built based on the first match in the following flow, which is run once for the user and once for the cluster: The value of the --user option for the user name and the --cluster option for the cluster name. If the --context option is present, then use the context's value. An empty value is allowed at this stage. The actual cluster information to use is determined. At this point, you may or may not have cluster information. Each piece of the cluster information is built based on the first match in the following flow: The values of any of the following command line options: --server , --api-version , --certificate-authority , --insecure-skip-tls-verify . If cluster information and a value for the attribute are present, then use them. If you do not have a server location, then there is an error. The actual user information to use is determined. Users are built using the same rules as clusters, except that you can only have one authentication technique per user; conflicting techniques cause the operation to fail. Command line options take precedence over config file values. Valid command line options are: --auth-path , --client-certificate , --client-key , --token . For any information that is still missing, default values are used and prompts are given for additional information. 2.5.
Extending the OpenShift CLI with plugins You can write and install plugins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift Container Platform CLI. 2.5.1. Writing CLI plugins You can write a plugin for the OpenShift Container Platform CLI in any programming language or script that allows you to write command-line commands. Note that you can not use a plugin to overwrite an existing oc command. Procedure This procedure creates a simple Bash plugin that prints a message to the terminal when the oc foo command is issued. Create a file called oc-foo . When naming your plugin file, keep the following in mind: The file must begin with oc- or kubectl- to be recognized as a plugin. The file name determines the command that invokes the plugin. For example, a plugin with the file name oc-foo-bar can be invoked by a command of oc foo bar . You can also use underscores if you want the command to contain dashes. For example, a plugin with the file name oc-foo_bar can be invoked by a command of oc foo-bar . Add the following contents to the file. #!/bin/bash # optional argument handling if [[ "USD1" == "version" ]] then echo "1.0.0" exit 0 fi # optional argument handling if [[ "USD1" == "config" ]] then echo USDKUBECONFIG exit 0 fi echo "I am a plugin named kubectl-foo" After you install this plugin for the OpenShift Container Platform CLI, it can be invoked using the oc foo command. Additional resources Review the Sample plugin repository for an example of a plugin written in Go. Review the CLI runtime repository for a set of utilities to assist in writing plugins in Go. 2.5.2. Installing and using CLI plugins After you write a custom plugin for the OpenShift Container Platform CLI, you must install the plugin before use. Prerequisites You must have the oc CLI tool installed. You must have a CLI plugin file that begins with oc- or kubectl- . Procedure If necessary, update the plugin file to be executable. USD chmod +x <plugin_file> Place the file anywhere in your PATH , such as /usr/local/bin/ . USD sudo mv <plugin_file> /usr/local/bin/. Run oc plugin list to make sure that the plugin is listed. USD oc plugin list Example output The following compatible plugins are available: /usr/local/bin/<plugin_file> If your plugin is not listed here, verify that the file begins with oc- or kubectl- , is executable, and is on your PATH . Invoke the new command or option introduced by the plugin. For example, if you built and installed the kubectl-ns plugin from the Sample plugin repository , you can use the following command to view the current namespace. USD oc ns Note that the command to invoke the plugin depends on the plugin file name. For example, a plugin with the file name of oc-foo-bar is invoked by the oc foo bar command. 2.6. Managing CLI plugins with Krew You can use Krew to install and manage plugins for the OpenShift CLI ( oc ). Important Using Krew to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.6.1. 
Installing a CLI plugin with Krew You can install a plugin for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. Procedure To list all available plugins, run the following command: USD oc krew search To get information about a plugin, run the following command: USD oc krew info <plugin_name> To install a plugin, run the following command: USD oc krew install <plugin_name> To list all plugins that were installed by Krew, run the following command: USD oc krew list 2.6.2. Updating a CLI plugin with Krew You can update a plugin that was installed for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with Krew. Procedure To update a single plugin, run the following command: USD oc krew upgrade <plugin_name> To update all plugins that were installed by Krew, run the following command: USD oc krew upgrade 2.6.3. Uninstalling a CLI plugin with Krew You can uninstall a plugin that was installed for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with Krew. Procedure To uninstall a plugin, run the following command: USD oc krew uninstall <plugin_name> 2.6.4. Additional resources Krew Extending the OpenShift CLI with plugins 2.7. OpenShift CLI developer command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) developer commands. For administrator commands, see the OpenShift CLI administrator command reference . Run oc help to list all commands or run oc <command> --help to get additional details for a specific command. 2.7.1. OpenShift CLI (oc) developer commands 2.7.1.1. oc annotate Update the annotations on a resource Example usage # Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in "pod.json" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description- 2.7.1.2. oc api-resources Print the supported API resources on the server Example usage # Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io 2.7.1.3. 
oc api-versions Print the supported API versions on the server, in the form of "group/version" Example usage # Print the supported API versions oc api-versions 2.7.1.4. oc apply Apply a configuration to a resource by file name or stdin Example usage # Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' - i.e. expand wildcard characters in file names oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap 2.7.1.5. oc apply edit-last-applied Edit latest last-applied-configuration annotations of a resource/object Example usage # Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json 2.7.1.6. oc apply set-last-applied Set the last-applied-configuration annotation on a live object to match the contents of a file Example usage # Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true 2.7.1.7. oc apply view-last-applied View the latest last-applied-configuration annotations of a resource/object Example usage # View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json 2.7.1.8. oc attach Attach to a running container Example usage # Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx 2.7.1.9. 
oc auth can-i Check whether an action is allowed Example usage # Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if I can do everything in my current namespace ("*" means all) oc auth can-i '*' '*' # Check to see if I can get the job named "bar" in namespace "foo" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace "foo" oc auth can-i --list --namespace=foo 2.7.1.10. oc auth reconcile Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects Example usage # Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml 2.7.1.11. oc autoscale Autoscale a deployment config, deployment, replica set, stateful set, or replication controller Example usage # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80 2.7.1.12. oc cancel-build Cancel running, pending, or new builds Example usage # Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new 2.7.1.13. oc cluster-info Display cluster information Example usage # Print the address of the control plane and cluster services oc cluster-info 2.7.1.14. oc cluster-info dump Dump relevant information for debugging and diagnosis Example usage # Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state 2.7.1.15. oc completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Example usage # Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf " # Kubectl shell completion source 'USDHOME/.kube/completion.bash.inc' " >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > "USD{fpath[1]}/_oc" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\.kube\completion.ps1 Add-Content USDPROFILE "USDHOME\.kube\completion.ps1" ## Execute completion code in the profile Add-Content USDPROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE 2.7.1.16. oc config current-context Display the current-context Example usage # Display the current-context oc config current-context 2.7.1.17. oc config delete-cluster Delete the specified cluster from the kubeconfig Example usage # Delete the minikube cluster oc config delete-cluster minikube 2.7.1.18. oc config delete-context Delete the specified context from the kubeconfig Example usage # Delete the context for the minikube cluster oc config delete-context minikube 2.7.1.19. oc config delete-user Delete the specified user from the kubeconfig Example usage # Delete the minikube user oc config delete-user minikube 2.7.1.20. oc config get-clusters Display clusters defined in the kubeconfig Example usage # List the clusters that oc knows about oc config get-clusters 2.7.1.21. oc config get-contexts Describe one or many contexts Example usage # List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context 2.7.1.22. oc config get-users Display users defined in the kubeconfig Example usage # List the users that oc knows about oc config get-users 2.7.1.23. oc config rename-context Rename a context from the kubeconfig file Example usage # Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name 2.7.1.24. oc config set Set an individual value in a kubeconfig file Example usage # Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo "cert_data_here" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true 2.7.1.25. 
oc config set-cluster Set a cluster entry in kubeconfig Example usage # Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set proxy url for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4 2.7.1.26. oc config set-context Set a context entry in kubeconfig Example usage # Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin 2.7.1.27. oc config set-credentials Set a user entry in kubeconfig Example usage # Set only the "client-key" field on the "cluster-admin" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the "cluster-admin" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the "cluster-admin" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional args oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin args for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=var-to-remove- 2.7.1.28. oc config unset Unset an individual value in a kubeconfig file Example usage # Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace 2.7.1.29. oc config use-context Set the current-context in a kubeconfig file Example usage # Use the context for the minikube cluster oc config use-context minikube 2.7.1.30. oc config view Display merged kubeconfig settings or a specified kubeconfig file Example usage # Show merged kubeconfig settings oc config view # Show merged kubeconfig settings and raw certificate data and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' 2.7.1.31. oc cp Copy files and directories to and from containers Example usage # !!!Important Note!!! 
# Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar 2.7.1.32. oc create Create a resource from a file or from stdin Example usage # Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json 2.7.1.33. oc create build Create a new build Example usage # Create a new build oc create build myapp 2.7.1.34. oc create clusterresourcequota Create a cluster resource quota Example usage # Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10 2.7.1.35. oc create clusterrole Create a cluster role Example usage # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named "pod-reader" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named "foo" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named "foo" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name "foo" with NonResourceURL specified oc create clusterrole "foo" --verb=get --non-resource-url=/logs/* # Create a cluster role name "monitoring" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" 2.7.1.36. oc create clusterrolebinding Create a cluster role binding for a particular cluster role Example usage # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1 2.7.1.37. 
oc create configmap Create a config map from a local file, directory or literal value Example usage # Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.7.1.38. oc create cronjob Create a cron job with the specified name Example usage # Create a cron job oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date 2.7.1.39. oc create deployment Create a deployment with the specified name Example usage # Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 2.7.1.40. oc create deploymentconfig Create a deployment config with default options that uses a given image Example usage # Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx 2.7.1.41. oc create identity Manually create an identity (only needed if automatic creation is disabled) Example usage # Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones" oc create identity acme_ldap:adamjones 2.7.1.42. oc create imagestream Create a new empty image stream Example usage # Create a new image stream oc create imagestream mysql 2.7.1.43. oc create imagestreamtag Create a new image stream tag Example usage # Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0 2.7.1.44. 
oc create ingress Create an ingress with the specified name Example usage # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret "my-cert" oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert" # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress" oc create ingress catch-all --class=otheringress --rule="/path=svc:port" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \ --annotation ingress.annotation1=foo \ --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default \ --rule="foo.com/=svc:port" \ --rule="foo.com/admin/=svcadmin:portadmin" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default \ --rule="foo.com/path*=svc:8080" \ --rule="bar.com/admin*=svc2:http" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default \ --rule="foo.com/=svc:https,tls" \ --rule="foo.com/path/subpath*=othersvc:8080" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default \ --rule="foo.com/*=svc:8080,tls=secret1" # Create an ingress with a default backend oc create ingress ingdefault --class=default \ --default-backend=defaultsvc:http \ --rule="foo.com/*=svc:8080,tls=secret1" 2.7.1.45. oc create job Create a job with the specified name Example usage # Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named "a-cronjob" oc create job test-job --from=cronjob/a-cronjob 2.7.1.46. oc create namespace Create a namespace with the specified name Example usage # Create a new namespace named my-namespace oc create namespace my-namespace 2.7.1.47. oc create poddisruptionbudget Create a pod disruption budget with the specified name Example usage # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50% 2.7.1.48. oc create priorityclass Create a priority class with the specified name Example usage # Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description="high priority" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never" 2.7.1.49. 
oc create quota Create a quota with the specified name Example usage # Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort 2.7.1.50. oc create role Create a role with single rule Example usage # Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named "pod-reader" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named "foo" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named "foo" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status 2.7.1.51. oc create rolebinding Create a role binding for a particular role or cluster role Example usage # Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 2.7.1.52. oc create route edge Create a route that uses edge TLS termination Example usage # Create an edge route named "my-route" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets 2.7.1.53. oc create route passthrough Create a route that uses passthrough TLS termination Example usage # Create a passthrough route named "my-route" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com 2.7.1.54. oc create route reencrypt Create a route that uses reencrypt TLS termination Example usage # Create a route named "my-route" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend 2.7.1.55. oc create secret docker-registry Create a secret for use with a Docker registry Example usage # If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using: oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json 2.7.1.56. 
oc create secret generic Create a secret from a local file, directory, or literal value Example usage # Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.7.1.57. oc create secret tls Create a TLS secret Example usage # Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key 2.7.1.58. oc create service clusterip Create a ClusterIP service Example usage # Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip="None" 2.7.1.59. oc create service externalname Create an ExternalName service Example usage # Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com 2.7.1.60. oc create service loadbalancer Create a LoadBalancer service Example usage # Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080 2.7.1.61. oc create service nodeport Create a NodePort service Example usage # Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080 2.7.1.62. oc create serviceaccount Create a service account with the specified name Example usage # Create a new service account named my-service-account oc create serviceaccount my-service-account 2.7.1.63. oc create token Request a service account token Example usage # Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific uid oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc 2.7.1.64. oc create user Manually create a user (only needed if automatic creation is disabled) Example usage # Create a user with the username "ajones" and the display name "Adam Jones" oc create user ajones --full-name="Adam Jones" 2.7.1.65. oc create useridentitymapping Manually map an identity to a user Example usage # Map the identity "acme_ldap:adamjones" to the user "ajones" oc create useridentitymapping acme_ldap:adamjones ajones 2.7.1.66. 
oc debug Launch a new instance of a pod for debugging Example usage # Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns 2.7.1.67. oc delete Delete resources by file names, stdin, resources and names, or by resources and label selector Example usage # Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' - i.e. expand wildcard characters in file names oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names "baz" and "foo" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all 2.7.1.68. oc describe Show details of a specific resource or group of resources Example usage # Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in "pod.json" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe po -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend 2.7.1.69. oc diff Diff the live version against a would-be applied version Example usage # Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f - 2.7.1.70. oc edit Edit a resource on the server Example usage # Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR="nano" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the deployment/mydeployment's status subresource oc edit deployment mydeployment --subresource='status' 2.7.1.71. oc events List events Example usage # List recent events in the default namespace. oc events # List recent events in all namespaces. oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive. 
oc events --for pod/web-pod-13je7 --watch # List recent events in given format. Supported ones, apart from default, are json and yaml. oc events -oyaml # List recent only events in given event types oc events --types=Warning,Normal 2.7.1.72. oc exec Execute a command in a container Example usage # Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date 2.7.1.73. oc explain Get documentation for a resource Example usage # Get the documentation of the resource and its fields oc explain pods # Get the documentation of a specific field of a resource oc explain pods.spec.containers 2.7.1.74. oc expose Expose a replicated application as a service or route Example usage # Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx 2.7.1.75. oc extract Extract secrets or config maps to disk Example usage # Extract the secret "test" to the current directory oc extract secret/test # Extract the config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map "nginx" to STDOUT oc extract configmap/nginx --to=- # Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf 2.7.1.76. 
oc get Display one or many resources Example usage # List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the "v1" version of the "apps" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in "pod.yaml" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List status subresource for a single pod. oc get pod web-pod-13je7 --subresource status 2.7.1.77. oc idle Idle scalable resources Example usage # Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt 2.7.1.78. oc image append Add layers to images and push them to a registry Example usage # Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: Wildcard filter is not supported with append. Pass a single os/arch to append oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz 2.7.1.79. oc image extract Copy files from an image to the file system Example usage # Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract. 
Pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory ($(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty ($(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:] 2.7.1.80. oc image info Display information about an image Example usage # Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64 2.7.1.81.
oc image mirror Mirror images from one repository to another Example usage # Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \ docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=.* 2.7.1.82. oc import-image Import images from a container image registry Example usage # Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag which points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm 2.7.1.83. oc kustomize Build a kustomization target from a directory or URL. Example usage # Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6 2.7.1.84. 
oc label Update the labels on a resource Example usage # Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in "pod.json" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar- 2.7.1.85. oc login Log in to a server Example usage # Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass 2.7.1.86. oc logout End the current server session Example usage # Log out oc logout 2.7.1.87. oc logs Print the logs for a container in a pod Example usage # Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container 2.7.1.88. oc new-app Create a new application Example usage # List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. 
Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match "ruby" oc new-app --search ruby # Search for "ruby", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for "ruby" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml 2.7.1.89. oc new-build Create a new build configuration Example usage # Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . --image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D $'FROM centos:7\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp 2.7.1.90. oc new-project Request a new project Example usage # Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team." 2.7.1.91.
oc observe Observe changes to resources and react to them (experimental) Example usage # Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh 2.7.1.92. oc patch Update fields of a resource Example usage # Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p $'spec:\n unschedulable: true' # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch oc patch -f node.json -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' # Update a deployment's replicas through the scale subresource using a merge patch. oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' 2.7.1.93. oc plugin list List all visible plugin executables on a user's PATH Example usage # List all available plugins oc plugin list 2.7.1.94. oc policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1 2.7.1.95. oc policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml 2.7.1.96. oc policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml 2.7.1.97.
oc port-forward Forward one or more local ports to a pod Example usage # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000 2.7.1.98. oc process Process a template into list of resources Example usage # Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f - 2.7.1.99. oc project Switch to another project Example usage # Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project 2.7.1.100. oc projects Display existing projects Example usage # List all projects oc projects 2.7.1.101. oc proxy Run a proxy to the Kubernetes API server Example usage # To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api 2.7.1.102. oc registry info Print information about the integrated registry Example usage # Display information about the integrated registry oc registry info 2.7.1.103. oc registry login Log in to the integrated registry Example usage # Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS 2.7.1.104. 
oc replace Replace a resource by file name or stdin Example usage # Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json 2.7.1.105. oc rollback Revert part of an application back to a previous deployment Example usage # Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json 2.7.1.106. oc rollout cancel Cancel the in-progress deployment Example usage # Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx 2.7.1.107. oc rollout history View rollout history Example usage # View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3 2.7.1.108. oc rollout latest Start a new rollout for a deployment config with the latest state from its triggers Example usage # Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json 2.7.1.109. oc rollout pause Mark the provided resource as paused Example usage # Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx 2.7.1.110. oc rollout restart Restart a resource Example usage # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx 2.7.1.111. oc rollout resume Resume a paused resource Example usage # Resume an already paused deployment oc rollout resume dc/nginx 2.7.1.112. oc rollout retry Retry the latest failed rollout Example usage # Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend 2.7.1.113. oc rollout status Show the status of the rollout Example usage # Watch the status of the latest rollout oc rollout status dc/nginx 2.7.1.114. oc rollout undo Undo a rollout Example usage # Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3 2.7.1.115.
oc rsh Start a shell session in a container Example usage # Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled 2.7.1.116. oc rsync Copy files between a local file system and a pod Example usage # Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir 2.7.1.117. oc run Run a particular image on the cluster Example usage # Start an nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default" # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start an nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN> 2.7.1.118. oc scale Set a new size for a deployment, replica set, or replication controller Example usage # Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in "foo.yaml" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/foo rc/bar rc/baz # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web 2.7.1.119. oc secrets link Link secrets to a service account Example usage # Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount 2.7.1.120. oc secrets unlink Detach secrets from a service account Example usage # Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name ... 2.7.1.121.
oc set build-hook Update a build hook on a build config Example usage # Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh" 2.7.1.122. oc set build-secret Update a build secret on a build config Example usage # Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret 2.7.1.123. oc set data Update the data within a config map or secret Example usage # Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir 2.7.1.124. oc set deployment-hook Update a deployment hook on a deployment config Example usage # Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh 2.7.1.125. oc set env Update environment variables on a pod template Example usage # Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers="c1" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server env | grep RAILS_ | oc set env -e - dc/myapp 2.7.1.126. oc set image Update the image of a pod template Example usage # Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'.
oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in yaml format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml 2.7.1.127. oc set image-lookup Change how images are resolved when deploying applications Example usage # Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all 2.7.1.128. oc set probe Update a probe on a pod template Example usage # Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30 2.7.1.129. oc set resources Update resource requests/limits on objects with pod templates Example usage # Set a deployment's nginx container CPU limits to "200m" and memory to "512Mi" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml 2.7.1.130.
oc set route-backends Update the backends for a route Example usage # Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero 2.7.1.131. oc set selector Set the selector on a resource Example usage # Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f - 2.7.1.132. oc set serviceaccount Update the service account of a resource Example usage # Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml 2.7.1.133. oc set subject Update the user, group, or service account in a role binding or cluster role binding Example usage # Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml 2.7.1.134. oc set triggers Update the triggers on one or more objects Example usage # Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main 2.7.1.135. 
oc set volumes Update volumes on a pod template Example usage # List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (pvc) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount "v1" from container "c1" # (and by removing the volume "v1" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string> 2.7.1.136. oc start-build Start a new build Example usage # Start a build from the build config "hello-world" oc start-build hello-world # Start a build from a previous build "hello-world-1" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config "hello-world" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config "hello-world" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait 2.7.1.137. oc status Show an overview of the current project Example usage # See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest 2.7.1.138. oc tag Tag existing images into image streams Example usage # Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby' with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d 2.7.1.139.
oc version Print the client and server version information Example usage # Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context oc version --short # Print the OpenShift client version information for the current context oc version --client 2.7.1.140. oc wait Experimental: Wait for a specific condition on one or many resources Example usage # Wait for the pod "busybox1" to contain the status condition of type "Ready" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity): oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod "busybox1" to contain the status phase to be "Running". oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s 2.7.1.141. oc whoami Return information about the current session Example usage # Display the currently authenticated user oc whoami 2.7.2. Additional resources OpenShift CLI administrator command reference 2.8. OpenShift CLI administrator command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) administrator commands. You must have cluster-admin or equivalent permissions to use these commands. For developer commands, see the OpenShift CLI developer command reference . Run oc adm -h to list all administrator commands or run oc <command> --help to get additional details for a specific command. 2.8.1. OpenShift CLI (oc) administrator commands 2.8.1.1. oc adm build-chain Output the inputs and dependencies of your builds Example usage # Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all 2.8.1.2. oc adm catalog mirror Mirror an operator-registry catalog Example usage # Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageContentSourcePolicy.yaml # Edit the mirroring mappings and mirror with "oc image mirror" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageContentSourcePolicies generated by oc adm catalog mirror oc delete imagecontentsourcepolicy -l operators.openshift.org/catalog=true 2.8.1.3. 
oc adm certificate approve Approve a certificate signing request Example usage # Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp 2.8.1.4. oc adm certificate deny Deny a certificate signing request Example usage # Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp 2.8.1.5. oc adm cordon Mark node as unschedulable Example usage # Mark node "foo" as unschedulable oc adm cordon foo 2.8.1.6. oc adm create-bootstrap-project-template Create a bootstrap project template Example usage # Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml 2.8.1.7. oc adm create-error-template Create an error page template Example usage # Output a template for the error page to stdout oc adm create-error-template 2.8.1.8. oc adm create-login-template Create a login template Example usage # Output a template for the login page to stdout oc adm create-login-template 2.8.1.9. oc adm create-provider-selection-template Create a provider selection template Example usage # Output a template for the provider selection page to stdout oc adm create-provider-selection-template 2.8.1.10. oc adm drain Drain node in preparation for maintenance Example usage # Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900 2.8.1.11. oc adm groups add-users Add users to a group Example usage # Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2 2.8.1.12. oc adm groups new Create a new group Example usage # Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name 2.8.1.13. oc adm groups prune Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm groups prune --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm groups prune --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.8.1.14. oc adm groups remove-users Remove users from a group Example usage # Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2 2.8.1.15. 
oc adm groups sync Sync OpenShift groups with records from an external provider Example usage # Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in a whitelist file with an LDAP server oc adm groups sync --whitelist=/path/to/whitelist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm 2.8.1.16. oc adm inspect Collect debugging data for a given resource Example usage # Collect debugging data for the "openshift-apiserver" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions 2.8.1.17. oc adm migrate icsp Update imagecontentsourcepolicy file(s) to imagedigestmirrorset file(s). Example usage # update the imagecontentsourcepolicy.yaml to new imagedigestmirrorset file under directory mydir oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir 2.8.1.18. oc adm migrate template-instances Update template instances to point to the latest group-version-kinds Example usage # Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm 2.8.1.19. oc adm must-gather Launch a new instance of a pod for gathering debug information Example usage # Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod-dir oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh 2.8.1.20. oc adm new-project Create a new project Example usage # Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east' 2.8.1.21. oc adm node-logs Display and filter node logs Example usage # Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/logs oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron 2.8.1.22. 
oc adm pod-network isolate-projects Isolate project network Example usage # Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret' 2.8.1.23. oc adm pod-network join-projects Join project network Example usage # Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret' 2.8.1.24. oc adm pod-network make-projects-global Make project network global Example usage # Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share' 2.8.1.25. oc adm policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1 2.8.1.26. oc adm policy add-scc-to-group Add a security context constraint to groups Example usage # Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2 2.8.1.27. oc adm policy add-scc-to-user Add a security context constraint to users or a service account Example usage # Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1 2.8.1.28. oc adm policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml 2.8.1.29. oc adm policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml 2.8.1.30. 
oc adm prune builds Remove old completed and failed builds Example usage # Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm 2.8.1.31. oc adm prune deployments Remove old completed and failed deployment configs Example usage # Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm 2.8.1.32. oc adm prune groups Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.8.1.33. oc adm prune images Remove unreferenced images Example usage # See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure http protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm 2.8.1.34. oc adm release extract Extract the contents of an update payload to disk Example usage # Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported. Pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.8.1.35. 
oc adm release info Display information about a release Example usage # Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported. Pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.8.1.36. oc adm release mirror Mirror a release to a different image registry location Example usage # Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release \ --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 \ --to=registry.example.com/your/repository --apply-release-image-signature 2.8.1.37. oc adm release new Create a new OpenShift release Example usage # Create a release from the latest origin images and push to a DockerHub repo oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 \ --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 \ cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 2.8.1.38. oc adm taint Update the taints on one or more nodes Example usage # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule 2.8.1.39. oc adm top images Show usage statistics for images Example usage # Show usage statistics for images oc adm top images 2.8.1.40.
2.8.1.40. oc adm top imagestreams
Show usage statistics for image streams
Example usage

# Show usage statistics for image streams
oc adm top imagestreams

2.8.1.41. oc adm top node
Display resource (CPU/memory) usage of nodes
Example usage

# Show metrics for all nodes
oc adm top node

# Show metrics for a given node
oc adm top node NODE_NAME

2.8.1.42. oc adm top pod
Display resource (CPU/memory) usage of pods
Example usage

# Show metrics for all pods in the default namespace
oc adm top pod

# Show metrics for all pods in the given namespace
oc adm top pod --namespace=NAMESPACE

# Show metrics for a given pod and its containers
oc adm top pod POD_NAME --containers

# Show metrics for the pods defined by label name=myLabel
oc adm top pod -l name=myLabel

2.8.1.43. oc adm uncordon
Mark node as schedulable
Example usage

# Mark node "foo" as schedulable
oc adm uncordon foo

2.8.1.44. oc adm upgrade
Upgrade a cluster or adjust the upgrade channel
Example usage

# Review the available cluster updates
oc adm upgrade

# Update to the latest version
oc adm upgrade --to-latest=true

2.8.1.45. oc adm verify-image-signature
Verify the image identity contained in the image signature
Example usage

# Verify the image signature and identity using the local GPG keychain
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
  --expected-identity=registry.local:5000/foo/bar:v1

# Verify the image signature and identity using the local GPG keychain and save the status
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
  --expected-identity=registry.local:5000/foo/bar:v1 --save

# Verify the image signature and identity via exposed registry route
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
  --expected-identity=registry.local:5000/foo/bar:v1 \
  --registry-url=docker-registry.foo.com

# Remove all signature verifications from the image
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all

2.8.2. Additional resources
OpenShift CLI developer command reference
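To round out the node-management and update commands documented above (oc adm top node, oc adm uncordon, and oc adm upgrade), the following is a hedged sketch of one possible pre-update check: confirm resource headroom, make sure no node is still unschedulable from earlier maintenance, then review and apply the update. It is not a required procedure, and 'foo' stands in for a real node name.

# Check current node resource usage before updating
oc adm top node

# Return a node that was cordoned for maintenance to the schedulable pool
oc adm uncordon foo

# Review the available cluster updates, then move to the latest version
oc adm upgrade
oc adm upgrade --to-latest=true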
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhocp-4.13-for-rhel-8-x86_64-rpms\"",
"yum install openshift-clients",
"oc <command>",
"brew install openshift-cli",
"oc login -u user1",
"Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.",
"oc login <cluster_url> --web 1",
"Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session.",
"Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname>",
"oc new-project my-project",
"Now using project \"my-project\" on server \"https://openshift.example.com:6443\".",
"oc new-app https://github.com/sclorg/cakephp-ex",
"--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>",
"oc logs cakephp-ex-1-deploy",
"--> Scaling cakephp-ex-1 to 1 --> Success",
"oc project",
"Using project \"my-project\" on server \"https://openshift.example.com:6443\".",
"oc status",
"In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.",
"oc api-resources",
"NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap",
"oc help",
"OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application",
"oc create --help",
"Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]",
"oc explain pods",
"KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources",
"oc logout",
"Logged \"user1\" out on \"https://openshift.example.com\"",
"oc completion bash > oc_bash_completion",
"sudo cp oc_bash_completion /etc/bash_completion.d/",
"cat >>~/.zshrc<<EOF if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF",
"apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k",
"oc status",
"status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example.",
"oc project",
"Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".",
"oc project alice-project",
"Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".",
"oc login -u system:admin -n default",
"oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]",
"oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]",
"oc config use-context <context_nickname>",
"oc config set <property_name> <property_value>",
"oc config unset <property_name>",
"oc config view",
"oc config view --config=<specific_filename>",
"oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0",
"oc config view",
"apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0",
"oc config set-context `oc config current-context` --namespace=<project_name>",
"oc whoami -c",
"#!/bin/bash optional argument handling if [[ \"USD1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"USD1\" == \"config\" ]] then echo USDKUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"",
"chmod +x <plugin_file>",
"sudo mv <plugin_file> /usr/local/bin/.",
"oc plugin list",
"The following compatible plugins are available: /usr/local/bin/<plugin_file>",
"oc ns",
"oc krew search",
"oc krew info <plugin_name>",
"oc krew install <plugin_name>",
"oc krew list",
"oc krew upgrade <plugin_name>",
"oc krew upgrade",
"oc krew uninstall <plugin_name>",
"Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-",
"Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io",
"Print the supported API versions oc api-versions",
"Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' - i.e. expand wildcard characters in file names oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap",
"Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json",
"Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true",
"View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json",
"Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx",
"Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo",
"Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml",
"Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80",
"Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new",
"Print the address of the control plane and cluster services oc cluster-info",
"Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state",
"Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # Kubectl shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\\.kube\\completion.ps1 Add-Content USDPROFILE \"USDHOME\\.kube\\completion.ps1\" ## Execute completion code in the profile Add-Content USDPROFILE \"if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }\" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE",
"Display the current-context oc config current-context",
"Delete the minikube cluster oc config delete-cluster minikube",
"Delete the context for the minikube cluster oc config delete-context minikube",
"Delete the minikube user oc config delete-user minikube",
"List the clusters that oc knows about oc config get-clusters",
"List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context",
"List the users that oc knows about oc config get-users",
"Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name",
"Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true",
"Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set proxy url for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4",
"Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin",
"Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin args for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-",
"Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace",
"Use the context for the minikube cluster oc config use-context minikube",
"Show merged kubeconfig settings oc config view # Show merged kubeconfig settings and raw certificate data and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'",
"!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar",
"Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json",
"Create a new build oc create build myapp",
"Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10",
"Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"",
"Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1",
"Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env",
"Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date",
"Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701",
"Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx",
"Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones",
"Create a new image stream oc create imagestream mysql",
"Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0",
"Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"",
"Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob",
"Create a new namespace named my-namespace oc create namespace my-namespace",
"Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%",
"Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"",
"Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort",
"Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status",
"Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1",
"Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets",
"Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com",
"Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend",
"If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using: oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json",
"Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env",
"Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key",
"Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"",
"Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com",
"Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080",
"Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080",
"Create a new service account named my-service-account oc create serviceaccount my-service-account",
"Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific uid oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc",
"Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"",
"Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create useridentitymapping acme_ldap:adamjones ajones",
"Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns",
"Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' - i.e. expand wildcard characters in file names oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all",
"Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe po -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend",
"Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -",
"Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the deployment/mydeployment's status subresource oc edit deployment mydeployment --subresource='status'",
"List recent events in the default namespace. oc events # List recent events in all namespaces. oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive. oc events --for pod/web-pod-13je7 --watch # List recent events in given format. Supported ones, apart from default, are json and yaml. oc events -oyaml # List recent only events in given event types oc events --types=Warning,Normal",
"Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date",
"Get the documentation of the resource and its fields oc explain pods # Get the documentation of a specific field of a resource oc explain pods.spec.containers",
"Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx",
"Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf",
"List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List status subresource for a single pod. oc get pod web-pod-13je7 --subresource status",
"Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt",
"Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: Wildcard filter is not supported with append. Pass a single os/arch to append oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz",
"Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract. Pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]",
"Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64",
"Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.*",
"Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag which points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm",
"Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6",
"Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-",
"Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass",
"Log out oc logout",
"Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container",
"List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml",
"Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . --image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp",
"Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"",
"Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe namespaces -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh",
"Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' # Update a deployment's replicas through the scale subresource using a merge patch. oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'",
"List all available plugins oc plugin list",
"Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1",
"Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml",
"Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml",
"Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000",
"Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -",
"Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project",
"List all projects oc projects",
"To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api",
"Display information about the integrated registry oc registry info",
"Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS",
"Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*USD/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json",
"Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json",
"Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx",
"View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3",
"Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json",
"Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx",
"Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx",
"Resume an already paused deployment oc rollout resume dc/nginx",
"Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend",
"Watch the status of the latest rollout oc rollout status dc/nginx",
"Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3",
"Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/sheduled",
"Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir",
"Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>",
"Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/foo rc/bar rc/baz # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web",
"Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount",
"Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name",
"Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"",
"Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret",
"Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir",
"Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh",
"Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp",
"Set a deployment configs's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment configs's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in yaml format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml",
"Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all",
"Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30",
"Set a deployments nginx container CPU limits to \"200m and memory to 512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml",
"Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero",
"Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -",
"Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml",
"Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml",
"Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main",
"List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (pvc) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string>",
"Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait",
"See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest",
"Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d",
"Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context oc version --short # Print the OpenShift client version information for the current context oc version --client",
"Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity): oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\". oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s",
"Display the currently authenticated user oc whoami",
"Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all",
"Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageContentSourcePolicy.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageContentSourcePolicies generated by oc adm catalog mirror oc delete imagecontentsourcepolicy -l operators.openshift.org/catalog=true",
"Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp",
"Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp",
"Mark node \"foo\" as unschedulable oc adm cordon foo",
"Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml",
"Output a template for the error page to stdout oc adm create-error-template",
"Output a template for the login page to stdout oc adm create-login-template",
"Output a template for the provider selection page to stdout oc adm create-provider-selection-template",
"Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900",
"Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2",
"Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name",
"Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm groups prune --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm groups prune --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2",
"Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in a whitelist file with an LDAP server oc adm groups sync --whitelist=/path/to/whitelist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm",
"Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions",
"update the imagecontentsourcepolicy.yaml to new imagedigestmirrorset file under directory mydir oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir",
"Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm",
"Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod-dir oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh",
"Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'",
"Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/logs oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron",
"Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'",
"Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'",
"Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'",
"Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1",
"Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2",
"Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1",
"Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml",
"Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml",
"Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm",
"Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm",
"Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the blacklist file oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist file oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a whitelist oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure http protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm",
"Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported. Pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x",
"Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported. Pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x",
"Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature",
"Create a release from the latest origin images and push to a DockerHub repo oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11",
"Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label mylabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule",
"Show usage statistics for images oc adm top images",
"Show usage statistics for image streams oc adm top imagestreams",
"Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME",
"Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel",
"Mark node \"foo\" as schedulable oc adm uncordon foo",
"Review the available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true",
"Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cli_tools/openshift-cli-oc |
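A hypothetical end-to-end sketch that chains a few of the commands listed above; the project name, Git repository, and resulting build config name ruby-hello-world are illustrative assumptions, not values required by the reference:
# Illustrative workflow only
oc new-project web-team-dev
oc new-build https://github.com/openshift/ruby-hello-world#beta2
oc start-build ruby-hello-world --follow
oc status --suggest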
Chapter 26. Desktop | Chapter 26. Desktop Poppler no longer renders certain characters incorrectly Previously, the Poppler library did not map certain characters to the correct character codes. As a consequence, Poppler showed the fi string instead of showing the correct glyph, or nothing, if the font did not contain the necessary glyphs. With this update, the characters previously replaced with the fi string are shown correctly. (BZ#1298616) Poppler no longer tries to access memory behind the array Memory corruption due to exceeding the length of the array caused the Poppler library to terminate unexpectedly. A fix has been applied so that Poppler no longer tries to access memory behind the array, and Poppler no longer crashes in the described situation. (BZ#1299506) pdftocairo no longer crashes when processing a PDF without group color space Previously, the Poppler library tried to access a non-existing object when processing a PDF without group color space. As a consequence, the Poppler library terminated unexpectedly with a segmentation fault. A patch has been applied to verify whether group color space exists. As a result, Poppler no longer crashes, and the pdftocairo utility works as expected in the described situation. (BZ#1299479) Poppler no longer terminates unexpectedly during text extraction Previously, writing after the end of the lines array could cause a memory corruption. As a consequence, the Poppler library could terminate unexpectedly. A patch has been applied and the array is now always relocated when an item is added. As a result, Poppler no longer crashes in the described situation. (BZ#1299481) Poppler no longer terminates unexpectedly due to a missing GfxSeparationColorSpace class Previously, the Poppler library tried to copy a non-existing GfxSeparationColorSpace class and as a consequence terminated unexpectedly. With this update, Poppler now checks for the existence of the GfxSeparationColorSpace class, and as a result no longer crashes in the described situation. (BZ#1299490) pdfinfo no longer terminates unexpectedly due to asserting broken encryption information Previously, Poppler tried to obtain broken encryption owner information. As a consequence, the pdfinfo utility could terminate unexpectedly. A fix has been applied, and Poppler no longer asserts broken encryption information. As a result, pdfinfo no longer crashes in the described situation. (BZ#1299500) Evince no longer crashes when viewing a PDF Previously, screen annotation and form fields passed a NULL pointer to _poppler_action_new, and Poppler created a false PopplerAction when viewing certain PDFs in the Evince application. As a consequence, Evince terminated unexpectedly with a segmentation fault. A patch has been applied to modify _poppler_annot_screen_new and poppler_form_field_get_action to pass PopplerDocument instead of NULL. As a result, Evince no longer crashes in the described situation. (BZ#1299503) Virtual machines started by GNOME Boxes are no longer accessible to every user Previously, virtual machines started by GNOME Boxes were listening on a local TCP socket. As a consequence, any user could connect to any virtual machine started by another user. A patch has been applied and GNOME Boxes no longer opens such sockets by default. As a result, the virtual machines are now accessible through SPICE only to the user who owns the virtual machine. (BZ#1043950) GNOME Boxes rebased to version 3.14.3.1 The GNOME Boxes application has been updated to version 3.14.3.1.
Most notably, a patch to one bug has been applied as a part of this rebase: Previously, the virtual network computing (VNC) authentication parameters in the GNOME Boxes application were not handled correctly. As a consequence, connections to VNC servers with authentication failed. This bug has been fixed and the connection to VNC servers with authentication now works as expected. (BZ#1015199) FreeRDP now recognizes wildcard certificates Previously, wildcard certificate support was not implemented in FreeRDP. As a consequence, wildcard certificates were not recognized by FreeRDP, and the following warning was displayed when connecting: Missing functionality has been backported from upstream and the code for comparing host names was improved. As a result, the mentioned prompt is no longer shown if a valid wildcard certificate is used. (BZ#1275241) Important security updates now installed automatically Previously, it was not possible to have security updates installed automatically. Even though GNOME notified the users about the available updates, they could choose to ignore the notification and not install the update. As a consequence, important updates could be left uninstalled. A gnome-shell extension is now available to enforce the installation of important updates. As a result, when new updates are available, a dialog window notifies the user that updates will be applied and that they need to save their work. After a configurable amount of time, the system reboots to install the pending updates. (BZ#1302864) Accounts' shells in accountsservice now always verified The accountsservice package heuristics for determining disabled accounts changed between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. As a consequence, users with a UID outside of the range 500 - 1000 would appear in the user list even if their shell was invalid. A patch has been applied to always verify the account's shell before the account is treated as a listable user account. As a result, users with /sbin/nologin as a shell are now filtered out. (BZ#1341276) New way to handle desktop in Nautilus 3 Previously, icons in Nautilus 3 on the desktop were managed by taking the biggest monitor and trying to adapt the desktop window to the minimum common shape that would fit a rectangle. As a consequence, the icons could not be placed in random areas in some of the monitors, which could cause confusion for the user. This behavior has been changed to restrict the desktop window shape to the primary monitor. Even though this change does not allow using all available monitors as part of the desktop, it fixes the described bug. (BZ#1207646) GLX support in Xvnc sessions The GLX support code in Xvnc requires the use of the libGL library. If a third-party driver was installed and replaced libGL, Xvnc sessions launched with no GLX support. Consequently, 3D applications did not work under Xvnc. With this update, Xvnc has been rebuilt to require libGL, which is assumed to be installed in /usr/lib64/. Now, third-party drivers installed in a sub-directory no longer conflict with Xvnc, which now initializes GLX successfully. As a result, GLX functionality is available again in Xvnc sessions. Note that client applications connecting to Xvnc need to use the same libGL version as the Xvnc server, which may require the use of the LD_LIBRARY_PATH environment variable. (BZ#1326867)
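As an illustrative sketch only (the client program and library path are assumptions, not part of the release note), a GLX client running inside an Xvnc session can be pointed at the system libGL like this:
# Prefer the system libGL in /usr/lib64 over a third-party copy when talking to Xvnc
LD_LIBRARY_PATH=/usr/lib64 glxinfo | grep "OpenGL version"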
Flat document collections When using the gnome-documents application, it was possible to include one collection into another and then vice versa at the same time. Consequently, the application terminated unexpectedly. This update ensures that the collections are flat and do not allow circular chains of collections, thus fixing this bug. (BZ#958690) control-center no longer crashes when querying with special characters Previously, text entered by users when searching for a new printer required a specific character set. Consequently, the control-center utility could terminate unexpectedly when searching for a printer name that contained a special character. With this update, the text is encoded into a valid ASCII format. As a result, control-center no longer crashes and correctly queries for printers. (BZ#1298952) gnome-control-center no longer crashes because of a zero-length string Previously, the gnome-control-center utility worked with an empty string and an invalid pointer. As a consequence, it terminated unexpectedly. The gnome-control-center utility now checks whether the given application's identifier is at least 1 character long and initializes the new_app_ids pointer. As a result, the stated problem no longer occurs. (BZ#1298951) The Release Notes package is now installed correctly Previously, due to the naming of the Red Hat Enterprise Linux Release Notes packages, the packages were not installed on systems configured with a language other than English. This update provides additional parsing rules in the yum-languagepacks package. As a result, the Release Notes package is now installed correctly. (BZ#1263241) The LibreOffice language pack is now installed correctly for pt_BR, zh_CN, and zh_TW localizations Previously, translated libreoffice-langpack packages were not automatically installed on systems using language packs for the pt_BR, zh_CN, and zh_TW localizations. Parsing rules have been added to the yum language plug-in to address the problem. As a result, the correct LibreOffice language pack is installed. (BZ#1251388) | [
"WARNING: CERTIFICATE NAME MISMATCH!"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/bug_fixes_desktop |
Part VII. Designing a decision service using guided rule templates | Part VII. Designing a decision service using guided rule templates As a business analyst or business rules developer, you can define business rule templates using the guided rule templates designer in Business Central. These guided rule templates provide a reusable rule structure for multiple rules that are compiled into Drools Rule Language (DRL) and form the core of the decision service for your project. Note You can also design your decision service using Decision Model and Notation (DMN) models instead of rule-based or table-based assets. For information about DMN support in Red Hat Process Automation Manager 7.13, see the following resources: Getting started with decision services (step-by-step tutorial with a DMN decision service example) Designing a decision service using DMN models (overview of DMN support and capabilities in Red Hat Process Automation Manager) Prerequisites The space and project for the guided rule templates have been created in Business Central. Each asset is associated with a project assigned to a space. For details, see Getting started with decision services . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/assembly-guided-rule-templates |
8.2 Release Notes | 8.2 Release Notes Red Hat Enterprise Linux 8.2 Release Notes for Red Hat Enterprise Linux 8.2 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.2_release_notes/index |
Chapter 4. Migrating Camel Quarkus projects | Chapter 4. Migrating Camel Quarkus projects 4.1. Updating projects to the latest Quarkus version We recommend that you use Maven to update and upgrade your projects to the latest Quarkus version. Important For projects that use Hibernate ORM or Hibernate Reactive, review the Hibernate ORM 5 to 6 migration quick reference. The following update command covers only a subset of this guide. 4.1.1. Prerequisites Roughly 30 minutes JDK installed with JAVA_HOME configured appropriately Apache Maven 3.8.6 Optionally, the Quarkus CLI if you want to use it A project based on Camel Quarkus version 2.13 or later. 4.1.2. Updating with Maven Configure your extension registry client as described in the Configuring Quarkus extension registry client section of the Quarkus Getting Started guide. Update with Maven: Go to the project directory and update the project to the latest stream: Ensure that the Quarkus Maven plugin version aligns with the latest supported Red Hat build of Quarkus version. Run the update with the following command: mvn io.quarkus.platform:quarkus-maven-plugin:3.8.6:update -N For multi-module projects, always first try the following command: mvn io.quarkus.platform:quarkus-maven-plugin:3.8.6:update If this command fails, you can instead try this longer command: find . -type f -name "pom.xml" -execdir sh -c 'mvn io.quarkus.platform:quarkus-maven-plugin:3.8.6:update -N' \; Note Due to an issue with OpenRewrite, warnings are present in the migration log. Optional By default, this command updates to the latest current version. To update to a specific stream instead of the latest current version, add the stream option to this command followed by the version; for example: -Dstream=3.2 Note Updates of multi-module projects may show a lot of errors, because the update tool fails to update modules with <packaging>pom</packaging>. If such modules are present (typically containing versions), update them manually. Analyze the update command output for potential instructions and perform the suggested tasks if needed. Use a diff tool to inspect all changes. Review the migration guide for items that were not updated by the update command. If your project has such items, implement the additional steps advised in these topics. Ensure the project builds without errors, all tests pass, and the application functions as required before deploying to production. Before deploying your updated Quarkus application to production, ensure the following: The project builds without errors. All tests pass. The application functions as required. | [
"mvn io.quarkus.platform:quarkus-maven-plugin:3.8.6:update -N",
"mvn io.quarkus.platform:quarkus-maven-plugin:3.8.6:update",
"find . -type f -name \"pom.xml\" -execdir sh -c 'mvn io.quarkus.platform:quarkus-maven-plugin:3.8.6:update -N' \\;"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/migrating_fuse_7_applications_to_red_hat_build_of_apache_camel_for_quarkus/migrating_camel_quarkus_projects |
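To make the optional stream selection described above concrete, the same single-module update pinned to a specific stream looks like the following sketch; the 3.2 value is only an example:
# Sketch: update to a specific stream instead of the latest current version
mvn io.quarkus.platform:quarkus-maven-plugin:3.8.6:update -N -Dstream=3.2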
Chapter 3. OpenID Connect authorization code flow mechanism for protecting web applications | Chapter 3. OpenID Connect authorization code flow mechanism for protecting web applications To protect your web applications, you can use the industry-standard OpenID Connect (OIDC) Authorization Code Flow mechanism provided by the Quarkus OIDC extension. 3.1. Overview of the OIDC authorization code flow mechanism The Quarkus OpenID Connect (OIDC) extension can protect application HTTP endpoints by using the OIDC Authorization Code Flow mechanism supported by OIDC-compliant authorization servers, such as Keycloak . The Authorization Code Flow mechanism authenticates users of your web application by redirecting them to an OIDC provider, such as Keycloak, to log in. After authentication, the OIDC provider redirects the user back to the application with an authorization code that confirms that authentication was successful. Then, the application exchanges this code with the OIDC provider for an ID token (which represents the authenticated user), an access token, and a refresh token to authorize the user's access to the application. The following diagram outlines the Authorization Code Flow mechanism in Quarkus. Figure 3.1. Authorization code flow mechanism in Quarkus The Quarkus user requests access to a Quarkus web-app application. The Quarkus web-app redirects the user to the authorization endpoint, that is, the OIDC provider for authentication. The OIDC provider redirects the user to a login and authentication prompt. At the prompt, the user enters their user credentials. The OIDC provider authenticates the user credentials entered and, if successful, issues an authorization code and redirects the user back to the Quarkus web-app with the code included as a query parameter. The Quarkus web-app exchanges this authorization code with the OIDC provider for ID, access, and refresh tokens. The authorization code flow is completed and the Quarkus web-app uses the tokens issued to access information about the user and grants the relevant role-based authorization to that user. The following tokens are issued: ID token: The Quarkus web-app application uses the user information in the ID token to enable the authenticated user to log in securely and to provide role-based access to the web application. Access token: The Quarkus web-app might use the access token to access the UserInfo API to get additional information about the authenticated user or to propagate it to another endpoint. Refresh token: (Optional) If the ID and access tokens expire, the Quarkus web-app can use the refresh token to get new ID and access tokens. See also the OIDC configuration properties reference guide. To learn about how you can protect web applications by using the OIDC Authorization Code Flow mechanism, see Protect a web application by using OIDC authorization code flow . If you want to protect service applications by using OIDC Bearer token authentication, see OIDC Bearer token authentication . For information about how to support multiple tenants, see Using OpenID Connect Multi-Tenancy . 3.2. Using the authorization code flow mechanism 3.2.1. Configuring access to the OIDC provider endpoint The OIDC web-app application requires URLs of the OIDC provider's authorization, token, JsonWebKey (JWK) set, and possibly the UserInfo , introspection and end-session (RP-initiated logout) endpoints. By convention, they are discovered by adding a /.well-known/openid-configuration path to the configured quarkus.oidc.auth-server-url . 
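As a minimal sketch, assuming a local Keycloak realm and placeholder client credentials rather than values prescribed by this guide, a code-flow application typically needs only the provider URL, the client id, a secret, and the web-app application type:
# Minimal authorization code flow configuration; all values are illustrative placeholders
quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus
quarkus.oidc.client-id=quarkus-app
quarkus.oidc.credentials.secret=mysecret
quarkus.oidc.application-type=web-app
With only these properties set, the remaining provider endpoints are resolved from the discovery document mentioned above.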
Alternatively, if the discovery endpoint is not available, or you prefer to reduce the discovery endpoint round-trip, you can disable endpoint discovery and configure relative path values. For example: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.discovery-enabled=false # Authorization endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/auth quarkus.oidc.authorization-path=/protocol/openid-connect/auth # Token endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token quarkus.oidc.token-path=/protocol/openid-connect/token # JWK set endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/certs quarkus.oidc.jwks-path=/protocol/openid-connect/certs # UserInfo endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/userinfo quarkus.oidc.user-info-path=/protocol/openid-connect/userinfo # Token Introspection endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token/introspect quarkus.oidc.introspection-path=/protocol/openid-connect/token/introspect # End-session endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/logout quarkus.oidc.end-session-path=/protocol/openid-connect/logout Some OIDC providers support metadata discovery but do not return all the endpoint URL values required for the authorization code flow to complete or to support application functions, for example, user logout. To work around this limitation, you can configure the missing endpoint URL values locally, as outlined in the following example: # Metadata is auto-discovered but it does not return an end-session endpoint URL quarkus.oidc.auth-server-url=http://localhost:8180/oidcprovider/account # Configure the end-session URL locally. # It can be an absolute or relative (to 'quarkus.oidc.auth-server-url') address quarkus.oidc.end-session-path=logout You can use this same configuration to override a discovered endpoint URL if that URL does not work for the local Quarkus endpoint and a more specific value is required. For example, a provider that supports both global and application-specific end-session endpoints returns a global end-session URL such as http://localhost:8180/oidcprovider/account/global-logout . This URL will log the user out of all the applications into which the user is currently logged in. However, if the requirement is for the current application to log the user out of a specific application only, you can override the global end-session URL, by setting the quarkus.oidc.end-session-path=logout parameter. 3.2.2. OIDC provider client authentication OIDC providers typically require applications to be identified and authenticated when they interact with the OIDC endpoints. Quarkus OIDC, specifically the quarkus.oidc.runtime.OidcProviderClient class, authenticates to the OIDC provider when the authorization code must be exchanged for the ID, access, and refresh tokens, or when the ID and access tokens must be refreshed or introspected. Typically, client id and client secrets are defined for a given application when it enlists to the OIDC provider. All OIDC client authentication options are supported. 
For example: Example of client_secret_basic : quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.secret=mysecret Or: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret The following example shows the secret retrieved from a credentials provider : quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app # This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc.credentials.client-secret.provider.key=mysecret-key # This is the keyring provided to the CredentialsProvider when looking up the secret, set only if required by the CredentialsProvider implementation quarkus.oidc.credentials.client-secret.provider.keyring-name=oidc # Set it only if more than one CredentialsProvider can be registered quarkus.oidc.credentials.client-secret.provider.name=oidc-credentials-provider Example of client_secret_post quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret quarkus.oidc.credentials.client-secret.method=post Example of client_secret_jwt , where the signature algorithm is HS256: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow Example of client_secret_jwt , where the secret is retrieved from a credentials provider : quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app # This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc.credentials.jwt.secret-provider.key=mysecret-key # This is the keyring provided to the CredentialsProvider when looking up the secret, set only if required by the CredentialsProvider implementation quarkus.oidc.credentials.client-secret.provider.keyring-name=oidc # Set it only if more than one CredentialsProvider can be registered quarkus.oidc.credentials.jwt.secret-provider.name=oidc-credentials-provider Example of private_key_jwt with the PEM key inlined in application.properties, and where the signature algorithm is RS256 : quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key=Base64-encoded private key representation Example of private_key_jwt with the PEM key file, and where the signature algorithm is RS256: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-file=privateKey.pem Example of private_key_jwt with the keystore file, where the signature algorithm is RS256: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-store-file=keystore.jks quarkus.oidc.credentials.jwt.key-store-password=mypassword quarkus.oidc.credentials.jwt.key-password=mykeypassword # Private key alias inside the keystore quarkus.oidc.credentials.jwt.key-id=mykeyAlias Using client_secret_jwt or private_key_jwt authentication methods ensures that a client secret does not get sent to the OIDC provider, therefore avoiding the risk of a secret being intercepted by a 
'man-in-the-middle' attack. 3.2.2.1. Additional JWT authentication options If the client_secret_jwt , private_key_jwt , or Apple post_jwt authentication method is used, then you can customize the JWT signature algorithm, key identifier, audience, subject, and issuer. For example: # private_key_jwt client authentication quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-file=privateKey.pem # This is a token key identifier 'kid' header - set it if your OIDC provider requires it: # Note if the key is represented in a JSON Web Key (JWK) format with a `kid` property, then # using 'quarkus.oidc.credentials.jwt.token-key-id' is not necessary. quarkus.oidc.credentials.jwt.token-key-id=mykey # Use RS512 signature algorithm instead of the default RS256 quarkus.oidc.credentials.jwt.signature-algorithm=RS512 # The token endpoint URL is the default audience value, use the base address URL instead: quarkus.oidc.credentials.jwt.audience=${quarkus.oidc.auth-server-url} # custom subject instead of the client id: quarkus.oidc.credentials.jwt.subject=custom-subject # custom issuer instead of the client id: quarkus.oidc.credentials.jwt.issuer=custom-issuer 3.2.2.2. Apple POST JWT The Apple OIDC provider uses a client_secret_post method whereby a secret is a JWT produced with a private_key_jwt authentication method, but with the Apple account-specific issuer and subject claims. In Quarkus Security, quarkus-oidc supports a non-standard client_secret_post_jwt authentication method, which you can configure as follows: # Apple provider configuration sets a 'client_secret_post_jwt' authentication method quarkus.oidc.provider=apple quarkus.oidc.client-id=${apple.client-id} quarkus.oidc.credentials.jwt.key-file=ecPrivateKey.pem quarkus.oidc.credentials.jwt.token-key-id=${apple.key-id} # Apple provider configuration sets ES256 signature algorithm quarkus.oidc.credentials.jwt.subject=${apple.subject} quarkus.oidc.credentials.jwt.issuer=${apple.issuer} 3.2.2.3. Mutual TLS (mTLS) Some OIDC providers might require that a client is authenticated as part of the mutual TLS authentication process. The following example shows how you can configure quarkus-oidc to support mTLS : quarkus.oidc.tls.verification=certificate-validation # Keystore configuration quarkus.oidc.tls.key-store-file=client-keystore.jks quarkus.oidc.tls.key-store-password=${key-store-password} # Add more keystore properties if needed: #quarkus.oidc.tls.key-store-alias=keyAlias #quarkus.oidc.tls.key-store-alias-password=keyAliasPassword # Truststore configuration quarkus.oidc.tls.trust-store-file=client-truststore.jks quarkus.oidc.tls.trust-store-password=${trust-store-password} # Add more truststore properties if needed: #quarkus.oidc.tls.trust-store-alias=certAlias 3.2.2.4. POST query Some providers, such as the Strava OAuth2 provider , require client credentials to be posted as HTTP POST query parameters: quarkus.oidc.provider=strava quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret quarkus.oidc.credentials.client-secret.method=query 3.2.2.5. Introspection endpoint authentication Some OIDC providers require authentication to their introspection endpoint by using Basic authentication and with credentials that are different from the client_id and client_secret .
If you have previously configured security authentication to support either the client_secret_basic or client_secret_post client authentication methods as described in the OIDC provider client authentication section, you might need to apply the additional configuration as follows. If the tokens have to be introspected and the introspection endpoint-specific authentication mechanism is required, you can configure quarkus-oidc as follows: quarkus.oidc.introspection-credentials.name=introspection-user-name quarkus.oidc.introspection-credentials.secret=introspection-user-secret 3.2.3. OIDC request filters You can filter OIDC requests made by Quarkus to the OIDC provider by registering one or more OidcRequestFilter implementations, which can update or add new request headers and can also log requests. For example: package io.quarkus.it.keycloak; import io.quarkus.oidc.OidcConfigurationMetadata; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable public class OidcTokenRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProps) { OidcConfigurationMetadata metadata = contextProps.get(OidcConfigurationMetadata.class.getName()); 1 // Metadata URI is absolute, request URI value is relative if (metadata.getTokenUri().endsWith(request.uri())) { 2 request.putHeader("TokenGrantDigest", calculateDigest(buffer.toString())); } } private String calculateDigest(String bodyString) { // Apply the required digest algorithm to the body string } } 1 Get OidcConfigurationMetadata , which contains all supported OIDC endpoint addresses. 2 Use OidcConfigurationMetadata to filter requests to the OIDC token endpoint only. Alternatively, you can use an @OidcEndpoint annotation to apply this filter to the token endpoint requests only: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcEndpoint; import io.quarkus.oidc.common.OidcEndpoint.Type; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable @OidcEndpoint(value = Type.DISCOVERY) 1 public class OidcDiscoveryRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProps) { request.putHeader("Discovery", "OK"); } } 1 Restrict this filter to requests targeting the OIDC discovery endpoint only. 3.2.4. Redirecting to and from the OIDC provider When a user is redirected to the OIDC provider to authenticate, the redirect URL includes a redirect_uri query parameter, which indicates to the provider where the user has to be redirected to when the authentication is complete. In our case, this is the Quarkus application. Quarkus sets this parameter to the current application request URL by default. For example, if a user is trying to access a Quarkus service endpoint at http://localhost:8080/service/1 , then the redirect_uri parameter is set to http://localhost:8080/service/1 . 
Similarly, if the request URL is http://localhost:8080/service/2 , then the redirect_uri parameter is set to http://localhost:8080/service/2 . Some OIDC providers require the redirect_uri to have the same value for a given application, for example, http://localhost:8080/service/callback , for all the redirect URLs. In such cases, a quarkus.oidc.authentication.redirect-path property has to be set. For example, quarkus.oidc.authentication.redirect-path=/service/callback , and Quarkus will set the redirect_uri parameter to an absolute URL such as http://localhost:8080/service/callback , which will be the same regardless of the current request URL. If quarkus.oidc.authentication.redirect-path is set, but you need the original request URL to be restored after the user is redirected back to a unique callback URL, for example, http://localhost:8080/service/callback , set quarkus.oidc.authentication.restore-path-after-redirect property to true . This will restore the request URL such as http://localhost:8080/service/1 . 3.2.4.1. Customizing authentication requests By default, only the response_type (set to code ), scope (set to openid ), client_id , redirect_uri , and state properties are passed as HTTP query parameters to the OIDC provider's authorization endpoint when the user is redirected to it to authenticate. You can add more properties to it with quarkus.oidc.authentication.extra-params . For example, some OIDC providers might choose to return the authorization code as part of the redirect URI's fragment, which would break the authentication process. The following example shows how you can work around this issue: quarkus.oidc.authentication.extra-params.response_mode=query See also the OIDC redirect filters section explaining how a custom OidcRedirectFilter can be used to customize OIDC redirects, including those to the OIDC authorization endpoint. 3.2.4.2. Customizing the authentication error response When the user is redirected to the OIDC authorization endpoint to authenticate and, if necessary, authorize the Quarkus application, this redirect request might fail, for example, when an invalid scope is included in the redirect URI. In such cases, the provider redirects the user back to Quarkus with error and error_description parameters instead of the expected code parameter. For example, this can happen when an invalid scope or other invalid parameters are included in the redirect to the provider. In such cases, an HTTP 401 error is returned by default. However, you can request that a custom public error endpoint be called to return a more user-friendly HTML error page. To do this, set the quarkus.oidc.authentication.error-path property, as shown in the following example: quarkus.oidc.authentication.error-path=/error Ensure that the property starts with a forward slash (/) character and the path is relative to the base URI of the current endpoint. For example, if it is set to '/error' and the current request URI is https://localhost:8080/callback?error=invalid_scope , then a final redirect is made to https://localhost:8080/error?error=invalid_scope . Important To prevent the user from being redirected to this page to be re-authenticated, ensure that this error endpoint is a public resource. 3.2.5. OIDC redirect filters You can register one or more io.quarkus.oidc.OidcRedirectFilter implementations to filter OIDC redirects to OIDC authorization and logout endpoints but also local redirects to custom error and session expired pages. 
A custom OidcRedirectFilter can add additional query parameters and response headers and set new cookies. For example, the following simple custom OidcRedirectFilter adds an additional query parameter and a custom response header for all redirect requests that can be done by Quarkus OIDC: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.OidcRedirectFilter; @ApplicationScoped @Unremovable public class GlobalOidcRedirectFilter implements OidcRedirectFilter { @Override public void filter(OidcRedirectContext context) { if (context.redirectUri().contains("/session-expired-page")) { context.additionalQueryParams().add("redirect-filtered", "true,"); 1 context.routingContext().response().putHeader("Redirect-Filtered", "true"); 2 } } } 1 Add an additional query parameter. Note the query names and values are URL-encoded by Quarkus OIDC, so a redirect-filtered=true%2C query parameter is added to the redirect URI in this case. 2 Add a custom HTTP response header. See also the Customizing authentication requests section for how to configure additional query parameters for the OIDC authorization endpoint. A custom OidcRedirectFilter for local error and session expired pages can also create secure cookies to help with generating such pages. For example, let's assume you need to redirect the current user whose session has expired to a custom session expired page available at http://localhost:8080/session-expired-page . The following custom OidcRedirectFilter encrypts the user name in a custom session_expired cookie using an OIDC tenant client secret: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import org.eclipse.microprofile.jwt.Claims; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.AuthorizationCodeTokens; import io.quarkus.oidc.OidcRedirectFilter; import io.quarkus.oidc.Redirect; import io.quarkus.oidc.Redirect.Location; import io.quarkus.oidc.TenantFeature; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.jwt.build.Jwt; @ApplicationScoped @Unremovable @TenantFeature("tenant-refresh") @Redirect(Location.SESSION_EXPIRED_PAGE) 1 public class SessionExpiredOidcRedirectFilter implements OidcRedirectFilter { @Override public void filter(OidcRedirectContext context) { if (context.redirectUri().contains("/session-expired-page")) { AuthorizationCodeTokens tokens = context.routingContext().get(AuthorizationCodeTokens.class.getName()); 2 String userName = OidcUtils.decodeJwtContent(tokens.getIdToken()).getString(Claims.preferred_username.name()); 3 String jwe = Jwt.preferredUserName(userName).jwe() .encryptWithSecret(context.oidcTenantConfig().credentials.secret.get()); 4 OidcUtils.createCookie(context.routingContext(), context.oidcTenantConfig(), "session_expired", jwe + "|" + context.oidcTenantConfig().tenantId.get(), 10); 5 } } } 1 Make sure this redirect filter is only called during a redirect to the session expired page. 2 Access AuthorizationCodeTokens tokens associated with the now expired session as a RoutingContext attribute. 3 Decode ID token claims and get a user name. 4 Save the user name in a JWT token encrypted with the current OIDC tenant's client secret. 5 Create a custom session_expired cookie valid for 10 seconds which joins the encrypted token and a tenant id using a "|" separator. Recording a tenant id in a custom cookie can help to generate correct session expired pages in a multi-tenant OIDC setup.
Next, a public JAX-RS resource which generates session expired pages can use this cookie to create a page tailored for this user and the corresponding OIDC tenant, for example: package io.quarkus.it.keycloak; import jakarta.inject.Inject; import jakarta.ws.rs.CookieParam; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.Claims; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.runtime.OidcUtils; import io.quarkus.oidc.runtime.TenantConfigBean; import io.smallrye.jwt.auth.principal.DefaultJWTParser; import io.vertx.ext.web.RoutingContext; @Path("/session-expired-page") public class SessionExpiredResource { @Inject RoutingContext context; @Inject TenantConfigBean tenantConfig; 1 @GET public String sessionExpired(@CookieParam("session_expired") String sessionExpired) throws Exception { // Cookie format: jwt|<tenant id> String[] pair = sessionExpired.split("\\|"); 2 OidcTenantConfig oidcConfig = tenantConfig.getStaticTenantsConfig().get(pair[1]).getOidcTenantConfig(); 3 JsonWebToken jwt = new DefaultJWTParser().decrypt(pair[0], oidcConfig.credentials.secret.get()); 4 OidcUtils.removeCookie(context, oidcConfig, "session_expired"); 5 return jwt.getClaim(Claims.preferred_username) + ", your session has expired. " + "Please login again at http://localhost:8081/" + oidcConfig.tenantId.get(); 6 } } 1 Inject TenantConfigBean which can be used to access all the current OIDC tenant configurations. 2 Split the custom cookie value into two parts: the first part is the encrypted token and the last part is the tenant id. 3 Get the OIDC tenant configuration. 4 Decrypt the cookie value using the OIDC tenant's client secret. 5 Remove the custom cookie. 6 Use the username in the decrypted token and the tenant id to generate the session expired page response. 3.2.6. Accessing authorization data You can access information about authorization in different ways. 3.2.6.1. Accessing ID and access tokens The OIDC code authentication mechanism acquires three tokens during the authorization code flow: ID token , access token, and refresh token. The ID token is always a JWT token and represents a user authentication with the JWT claims. You can use this to get the issuing OIDC endpoint, the username, and other information called claims . You can access ID token claims by injecting JsonWebToken with an IdToken qualifier: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; import io.quarkus.security.Authenticated; @Path("/web-app") @Authenticated public class ProtectedResource { @Inject @IdToken JsonWebToken idToken; @GET public String getUserName() { return idToken.getName(); } }
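For illustration only, other individual claims can be read from the same injected ID token. The following sketch assumes the provider includes the standard OIDC email claim in the ID token, which typically requires the email scope to be requested; the resource class and path are hypothetical:
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

import org.eclipse.microprofile.jwt.JsonWebToken;

import io.quarkus.oidc.IdToken;
import io.quarkus.security.Authenticated;

@Path("/web-app")
@Authenticated
public class UserEmailResource {

    @Inject
    @IdToken
    JsonWebToken idToken;

    // Returns the 'email' claim if the provider included it in the ID token
    @GET
    @Path("email")
    public String getUserEmail() {
        return idToken.getClaim("email");
    }
}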
The OIDC web-app application usually uses the access token to access other endpoints on behalf of the currently logged-in user. You can access the raw access token as follows: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.AccessTokenCredential; import io.quarkus.security.Authenticated; @Path("/web-app") @Authenticated public class ProtectedResource { @Inject JsonWebToken accessToken; // or // @Inject // AccessTokenCredential accessTokenCredential; @GET public String getReservationOnBehalfOfUser() { String rawAccessToken = accessToken.getRawToken(); //or //String rawAccessToken = accessTokenCredential.getToken(); // Use the raw access token to access a remote endpoint. // For example, use RestClient to set this token as a `Bearer` scheme value of the HTTP `Authorization` header: // `Authorization: Bearer rawAccessToken`. return getReservationFromRemoteEndpoint(rawAccessToken); } } Note When an authorization code flow access token is injected as JsonWebToken , its verification is automatically enabled, in addition to the mandatory ID token verification. If really needed, you can disable this code flow access token verification with quarkus.oidc.authentication.verify-access-token=false . Note AccessTokenCredential is used if the access token issued to the Quarkus web-app application is opaque (binary) and cannot be parsed to a JsonWebToken or if the inner content is necessary for the application. Injection of the JsonWebToken and AccessTokenCredential is supported in both @RequestScoped and @ApplicationScoped contexts. Quarkus OIDC uses the refresh token to refresh the current ID and access tokens as part of its session management process. 3.2.6.2. User info If the ID token does not provide enough information about the currently authenticated user, you can get more information from the UserInfo endpoint. Set the quarkus.oidc.authentication.user-info-required=true property to request a UserInfo JSON object from the OIDC UserInfo endpoint. A request is sent to the OIDC provider UserInfo endpoint by using the access token returned with the authorization code grant response, and an io.quarkus.oidc.UserInfo (a simple jakarta.json.JsonObject wrapper) object is created. io.quarkus.oidc.UserInfo can be injected or accessed as a SecurityIdentity userinfo attribute. quarkus.oidc.authentication.user-info-required is automatically enabled if one of these conditions is met: if quarkus.oidc.roles.source is set to userinfo or quarkus.oidc.token.verify-access-token-with-user-info is set to true or quarkus.oidc.authentication.id-token-required is set to false , the current OIDC tenant must support a UserInfo endpoint in these cases. if an io.quarkus.oidc.UserInfo injection point is detected but only if the current OIDC tenant supports a UserInfo endpoint. 3.2.6.3. Accessing the OIDC configuration information The current tenant's discovered OpenID Connect configuration metadata is represented by io.quarkus.oidc.OidcConfigurationMetadata and can be injected or accessed as a SecurityIdentity configuration-metadata attribute. The default tenant's OidcConfigurationMetadata is injected if the endpoint is public. 3.2.6.4. Mapping token claims and SecurityIdentity roles The way the roles are mapped to the SecurityIdentity roles from the verified tokens is identical to how it is done for the Bearer tokens . The only difference is that the ID token is used as the source of the roles by default. Note If you use Keycloak, set a microprofile-jwt client scope for the ID token to contain a groups claim. For more information, see the Keycloak server administration guide .
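As an illustration, if your provider emits roles under a non-standard ID token claim, you can point Quarkus at it with a claim path; the my_roles claim name in the following sketch is a hypothetical example, and nested paths, for example realm_access/roles , are also accepted:
# Hypothetical claim name; use the claim path that your provider actually emits
quarkus.oidc.roles.role-claim-path=my_roles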
However, depending on your OIDC provider, roles might be stored in the access token or the user info. If the access token contains the roles and this access token is not meant to be propagated to the downstream endpoints, then set quarkus.oidc.roles.source=accesstoken . If UserInfo is the source of the roles, then set quarkus.oidc.roles.source=userinfo , and if needed, quarkus.oidc.roles.role-claim-path . Additionally, you can use a custom SecurityIdentityAugmentor to add the roles. For more information, see SecurityIdentity customization . You can also map SecurityIdentity roles created from token claims to deployment-specific roles with the HTTP Security policy . 3.2.7. Ensuring validity of tokens and authentication data A core part of the authentication process is ensuring the chain of trust and validity of the information. This is done by ensuring tokens can be trusted. 3.2.7.1. Token verification and introspection The verification process of OIDC authorization code flow tokens follows the Bearer token authentication token verification and introspection logic. For more information, see the Token verification and introspection section of the "Quarkus OpenID Connect (OIDC) Bearer token authentication" guide. Note With Quarkus web-app applications, only the IdToken is verified by default because the access token is not used to access the current Quarkus web-app endpoint and is intended to be propagated to the services expecting this access token. If you expect the access token to contain the roles required to access the current Quarkus endpoint ( quarkus.oidc.roles.source=accesstoken ), then it will also be verified. 3.2.7.2. Token introspection and UserInfo cache Code flow access tokens are not introspected unless they are expected to be the source of roles. However, they will be used to get UserInfo . There will be one or two remote calls with the code flow access token if the token introspection, UserInfo , or both are required. For more information about using the default token cache or registering a custom cache implementation, see Token introspection and UserInfo cache . 3.2.7.3. JSON web token claim verification For information about the claim verification, including the iss (issuer) claim, see the JSON Web Token claim verification section. It applies to ID tokens and also to access tokens in JWT format, if the web-app application has requested the access token verification. 3.2.7.4. Jose4j Validator You can register a custom Jose4j Validator to customize the JWT claim verification process. See the Jose4j section for more information. 3.2.8. Proof Key for Code Exchange (PKCE) Proof Key for Code Exchange (PKCE) minimizes the risk of authorization code interception. While PKCE is of primary importance to public OIDC clients, such as SPA scripts running in a browser, it can also provide extra protection to Quarkus OIDC web-app applications. With PKCE, Quarkus OIDC web-app applications act as confidential OIDC clients that can securely store the client secret and use it to exchange the code for the tokens.
You can enable PKCE for your OIDC web-app endpoint with a quarkus.oidc.authentication.pkce-required property and a 32-character secret that is required to encrypt the PKCE code verifier in the state cookie, as shown in the following example: quarkus.oidc.authentication.pkce-required=true quarkus.oidc.authentication.state-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU If you already have a 32-character client secret, you do not need to set the quarkus.oidc.authentication.pkce-secret property unless you prefer to use a different secret key. This secret will be auto-generated if it is not configured and if the fallback to the client secret is not possible in cases where the client secret is less than 16 characters long. The secret key is required to encrypt a randomly generated PKCE code_verifier while the user is redirected with the code_challenge query parameter to an OIDC provider to authenticate. The code_verifier is decrypted when the user is redirected back to Quarkus and sent to the token endpoint alongside the code , client secret, and other parameters to complete the code exchange. The provider will fail the code exchange if a SHA256 digest of the code_verifier does not match the code_challenge that was provided during the authentication request. 3.2.9. Handling and controlling the lifetime of authentication Another important requirement for authentication is to ensure that the data the session is based on is up-to-date without requiring the user to authenticate for every single request. There are also situations where a logout event is explicitly requested. Use the following key points to find the right balance for securing your Quarkus applications: 3.2.9.1. Cookies The OIDC adapter uses cookies to keep the session, code flow, and post-logout state. This state is a key element controlling the lifetime of authentication data. Use the quarkus.oidc.authentication.cookie-path property to ensure that the same cookie is visible when you access protected resources with overlapping or different roots. For example: /index.html and /web-app/service /web-app/service1 and /web-app/service2 /web-app1/service and /web-app2/service By default, quarkus.oidc.authentication.cookie-path is set to / but you can change this to a more specific path if required, for example, /web-app . To set the cookie path dynamically, configure the quarkus.oidc.authentication.cookie-path-header property. For example, to set the cookie path dynamically by using the value of the X-Forwarded-Prefix HTTP header, configure the property to quarkus.oidc.authentication.cookie-path-header=X-Forwarded-Prefix . If quarkus.oidc.authentication.cookie-path-header is set but no configured HTTP header is available in the current request, then the quarkus.oidc.authentication.cookie-path will be checked. If your application is deployed across multiple domains, set the quarkus.oidc.authentication.cookie-domain property so that the session cookie is visible to all protected Quarkus services. For example, if you have Quarkus services deployed on the following two domains, then you must set the quarkus.oidc.authentication.cookie-domain property to company.net : https://whatever.wherever.company.net/ https://another.address.company.net/ 3.2.9.2. State cookies State cookies are used to support authorization code flow completion. When an authorization code flow is started, Quarkus creates a state cookie and a matching state query parameter, before redirecting the user to the OIDC provider. 
When the user is redirected back to Quarkus to complete the authorization code flow, Quarkus expects that the request URI must contain the state query parameter and it must match the current state cookie value. The default state cookie age is 5 minutes and you can change it with the quarkus.oidc.authentication.state-cookie-age Duration property. Quarkus creates a unique state cookie name every time a new authorization code flow is started to support multi-tab authentication. Many concurrent authentication requests on behalf of the same user might cause a lot of state cookies to be created. If you do not want to allow your users to use multiple browser tabs to authenticate, it is recommended to disable this behavior with quarkus.oidc.authentication.allow-multiple-code-flows=false . Disabling it also ensures that the same state cookie name is created for every new user authentication. 3.2.9.3. Session cookie and default TokenStateManager OIDC CodeAuthenticationMechanism uses the default io.quarkus.oidc.TokenStateManager interface implementation to keep the ID, access, and refresh tokens returned in the authorization code or refresh grant responses in an encrypted session cookie. It makes Quarkus OIDC endpoints completely stateless and it is recommended to follow this strategy to achieve the best scalability results. See the Session cookie and custom TokenStateManager section for alternative methods of token storage. This is ideal for those seeking customized solutions for token state management, especially when standard server-side storage does not meet your specific requirements. You can configure the default TokenStateManager to avoid saving an access token in the session cookie and to keep only the ID and refresh tokens or only a single ID token. An access token is only required if the endpoint needs to do the following actions: Retrieve UserInfo Access the downstream service with this access token Use the roles associated with the access token, which are checked by default In such cases, use the quarkus.oidc.token-state-manager.strategy property to configure the token state strategy as follows: to keep the ID and refresh tokens only, set quarkus.oidc.token-state-manager.strategy=id-refresh-tokens ; to keep the ID token only, set quarkus.oidc.token-state-manager.strategy=id-token . If your chosen session cookie strategy combines tokens and generates a large session cookie value that is greater than 4KB, some browsers might not be able to handle such cookie sizes. This can occur when the ID, access, and refresh tokens are JWT tokens and the selected strategy is keep-all-tokens or with ID and refresh tokens when the strategy is id-refresh-tokens . To work around this issue, you can set quarkus.oidc.token-state-manager.split-tokens=true to create a unique session token for each token. The default TokenStateManager encrypts the tokens before storing them in the session cookie. The following example shows how you configure it to split the tokens and encrypt them: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.token-state-manager.split-tokens=true quarkus.oidc.token-state-manager.encryption-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU The token encryption secret must be at least 32 characters long. If this key is not configured, then either quarkus.oidc.credentials.secret or quarkus.oidc.credentials.jwt.secret will be hashed to create an encryption key.
Configure the quarkus.oidc.token-state-manager.encryption-secret property if Quarkus authenticates to the OIDC provider by using one of the following authentication methods: mTLS private_key_jwt , where a private RSA or EC key is used to sign a JWT token Otherwise, a random key is generated, which can be problematic if the Quarkus application is running in the cloud with multiple pods managing the requests. You can disable token encryption in the session cookie by setting quarkus.oidc.token-state-manager.encryption-required=false . 3.2.9.4. Session cookie and custom TokenStateManager If you want to customize the way the tokens are associated with the session cookie, register a custom io.quarkus.oidc.TokenStateManager implementation as an @ApplicationScoped CDI bean. For example, you might want to keep the tokens in a cache cluster and have only a key stored in a session cookie. Note that this approach might introduce some challenges if you need to make the tokens available across multiple microservices nodes. Here is a simple example: package io.quarkus.oidc.test; import jakarta.annotation.Priority; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Alternative; import jakarta.inject.Inject; import io.quarkus.oidc.AuthorizationCodeTokens; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TokenStateManager; import io.quarkus.oidc.runtime.DefaultTokenStateManager; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped @Alternative @Priority(1) public class CustomTokenStateManager implements TokenStateManager { @Inject DefaultTokenStateManager tokenStateManager; @Override public Uni<String> createTokenState(RoutingContext routingContext, OidcTenantConfig oidcConfig, AuthorizationCodeTokens sessionContent, OidcRequestContext<String> requestContext) { return tokenStateManager.createTokenState(routingContext, oidcConfig, sessionContent, requestContext) .map(t -> (t + "|custom")); } @Override public Uni<AuthorizationCodeTokens> getTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, OidcRequestContext<AuthorizationCodeTokens> requestContext) { if (!tokenState.endsWith("|custom")) { throw new IllegalStateException(); } String defaultState = tokenState.substring(0, tokenState.length() - 7); return tokenStateManager.getTokens(routingContext, oidcConfig, defaultState, requestContext); } @Override public Uni<Void> deleteTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, OidcRequestContext<Void> requestContext) { if (!tokenState.endsWith("|custom")) { throw new IllegalStateException(); } String defaultState = tokenState.substring(0, tokenState.length() - 7); return tokenStateManager.deleteTokens(routingContext, oidcConfig, defaultState, requestContext); } } For information about the default TokenStateManager storing tokens in an encrypted session cookie, see Session cookie and default TokenStateManager . 3.2.10. Logout and expiration There are two main ways for the authentication information to expire: the tokens expired and were not renewed or an explicit logout operation was triggered. Let's start with explicit logout operations. 3.2.10.1. User-initiated logout Users can request a logout by sending a request to the Quarkus endpoint logout path set with a quarkus.oidc.logout.path property. 
For example, if the endpoint address is https://application.com/webapp and the quarkus.oidc.logout.path is set to /logout , then the logout request must be sent to https://application.com/webapp/logout . This logout request starts an RP-initiated logout . The user will be redirected to the OIDC provider to log out, where they can be asked to confirm the logout is indeed intended. The user will be returned to the endpoint post-logout page once the logout has been completed and if the quarkus.oidc.logout.post-logout-path property is set. For example, if the endpoint address is https://application.com/webapp and the quarkus.oidc.logout.post-logout-path is set to /signin , then the user will be returned to https://application.com/webapp/signin . Note, this URI must be registered as a valid post_logout_redirect_uri in the OIDC provider. If the quarkus.oidc.logout.post-logout-path is set, then a q_post_logout cookie will be created and a matching state query parameter will be added to the logout redirect URI and the OIDC provider will return this state once the logout has been completed. It is recommended for the Quarkus web-app applications to check that a state query parameter matches the value of the q_post_logout cookie, which can be done, for example, in a Jakarta REST filter. Note that a cookie name varies when using OpenID Connect Multi-Tenancy . For example, it will be named q_post_logout_tenant_1 for a tenant with a tenant_1 ID, and so on. Here is an example of how to configure a Quarkus application to initiate a logout flow: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.path=/logout # Logged-out users should be returned to the /welcome.html site which will offer an option to re-login: quarkus.oidc.logout.post-logout-path=/welcome.html # Only the authenticated users can initiate a logout: quarkus.http.auth.permission.authenticated.paths=/logout quarkus.http.auth.permission.authenticated.policy=authenticated # All users can see the Welcome page: quarkus.http.auth.permission.public.paths=/welcome.html quarkus.http.auth.permission.public.policy=permit You might also want to set quarkus.oidc.authentication.cookie-path to a path value common to all the application resources, which is / in this example. For more information, see the Cookies section. Note Some OIDC providers do not support a RP-initiated logout specification and do not return an OpenID Connect well-known end_session_endpoint metadata property. However, this is not a problem for Quarkus because the specific logout mechanisms of such OIDC providers only differ in how the logout URL query parameters are named. According to the RP-initiated logout specification, the quarkus.oidc.logout.post-logout-path property is represented as a post_logout_redirect_uri query parameter, which is not recognized by the providers that do not support this specification. You can use quarkus.oidc.logout.post-logout-url-param to work around this issue. You can also request more logout query parameters added with quarkus.oidc.logout.extra-params . 
For example, here is how you can support a logout with Auth0 : quarkus.oidc.auth-server-url=https://dev-xxx.us.auth0.com quarkus.oidc.client-id=redacted quarkus.oidc.credentials.secret=redacted quarkus.oidc.application-type=web-app quarkus.oidc.logout.path=/logout quarkus.oidc.logout.post-logout-path=/welcome.html # Auth0 does not return the `end_session_endpoint` metadata property. Instead, you must configure it: quarkus.oidc.end-session-path=v2/logout # Auth0 will not recognize the 'post_logout_redirect_uri' query parameter so ensure it is named as 'returnTo': quarkus.oidc.logout.post-logout-uri-param=returnTo # Set more properties if needed. # For example, if 'client_id' is provided, then a valid logout URI should be set as the Auth0 Application property, without it - as Auth0 Tenant property: quarkus.oidc.logout.extra-params.client_id=${quarkus.oidc.client-id} 3.2.10.2. Back-channel logout The OIDC provider can force the logout of all applications by using the authentication data. This is known as back-channel logout. In this case, the OIDC provider will call a specific URL from each application to trigger that logout. OIDC providers use Back-channel logout to log out the current user from all the applications into which this user is currently logged in, bypassing the user agent. You can configure Quarkus to support Back-channel logout as follows: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.backchannel.path=/back-channel-logout The absolute back-channel logout URL is calculated by adding quarkus.oidc.logout.backchannel.path to the current endpoint URL, for example, http://localhost:8080/back-channel-logout . You will need to configure this URL in the admin console of your OIDC provider. You will also need to configure a token age property for the logout token verification to succeed if your OIDC provider does not set an expiry claim in the current logout token. For example, set quarkus.oidc.token.age=10S to ensure that no more than 10 seconds elapse since the logout token's iat (issued at) time. 3.2.10.3. Front-channel logout You can use Front-channel logout to log out the current user directly from the user agent, for example, the browser. It is similar to Back-channel logout but the logout steps are executed by the user agent, such as the browser, and not in the background by the OIDC provider. This option is rarely used. You can configure Quarkus to support Front-channel logout as follows: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.frontchannel.path=/front-channel-logout This path will be compared to the current request's path, and the user will be logged out if these paths match. 3.2.10.4. Local logout User-initiated logout will log the user out of the OIDC provider. If it is used as single sign-on, it might not be what you require. If, for example, your OIDC provider is Google, you will be logged out from Google and its services. Instead, the user might just want to log out of that specific application. Another use case might be when the OIDC provider does not have a logout endpoint.
By using OidcSession , you can support a local logout, which means that only the local session cookie is cleared, as shown in the following example: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.OidcSession; @Path("/service") public class ServiceResource { @Inject OidcSession oidcSession; @GET @Path("logout") public String logout() { oidcSession.logout().await().indefinitely(); return "You are logged out"; } } 3.2.10.5. Using OidcSession for local logout io.quarkus.oidc.OidcSession is a wrapper around the current IdToken , which can help to perform a Local logout , retrieve the current session's tenant identifier, and check when the session will expire. More useful methods will be added to it over time. 3.2.10.6. Session management By default, logout is based on the expiration time of the ID token issued by the OIDC provider. When the ID token expires, the current user session at the Quarkus endpoint is invalidated, and the user is redirected to the OIDC provider again to authenticate. If the session at the OIDC provider is still active, users are automatically re-authenticated without needing to provide their credentials again. The current user session can be automatically extended by enabling the quarkus.oidc.token.refresh-expired property. If set to true , when the current ID token expires, a refresh token grant will be used to refresh the ID token as well as access and refresh tokens. If you work with a Quarkus OIDC web-app application, then the Quarkus OIDC code authentication mechanism manages the user session lifespan. To use the refresh token, you should carefully configure the session cookie age. The session age should be longer than the ID token lifespan and close to or equal to the refresh token lifespan. You calculate the session age by adding the lifespan value of the current ID token and the values of the quarkus.oidc.authentication.session-age-extension and quarkus.oidc.token.lifespan-grace properties. Tip You use only the quarkus.oidc.authentication.session-age-extension property to significantly extend the session lifespan, if required. You use the quarkus.oidc.token.lifespan-grace property only to consider some small clock skews. When the current authenticated user returns to the protected Quarkus endpoint and the ID token associated with the session cookie has expired, then, by default, the user is automatically redirected to the OIDC Authorization endpoint to re-authenticate. The OIDC provider might challenge the user again if the session between the user and this OIDC provider is still active, which might happen if the session is configured to last longer than the ID token. If the quarkus.oidc.token.refresh-expired is set to true , then the expired ID token (and the access token) is refreshed by using the refresh token returned with the initial authorization code grant response. This refresh token might also be recycled (refreshed) itself as part of this process. As a result, the new session cookie is created, and the session is extended. Note In instances where the user is not very active, you can use the quarkus.oidc.authentication.session-age-extension property to help handle expired ID tokens. If the ID token expires, the session cookie might not be returned to the Quarkus endpoint during the user request as the cookie lifespan would have elapsed. Quarkus assumes that this request is the first authentication request. 
Set quarkus.oidc.authentication.session-age-extension to be reasonably long for your barely-active users and in accordance with your security policies. You can go one step further and proactively refresh ID tokens or access tokens that are about to expire. Set quarkus.oidc.token.refresh-token-time-skew to the interval by which you want to anticipate the refresh. If, during the current user request, it is calculated that the current ID token will expire within this quarkus.oidc.token.refresh-token-time-skew , then it is refreshed, and the new session cookie is created. This property should be set to a value that is less than the ID token lifespan; the closer it is to this lifespan value, the more often the ID token is refreshed. You can further optimize this process by having a simple JavaScript function ping your Quarkus endpoint periodically to emulate the user activity, which minimizes the time frame during which the user might have to be re-authenticated. Note When the session cannot be refreshed, the currently authenticated user is redirected to the OIDC provider to re-authenticate. However, the user experience might not be ideal in such cases if the user, after an earlier successful authentication, suddenly sees an OIDC authentication challenge screen when trying to access an application page. Instead, you can request that the user is redirected to a public, application-specific session expired page first. This page informs the user that the session has expired and advises them to re-authenticate by following a link to a secured application welcome page. The user clicks on the link and Quarkus OIDC enforces a redirect to the OIDC provider to re-authenticate. To do this, set the quarkus.oidc.authentication.session-expired-page property to a relative path. For example, setting quarkus.oidc.authentication.session-expired-page=/session-expired-page will ensure that the user whose session has expired is redirected to http://localhost:8080/session-expired-page , assuming the application is available at http://localhost:8080 . See also the OIDC redirect filters section explaining how a custom OidcRedirectFilter can be used to customize OIDC redirects, including those to the session expired pages. Note You cannot extend the user session indefinitely. The returning user with the expired ID token will have to re-authenticate at the OIDC provider endpoint once the refresh token has expired. 3.2.11. Integration with GitHub and non-OIDC OAuth2 providers Some well-known providers such as GitHub or LinkedIn are not OpenID Connect providers, but OAuth2 providers that support the authorization code flow . For example, GitHub OAuth2 and LinkedIn OAuth2 . Remember, OIDC is built on top of OAuth2. The main difference between OIDC and OAuth2 providers is that OIDC providers return an ID Token that represents a user authentication, in addition to the standard authorization code flow access and refresh tokens returned by OAuth2 providers. OAuth2 providers such as GitHub do not return IdToken , and the user authentication is implicit and indirectly represented by the access token. This access token represents an authenticated user authorizing the current Quarkus web-app application to access some data on behalf of the authenticated user. For OIDC, you validate the ID token as proof of authentication validity whereas in the case of OAuth2, you validate the access token. This is done by subsequently calling an endpoint that requires the access token and that typically returns user information.
This approach is similar to the OIDC UserInfo approach, with UserInfo fetched by Quarkus OIDC on your behalf. For example, when working with GitHub, the Quarkus endpoint can acquire an access token, which allows the Quarkus endpoint to request a GitHub profile for the current user. To support the integration with such OAuth2 servers, quarkus-oidc needs to be configured a bit differently to allow the authorization code flow responses without IdToken : quarkus.oidc.authentication.id-token-required=false . Note Even though you configure the extension to support the authorization code flows without IdToken , an internal IdToken is generated to standardize the way quarkus-oidc operates. You use an internal IdToken to support the authentication session and to avoid redirecting the user to the provider, such as GitHub, on every request. In this case, the IdToken age is set to the value of a standard expires_in property in the authorization code flow response. You can use a quarkus.oidc.authentication.internal-id-token-lifespan property to customize the ID token age. The default ID token age is 5 minutes, which you can extend further as described in the session management section. This simplifies how you handle an application that supports multiple OIDC providers. The step is to ensure that the returned access token can be useful and is valid to the current Quarkus endpoint. The first way is to call the OAuth2 provider introspection endpoint by configuring quarkus.oidc.introspection-path , if the provider offers such an endpoint. In this case, you can use the access token as a source of roles using quarkus.oidc.roles.source=accesstoken . If no introspection endpoint is present, you can attempt instead to request UserInfo from the provider as it will at least validate the access token. To do so, specify quarkus.oidc.token.verify-access-token-with-user-info=true . You also need to set the quarkus.oidc.user-info-path property to a URL endpoint that fetches the user info (or to an endpoint protected by the access token). For GitHub, since it does not have an introspection endpoint, requesting the UserInfo is required. Note Requiring UserInfo involves making a remote call on every request. Therefore, UserInfo is embedded in the internal generated IdToken and saved in the encrypted session cookie. It can be disabled with quarkus.oidc.cache-user-info-in-idtoken=false . Alternatively, you might want to consider caching UserInfo using a default or custom UserInfo cache provider. For more information, see the Token Introspection and UserInfo cache section of the "OpenID Connect (OIDC) Bearer token authentication" guide. Most well-known social OAuth2 providers enforce rate-limiting so there is a high chance you will prefer to have UserInfo cached. OAuth2 servers might not support a well-known configuration endpoint. In this case, you must disable the discovery and configure the authorization, token, and introspection and UserInfo endpoint paths manually. For well-known OIDC or OAuth2 providers, such as Apple, Facebook, GitHub, Google, Microsoft, Spotify, and X (formerly Twitter), Quarkus can help significantly simplify your application's configuration with the quarkus.oidc.provider property. Here is how you can integrate quarkus-oidc with GitHub after you have created a GitHub OAuth application . 
Configure your Quarkus endpoint like this: quarkus.oidc.provider=github quarkus.oidc.client-id=github_app_clientid quarkus.oidc.credentials.secret=github_app_clientsecret # user:email scope is requested by default, use 'quarkus.oidc.authentication.scopes' to request different scopes such as `read:user`. # See https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps for more information. # Consider enabling UserInfo Cache # quarkus.oidc.token-cache.max-size=1000 # quarkus.oidc.token-cache.time-to-live=5M # # Or having UserInfo cached inside IdToken itself # quarkus.oidc.cache-user-info-in-idtoken=true For more information about configuring other well-known providers, see OpenID Connect providers . This is all that is needed for an endpoint like this one to return the currently-authenticated user's profile with GET http://localhost:8080/github/userinfo and access it as the individual UserInfo properties: package io.quarkus.it.keycloak; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.oidc.UserInfo; import io.quarkus.security.Authenticated; @Path("/github") @Authenticated public class TokenResource { @Inject UserInfo userInfo; @GET @Path("/userinfo") @Produces("application/json") public String getUserInfo() { return userInfo.getUserInfoString(); } } If you support more than one social provider with the help of OpenID Connect Multi-Tenancy , for example, Google, which is an OIDC provider that returns IdToken , and GitHub, which is an OAuth2 provider that does not return IdToken and only allows access to UserInfo , then you can have your endpoint working with only the injected SecurityIdentity for both Google and GitHub flows. A simple augmentation of SecurityIdentity will be required where a principal created with the internally-generated IdToken will be replaced with the UserInfo -based principal when the GitHub flow is active: package io.quarkus.it.keycloak; import java.security.Principal; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.UserInfo; import io.quarkus.security.identity.AuthenticationRequestContext; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.security.identity.SecurityIdentityAugmentor; import io.quarkus.security.runtime.QuarkusSecurityIdentity; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomSecurityIdentityAugmentor implements SecurityIdentityAugmentor { @Override public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { RoutingContext routingContext = identity.getAttribute(RoutingContext.class.getName()); if (routingContext != null && routingContext.normalizedPath().endsWith("/github")) { QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity); UserInfo userInfo = identity.getAttribute("userinfo"); builder.setPrincipal(new Principal() { @Override public String getName() { return userInfo.getString("preferred_username"); } }); identity = builder.build(); } return Uni.createFrom().item(identity); } } Now, the following code will work when the user signs into your application by using Google or GitHub: package io.quarkus.it.keycloak; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.security.Authenticated; import io.quarkus.security.identity.SecurityIdentity; @Path("/service") @Authenticated public 
class TokenResource { @Inject SecurityIdentity identity; @GET @Path("/google") @Produces("application/json") public String getGoogleUserName() { return identity.getPrincipal().getName(); } @GET @Path("/github") @Produces("application/json") public String getGitHubUserName() { return identity.getPrincipal().getName(); } } Possibly a simpler alternative is to inject both @IdToken JsonWebToken and UserInfo and use JsonWebToken when handling the providers that return IdToken and use UserInfo with the providers that do not return IdToken . You must ensure that the callback path you enter in the GitHub OAuth application configuration matches the endpoint path where you want the user to be redirected after a successful GitHub authentication and application authorization. In this case, it has to be set to http://localhost:8080/github/userinfo . 3.2.12. Listening to important authentication events You can register the @ApplicationScoped bean which will observe important OIDC authentication events. When a user logs in for the first time, re-authenticates, or refreshes the session, the listener is updated. In the future, more events might be reported. For example: import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import io.quarkus.oidc.SecurityEvent; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class SecurityEventListener { public void event(@Observes SecurityEvent event) { String tenantId = event.getSecurityIdentity().getAttribute("tenant-id"); RoutingContext vertxContext = event.getSecurityIdentity().getAttribute(RoutingContext.class.getName()); vertxContext.put("listener-message", String.format("event:%s,tenantId:%s", event.getEventType().name(), tenantId)); } } Tip You can listen to other security events as described in the Observe security events section of the Security Tips and Tricks guide. 3.2.13. Propagating tokens to downstream services For information about Authorization Code Flow access token propagation to downstream services, see the Token Propagation section. 3.3. Integration considerations Your application secured by OIDC integrates in an environment where it can be called from single-page applications. It must work with well-known OIDC providers, run behind HTTP Reverse Proxy, require external and internal access, and so on. This section discusses these considerations. 3.3.1. Single-page applications If you prefer to use SPAs and JavaScript APIs such as Fetch or XMLHttpRequest (XHR) with Quarkus web applications, be aware that OIDC providers might not support cross-origin resource sharing (CORS) for authorization endpoints where the users are authenticated after a redirect from Quarkus. This will lead to authentication failures if the Quarkus application and the OIDC provider are hosted on different HTTP domains, ports, or both. In such cases, set the quarkus.oidc.authentication.java-script-auto-redirect property to false , which will instruct Quarkus to return a 499 status code and a WWW-Authenticate header with the OIDC value. The browser script must set a header to identify the current request as a JavaScript request for a 499 status code to be returned when the quarkus.oidc.authentication.java-script-auto-redirect property is set to false . If the script engine sets an engine-specific request header itself, then you can register a custom quarkus.oidc.JavaScriptRequestChecker bean, which will inform Quarkus if the current request is a JavaScript request.
For example, if the JavaScript engine sets a header such as HX-Request: true , then you can have it checked like this: import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.JavaScriptRequestChecker; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomJavaScriptRequestChecker implements JavaScriptRequestChecker { @Override public boolean isJavaScriptRequest(RoutingContext context) { return "true".equals(context.request().getHeader("HX-Request")); } } and reload the last requested page in case of a 499 status code. Otherwise, you must also update the browser script to set the X-Requested-With header with the JavaScript value and reload the last requested page in case of a 499 status code. For example: Future<void> callQuarkusService() async { Map<String, String> headers = Map.fromEntries([MapEntry("X-Requested-With", "JavaScript")]); await http .get("https://localhost:443/serviceCall") .then((response) { if (response.statusCode == 499) { window.location.assign("https://localhost.com:443/serviceCall"); } }); } 3.3.2. Cross-origin resource sharing If you plan to consume this application from a single-page application running on a different domain, you need to configure cross-origin resource sharing (CORS). For more information, see the CORS filter section of the "Cross-origin resource sharing" guide. 3.3.3. Running Quarkus application behind a reverse proxy The OIDC authentication mechanism can be affected if your Quarkus application is running behind a reverse proxy, gateway, or firewall where the HTTP Host header might be reset to the internal IP address and the HTTPS connection might be terminated, and so on. For example, an authorization code flow redirect_uri parameter might be set to the internal host instead of the expected external one. In such cases, configuring Quarkus to recognize the original headers forwarded by the proxy will be required. For more information, see the Running behind a reverse proxy Vert.x documentation section. For example, if your Quarkus endpoint runs in a cluster behind Kubernetes Ingress, then a redirect from the OIDC provider back to this endpoint might not work because the calculated redirect_uri parameter might point to the internal endpoint address. You can resolve this problem by using the following configuration, where X-ORIGINAL-HOST is set by Kubernetes Ingress to represent the external endpoint address: quarkus.http.proxy.proxy-address-forwarding=true quarkus.http.proxy.allow-forwarded=false quarkus.http.proxy.enable-forwarded-host=true quarkus.http.proxy.forwarded-host-header=X-ORIGINAL-HOST The quarkus.oidc.authentication.force-redirect-https-scheme property can also be used when the Quarkus application is running behind an SSL terminating reverse proxy. 3.3.4. External and internal access to the OIDC provider The OIDC provider externally-accessible authorization, logout, and other endpoints can have different HTTP(S) URLs compared to the URLs auto-discovered or configured relative to the quarkus.oidc.auth-server-url internal URL. In such cases, the endpoint might report an issuer verification failure and redirects to the externally-accessible OIDC provider endpoints might fail. If you work with Keycloak, then start it with a KEYCLOAK_FRONTEND_URL system property set to the externally-accessible base URL. If you work with other OIDC providers, check the documentation of your provider. 3.4.
OIDC SAML identity broker If your identity provider does not implement OpenID Connect but only the legacy XML-based SAML2.0 SSO protocol, then Quarkus cannot be used as a SAML 2.0 adapter, similarly to how quarkus-oidc is used as an OIDC adapter. However, many OIDC providers such as Keycloak, Okta, Auth0, and Microsoft ADFS offer OIDC to SAML 2.0 bridges. You can create an identity broker connection to a SAML 2.0 provider in your OIDC provider and use quarkus-oidc to authenticate your users to this SAML 2.0 provider, with the OIDC provider coordinating OIDC and SAML 2.0 communications. As far as Quarkus endpoints are concerned, they can continue using the same Quarkus Security, OIDC API, annotations such as @Authenticated , SecurityIdentity , and so on. For example, assume Okta is your SAML 2.0 provider and Keycloak is your OIDC provider. Here is a typical sequence explaining how to configure Keycloak to broker with the Okta SAML 2.0 provider. First, create a new SAML2 integration in your Okta Dashboard/Applications : For example, name it as OktaSaml : Next, configure it to point to a Keycloak SAML broker endpoint. At this point, you need to know the name of the Keycloak realm, for example, quarkus , and, assuming that the Keycloak SAML broker alias is saml , enter the endpoint address as http://localhost:8081/realms/quarkus/broker/saml/endpoint . Enter the service provider (SP) entity ID as http://localhost:8081/realms/quarkus , where http://localhost:8081 is a Keycloak base address and saml is a broker alias: Next, save this SAML integration and note its Metadata URL: Next, add a SAML provider to Keycloak: First, as usual, create a new realm or import the existing realm to Keycloak . In this case, the realm name has to be quarkus . Now, in the quarkus realm properties, navigate to Identity Providers and add a new SAML provider: Note the alias is set to saml , Redirect URI is http://localhost:8081/realms/quarkus/broker/saml/endpoint and Service provider entity ID is http://localhost:8081/realms/quarkus - these are the same values you entered when creating the Okta SAML integration in the previous step. Finally, set Service entity descriptor to point to the Okta SAML Integration Metadata URL you noted at the end of the previous step. Next, if you want, you can register this Keycloak SAML provider as a default provider by navigating to Authentication/browser/Identity Provider Redirector config and setting both the Alias and Default Identity Provider properties to saml . If you do not configure it as a default provider then, at authentication time, Keycloak offers two options: Authenticate with the SAML provider Authenticate directly to Keycloak with the name and password Now, configure the Quarkus OIDC web-app application to point to the Keycloak quarkus realm, quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus . Then, you are ready to start authenticating your Quarkus users to the Okta SAML 2.0 provider by using an OIDC to SAML bridge that is provided by Keycloak OIDC and Okta SAML 2.0 providers. You can configure other OIDC providers to provide a SAML bridge similarly to how it can be done for Keycloak. 3.5. Testing Testing is often tricky when it comes to authentication to a separate OIDC-like server. Quarkus offers several options from mocking to a local run of an OIDC provider.
Start by adding the following dependencies to your test project: Using Maven: <dependency> <groupId>org.htmlunit</groupId> <artifactId>htmlunit</artifactId> <exclusions> <exclusion> <groupId>org.eclipse.jetty</groupId> <artifactId>*</artifactId> </exclusion> </exclusions> <scope>test</scope> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("org.htmlunit:htmlunit") testImplementation("io.quarkus:quarkus-junit5") 3.5.1. Wiremock Add the following dependency: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-oidc-server</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("io.quarkus:quarkus-test-oidc-server") Prepare the REST test endpoints and set application.properties . For example: # keycloak.url is set by OidcWiremockTestResource quarkus.oidc.auth-server-url=USD{keycloak.url:replaced-by-test-resource}/realms/quarkus/ quarkus.oidc.client-id=quarkus-web-app quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app Finally, write the test code, for example: import static org.junit.jupiter.api.Assertions.assertEquals; import org.junit.jupiter.api.Test; import org.htmlunit.SilentCssErrorHandler; import org.htmlunit.WebClient; import org.htmlunit.html.HtmlForm; import org.htmlunit.html.HtmlPage; import io.quarkus.test.common.QuarkusTestResource; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.server.OidcWiremockTestResource; @QuarkusTest @QuarkusTestResource(OidcWiremockTestResource.class) public class CodeFlowAuthorizationTest { @Test public void testCodeFlow() throws Exception { try (final WebClient webClient = createWebClient()) { // the test REST endpoint listens on '/code-flow' HtmlPage page = webClient.getPage("http://localhost:8081/code-flow"); HtmlForm form = page.getFormByName("form"); // user 'alice' has the 'user' role form.getInputByName("username").type("alice"); form.getInputByName("password").type("alice"); page = form.getInputByValue("login").click(); assertEquals("alice", page.getBody().asNormalizedText()); } } private WebClient createWebClient() { WebClient webClient = new WebClient(); webClient.setCssErrorHandler(new SilentCssErrorHandler()); return webClient; } } OidcWiremockTestResource recognizes alice and admin users. The user alice has the user role only by default - it can be customized with a quarkus.test.oidc.token.user-roles system property. The user admin has the user and admin roles by default - it can be customized with a quarkus.test.oidc.token.admin-roles system property. Additionally, OidcWiremockTestResource sets the token issuer and audience to https://service.example.com , which can be customized with quarkus.test.oidc.token.issuer and quarkus.test.oidc.token.audience system properties. OidcWiremockTestResource can be used to emulate all OIDC providers. 3.5.2. Dev Services for Keycloak Using Dev Services for Keycloak is recommended for integration testing against Keycloak. Dev Services for Keycloak will start and initialize a test container: it will create a quarkus realm, a quarkus-app client ( secret secret), and add alice ( admin and user roles) and bob ( user role) users, where all of these properties can be customized. First, prepare application.properties . 
You can start with a completely empty application.properties file as Dev Services for Keycloak will register quarkus.oidc.auth-server-url pointing to the running test container as well as quarkus.oidc.client-id=quarkus-app and quarkus.oidc.credentials.secret=secret . However, if you already have all the required quarkus-oidc properties configured, then you only need to associate quarkus.oidc.auth-server-url with the prod profile for Dev Services for Keycloak to start a container. For example: %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus If a custom realm file has to be imported into Keycloak before running the tests, then you can configure Dev Services for Keycloak as follows: %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.keycloak.devservices.realm-path=quarkus-realm.json Finally, write a test code the same way as it is described in the Wiremock section. The only difference is that @QuarkusTestResource is no longer needed: @QuarkusTest public class CodeFlowAuthorizationTest { } 3.5.3. TestSecurity annotation You can use @TestSecurity and @OidcSecurity annotations to test the web-app application endpoint code, which depends on either one of the following injections, or all four: ID JsonWebToken Access JsonWebToken UserInfo OidcConfigurationMetadata For more information, see Use TestingSecurity with injected JsonWebToken . 3.5.4. Checking errors in the logs To see details about the token verification errors, you must enable io.quarkus.oidc.runtime.OidcProvider TRACE level logging: quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".level=TRACE quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".min-level=TRACE To see details about the OidcProvider client initialization errors, enable io.quarkus.oidc.runtime.OidcRecorder TRACE level logging: quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".level=TRACE quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".min-level=TRACE From the quarkus dev console, type j to change the application global log level. 3.6. References OIDC configuration properties Configuring well-known OpenID Connect providers OpenID Connect and OAuth2 client and filters reference guide Dev Services for Keycloak Choosing between OpenID Connect, SmallRye JWT, and OAuth2 authentication mechanisms Combining authentication mechanisms Quarkus Security overview Keycloak documentation OpenID Connect JSON Web Token | [
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.discovery-enabled=false Authorization endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/auth quarkus.oidc.authorization-path=/protocol/openid-connect/auth Token endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token quarkus.oidc.token-path=/protocol/openid-connect/token JWK set endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/certs quarkus.oidc.jwks-path=/protocol/openid-connect/certs UserInfo endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/userinfo quarkus.oidc.user-info-path=/protocol/openid-connect/userinfo Token Introspection endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token/introspect quarkus.oidc.introspection-path=/protocol/openid-connect/token/introspect End-session endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/logout quarkus.oidc.end-session-path=/protocol/openid-connect/logout",
"Metadata is auto-discovered but it does not return an end-session endpoint URL quarkus.oidc.auth-server-url=http://localhost:8180/oidcprovider/account Configure the end-session URL locally. It can be an absolute or relative (to 'quarkus.oidc.auth-server-url') address quarkus.oidc.end-session-path=logout",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.secret=mysecret",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc.credentials.client-secret.provider.key=mysecret-key This is the keyring provided to the CredentialsProvider when looking up the secret, set only if required by the CredentialsProvider implementation quarkus.oidc.credentials.client-secret.provider.keyring-name=oidc Set it only if more than one CredentialsProvider can be registered quarkus.oidc.credentials.client-secret.provider.name=oidc-credentials-provider",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret quarkus.oidc.credentials.client-secret.method=post",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc.credentials.jwt.secret-provider.key=mysecret-key This is the keyring provided to the CredentialsProvider when looking up the secret, set only if required by the CredentialsProvider implementation quarkus.oidc.credentials.client-secret.provider.keyring-name=oidc Set it only if more than one CredentialsProvider can be registered quarkus.oidc.credentials.jwt.secret-provider.name=oidc-credentials-provider",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key=Base64-encoded private key representation",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-file=privateKey.pem",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-store-file=keystore.jks quarkus.oidc.credentials.jwt.key-store-password=mypassword quarkus.oidc.credentials.jwt.key-password=mykeypassword Private key alias inside the keystore quarkus.oidc.credentials.jwt.key-id=mykeyAlias",
"private_key_jwt client authentication quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-file=privateKey.pem This is a token key identifier 'kid' header - set it if your OIDC provider requires it: Note if the key is represented in a JSON Web Key (JWK) format with a `kid` property, then using 'quarkus.oidc.credentials.jwt.token-key-id' is not necessary. quarkus.oidc.credentials.jwt.token-key-id=mykey Use RS512 signature algorithm instead of the default RS256 quarkus.oidc.credentials.jwt.signature-algorithm=RS512 The token endpoint URL is the default audience value, use the base address URL instead: quarkus.oidc.credentials.jwt.audience=USD{quarkus.oidc-client.auth-server-url} custom subject instead of the client id: quarkus.oidc.credentials.jwt.subject=custom-subject custom issuer instead of the client id: quarkus.oidc.credentials.jwt.issuer=custom-issuer",
"Apple provider configuration sets a 'client_secret_post_jwt' authentication method quarkus.oidc.provider=apple quarkus.oidc.client-id=USD{apple.client-id} quarkus.oidc.credentials.jwt.key-file=ecPrivateKey.pem quarkus.oidc.credentials.jwt.token-key-id=USD{apple.key-id} Apple provider configuration sets ES256 signature algorithm quarkus.oidc.credentials.jwt.subject=USD{apple.subject} quarkus.oidc.credentials.jwt.issuer=USD{apple.issuer}",
"quarkus.oidc.tls.verification=certificate-validation Keystore configuration quarkus.oidc.tls.key-store-file=client-keystore.jks quarkus.oidc.tls.key-store-password=USD{key-store-password} Add more keystore properties if needed: #quarkus.oidc.tls.key-store-alias=keyAlias #quarkus.oidc.tls.key-store-alias-password=keyAliasPassword Truststore configuration quarkus.oidc.tls.trust-store-file=client-truststore.jks quarkus.oidc.tls.trust-store-password=USD{trust-store-password} Add more truststore properties if needed: #quarkus.oidc.tls.trust-store-alias=certAlias",
"quarkus.oidc.provider=strava quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret quarkus.oidc.credentials.client-secret.method=query",
"quarkus.oidc.introspection-credentials.name=introspection-user-name quarkus.oidc.introspection-credentials.secret=introspection-user-secret",
"package io.quarkus.it.keycloak; import io.quarkus.oidc.OidcConfigurationMetadata; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable public class OidcTokenRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProps) { OidcConfigurationMetadata metadata = contextProps.get(OidcConfigurationMetadata.class.getName()); 1 // Metadata URI is absolute, request URI value is relative if (metadata.getTokenUri().endsWith(request.uri())) { 2 request.putHeader(\"TokenGrantDigest\", calculateDigest(buffer.toString())); } } private String calculateDigest(String bodyString) { // Apply the required digest algorithm to the body string } }",
"package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcEndpoint; import io.quarkus.oidc.common.OidcEndpoint.Type; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable @OidcEndpoint(value = Type.DISCOVERY) 1 public class OidcDiscoveryRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProps) { request.putHeader(\"Discovery\", \"OK\"); } }",
"quarkus.oidc.authentication.extra-params.response_mode=query",
"quarkus.oidc.authentication.error-path=/error",
"package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.OidcRedirectFilter; @ApplicationScoped @Unremovable public class GlobalOidcRedirectFilter implements OidcRedirectFilter { @Override public void filter(OidcRedirectContext context) { if (context.redirectUri().contains(\"/session-expired-page\")) { context.additionalQueryParams().add(\"redirect-filtered\", \"true,\"); 1 context.routingContext().response().putHeader(\"Redirect-Filtered\", \"true\"); 2 } } }",
"package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import org.eclipse.microprofile.jwt.Claims; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.AuthorizationCodeTokens; import io.quarkus.oidc.OidcRedirectFilter; import io.quarkus.oidc.Redirect; import io.quarkus.oidc.Redirect.Location; import io.quarkus.oidc.TenantFeature; import io.quarkus.oidc.runtime.OidcUtils; import io.smallrye.jwt.build.Jwt; @ApplicationScoped @Unremovable @TenantFeature(\"tenant-refresh\") @Redirect(Location.SESSION_EXPIRED_PAGE) 1 public class SessionExpiredOidcRedirectFilter implements OidcRedirectFilter { @Override public void filter(OidcRedirectContext context) { if (context.redirectUri().contains(\"/session-expired-page\")) { AuthorizationCodeTokens tokens = context.routingContext().get(AuthorizationCodeTokens.class.getName()); 2 String userName = OidcUtils.decodeJwtContent(tokens.getIdToken()).getString(Claims.preferred_username.name()); 3 String jwe = Jwt.preferredUserName(userName).jwe() .encryptWithSecret(context.oidcTenantConfig().credentials.secret.get()); 4 OidcUtils.createCookie(context.routingContext(), context.oidcTenantConfig(), \"session_expired\", jwe + \"|\" + context.oidcTenantConfig().tenantId.get(), 10); 5 } } }",
"package io.quarkus.it.keycloak; import jakarta.inject.Inject; import jakarta.ws.rs.CookieParam; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.Claims; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.runtime.OidcUtils; import io.quarkus.oidc.runtime.TenantConfigBean; import io.smallrye.jwt.auth.principal.DefaultJWTParser; import io.vertx.ext.web.RoutingContext; @Path(\"/session-expired-page\") public class SessionExpiredResource { @Inject RoutingContext context; @Inject TenantConfigBean tenantConfig; 1 @GET public String sessionExpired(@CookieParam(\"session_expired\") String sessionExpired) throws Exception { // Cookie format: jwt|<tenant id> String[] pair = sessionExpired.split(\"\\\\|\"); 2 OidcTenantConfig oidcConfig = tenantConfig.getStaticTenantsConfig().get(pair[1]).getOidcTenantConfig(); 3 JsonWebToken jwt = new DefaultJWTParser().decrypt(pair[0], oidcConfig.credentials.secret.get()); 4 OidcUtils.removeCookie(context, oidcConfig, \"session_expired\"); 5 return jwt.getClaim(Claims.preferred_username) + \", your session has expired. \" + \"Please login again at http://localhost:8081/\" + oidcConfig.tenantId.get(); 6 } }",
"import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; import io.quarkus.security.Authenticated; @Path(\"/web-app\") @Authenticated public class ProtectedResource { @Inject @IdToken JsonWebToken idToken; @GET public String getUserName() { return idToken.getName(); } }",
"import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.AccessTokenCredential; import io.quarkus.security.Authenticated; @Path(\"/web-app\") @Authenticated public class ProtectedResource { @Inject JsonWebToken accessToken; // or // @Inject // AccessTokenCredential accessTokenCredential; @GET public String getReservationOnBehalfOfUser() { String rawAccessToken = accessToken.getRawToken(); //or //String rawAccessToken = accessTokenCredential.getToken(); // Use the raw access token to access a remote endpoint. // For example, use RestClient to set this token as a `Bearer` scheme value of the HTTP `Authorization` header: // `Authorization: Bearer rawAccessToken`. return getReservationfromRemoteEndpoint(rawAccesstoken); } }",
"quarkus.oidc.authentication.pkce-required=true quarkus.oidc.authentication.state-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.token-state-manager.split-tokens=true quarkus.oidc.token-state-manager.encryption-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU",
"package io.quarkus.oidc.test; import jakarta.annotation.Priority; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Alternative; import jakarta.inject.Inject; import io.quarkus.oidc.AuthorizationCodeTokens; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TokenStateManager; import io.quarkus.oidc.runtime.DefaultTokenStateManager; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped @Alternative @Priority(1) public class CustomTokenStateManager implements TokenStateManager { @Inject DefaultTokenStateManager tokenStateManager; @Override public Uni<String> createTokenState(RoutingContext routingContext, OidcTenantConfig oidcConfig, AuthorizationCodeTokens sessionContent, OidcRequestContext<String> requestContext) { return tokenStateManager.createTokenState(routingContext, oidcConfig, sessionContent, requestContext) .map(t -> (t + \"|custom\")); } @Override public Uni<AuthorizationCodeTokens> getTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, OidcRequestContext<AuthorizationCodeTokens> requestContext) { if (!tokenState.endsWith(\"|custom\")) { throw new IllegalStateException(); } String defaultState = tokenState.substring(0, tokenState.length() - 7); return tokenStateManager.getTokens(routingContext, oidcConfig, defaultState, requestContext); } @Override public Uni<Void> deleteTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, OidcRequestContext<Void> requestContext) { if (!tokenState.endsWith(\"|custom\")) { throw new IllegalStateException(); } String defaultState = tokenState.substring(0, tokenState.length() - 7); return tokenStateManager.deleteTokens(routingContext, oidcConfig, defaultState, requestContext); } }",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.path=/logout Logged-out users should be returned to the /welcome.html site which will offer an option to re-login: quarkus.oidc.logout.post-logout-path=/welcome.html Only the authenticated users can initiate a logout: quarkus.http.auth.permission.authenticated.paths=/logout quarkus.http.auth.permission.authenticated.policy=authenticated All users can see the Welcome page: quarkus.http.auth.permission.public.paths=/welcome.html quarkus.http.auth.permission.public.policy=permit",
"quarkus.oidc.auth-server-url=https://dev-xxx.us.auth0.com quarkus.oidc.client-id=redacted quarkus.oidc.credentials.secret=redacted quarkus.oidc.application-type=web-app quarkus.oidc.tenant-logout.logout.path=/logout quarkus.oidc.tenant-logout.logout.post-logout-path=/welcome.html Auth0 does not return the `end_session_endpoint` metadata property. Instead, you must configure it: quarkus.oidc.end-session-path=v2/logout Auth0 will not recognize the 'post_logout_redirect_uri' query parameter so ensure it is named as 'returnTo': quarkus.oidc.logout.post-logout-uri-param=returnTo Set more properties if needed. For example, if 'client_id' is provided, then a valid logout URI should be set as the Auth0 Application property, without it - as Auth0 Tenant property: quarkus.oidc.logout.extra-params.client_id=USD{quarkus.oidc.client-id}",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.backchannel.path=/back-channel-logout",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.frontchannel.path=/front-channel-logout",
"import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.OidcSession; @Path(\"/service\") public class ServiceResource { @Inject OidcSession oidcSession; @GET @Path(\"logout\") public String logout() { oidcSession.logout().await().indefinitely(); return \"You are logged out\"; } }",
"quarkus.oidc.provider=github quarkus.oidc.client-id=github_app_clientid quarkus.oidc.credentials.secret=github_app_clientsecret user:email scope is requested by default, use 'quarkus.oidc.authentication.scopes' to request different scopes such as `read:user`. See https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps for more information. Consider enabling UserInfo Cache quarkus.oidc.token-cache.max-size=1000 quarkus.oidc.token-cache.time-to-live=5M # Or having UserInfo cached inside IdToken itself quarkus.oidc.cache-user-info-in-idtoken=true",
"package io.quarkus.it.keycloak; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.oidc.UserInfo; import io.quarkus.security.Authenticated; @Path(\"/github\") @Authenticated public class TokenResource { @Inject UserInfo userInfo; @GET @Path(\"/userinfo\") @Produces(\"application/json\") public String getUserInfo() { return userInfo.getUserInfoString(); } }",
"package io.quarkus.it.keycloak; import java.security.Principal; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.UserInfo; import io.quarkus.security.identity.AuthenticationRequestContext; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.security.identity.SecurityIdentityAugmentor; import io.quarkus.security.runtime.QuarkusSecurityIdentity; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomSecurityIdentityAugmentor implements SecurityIdentityAugmentor { @Override public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { RoutingContext routingContext = identity.getAttribute(RoutingContext.class.getName()); if (routingContext != null && routingContext.normalizedPath().endsWith(\"/github\")) { QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity); UserInfo userInfo = identity.getAttribute(\"userinfo\"); builder.setPrincipal(new Principal() { @Override public String getName() { return userInfo.getString(\"preferred_username\"); } }); identity = builder.build(); } return Uni.createFrom().item(identity); } }",
"package io.quarkus.it.keycloak; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.security.Authenticated; import io.quarkus.security.identity.SecurityIdentity; @Path(\"/service\") @Authenticated public class TokenResource { @Inject SecurityIdentity identity; @GET @Path(\"/google\") @Produces(\"application/json\") public String getGoogleUserName() { return identity.getPrincipal().getName(); } @GET @Path(\"/github\") @Produces(\"application/json\") public String getGitHubUserName() { return identity.getPrincipal().getName(); } }",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import io.quarkus.oidc.SecurityEvent; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class SecurityEventListener { public void event(@Observes SecurityEvent event) { String tenantId = event.getSecurityIdentity().getAttribute(\"tenant-id\"); RoutingContext vertxContext = event.getSecurityIdentity().getAttribute(RoutingContext.class.getName()); vertxContext.put(\"listener-message\", String.format(\"event:%s,tenantId:%s\", event.getEventType().name(), tenantId)); } }",
"import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.JavaScriptRequestChecker; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomJavaScriptRequestChecker implements JavaScriptRequestChecker { @Override public boolean isJavaScriptRequest(RoutingContext context) { return \"true\".equals(context.request().getHeader(\"HX-Request\")); } }",
"Future<void> callQuarkusService() async { Map<String, String> headers = Map.fromEntries([MapEntry(\"X-Requested-With\", \"JavaScript\")]); await http .get(\"https://localhost:443/serviceCall\") .then((response) { if (response.statusCode == 499) { window.location.assign(\"https://localhost.com:443/serviceCall\"); } }); }",
"quarkus.http.proxy.proxy-address-forwarding=true quarkus.http.proxy.allow-forwarded=false quarkus.http.proxy.enable-forwarded-host=true quarkus.http.proxy.forwarded-host-header=X-ORIGINAL-HOST",
"<dependency> <groupId>org.htmlunit</groupId> <artifactId>htmlunit</artifactId> <exclusions> <exclusion> <groupId>org.eclipse.jetty</groupId> <artifactId>*</artifactId> </exclusion> </exclusions> <scope>test</scope> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency>",
"testImplementation(\"org.htmlunit:htmlunit\") testImplementation(\"io.quarkus:quarkus-junit5\")",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-oidc-server</artifactId> <scope>test</scope> </dependency>",
"testImplementation(\"io.quarkus:quarkus-test-oidc-server\")",
"keycloak.url is set by OidcWiremockTestResource quarkus.oidc.auth-server-url=USD{keycloak.url:replaced-by-test-resource}/realms/quarkus/ quarkus.oidc.client-id=quarkus-web-app quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app",
"import static org.junit.jupiter.api.Assertions.assertEquals; import org.junit.jupiter.api.Test; import org.htmlunit.SilentCssErrorHandler; import org.htmlunit.WebClient; import org.htmlunit.html.HtmlForm; import org.htmlunit.html.HtmlPage; import io.quarkus.test.common.QuarkusTestResource; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.server.OidcWiremockTestResource; @QuarkusTest @QuarkusTestResource(OidcWiremockTestResource.class) public class CodeFlowAuthorizationTest { @Test public void testCodeFlow() throws Exception { try (final WebClient webClient = createWebClient()) { // the test REST endpoint listens on '/code-flow' HtmlPage page = webClient.getPage(\"http://localhost:8081/code-flow\"); HtmlForm form = page.getFormByName(\"form\"); // user 'alice' has the 'user' role form.getInputByName(\"username\").type(\"alice\"); form.getInputByName(\"password\").type(\"alice\"); page = form.getInputByValue(\"login\").click(); assertEquals(\"alice\", page.getBody().asNormalizedText()); } } private WebClient createWebClient() { WebClient webClient = new WebClient(); webClient.setCssErrorHandler(new SilentCssErrorHandler()); return webClient; } }",
"%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus",
"%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.keycloak.devservices.realm-path=quarkus-realm.json",
"@QuarkusTest public class CodeFlowAuthorizationTest { }",
"quarkus.log.category.\"io.quarkus.oidc.runtime.OidcProvider\".level=TRACE quarkus.log.category.\"io.quarkus.oidc.runtime.OidcProvider\".min-level=TRACE",
"quarkus.log.category.\"io.quarkus.oidc.runtime.OidcRecorder\".level=TRACE quarkus.log.category.\"io.quarkus.oidc.runtime.OidcRecorder\".min-level=TRACE"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_authentication/security-oidc-code-flow-authentication |
Chapter 5. Using build strategies | Chapter 5. Using build strategies The following sections define the primary supported build strategies, and how to use them. 5.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 5.1.1. Replacing the Dockerfile FROM image You can replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced. Procedure To replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object, add the following settings to the BuildConfig object: strategy: dockerStrategy: from: kind: "ImageStreamTag" name: "debian:latest" 5.1.2. Using Dockerfile path By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field. The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile , or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile . Procedure Set the dockerfilePath field for the build to use a different path to locate your Dockerfile: strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile 5.1.3. Using docker environment variables To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration. The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that it can be referenced later on within the Dockerfile. The variables are defined during build and stay in the output image, therefore they will be present in any container that runs that image as well. For example, defining a custom HTTP proxy to be used during build and runtime: dockerStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" You can also manage environment variables defined in the build configuration with the oc set env command. 5.1.4. Adding Docker build arguments You can set Docker build arguments using the buildArgs array. The build arguments are passed to Docker when a build is started. Tip See Understand how ARG and FROM interact in the Dockerfile reference documentation. Procedure To set Docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example: dockerStrategy: ... buildArgs: - name: "version" value: "latest" Note Only the name and value fields are supported. Any settings on the valueFrom field are ignored. 5.1.5. Squashing layers with docker builds Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image. Procedure Set the imageOptimizationPolicy to SkipLayers : strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers 5.1.6. 
Using build volumes You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object. Procedure In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value 1 5 9 Required. A unique name. 2 6 10 Required. The absolute path of the mount point. It must not contain .. or : and does not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 11 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. 12 Required. The driver that provides the ephemeral CSI volume. 13 Required. This value must be set to true . Provides a read-only volume. 14 Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver's documentation for supported attribute keys and values. Important Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources Build inputs Input secrets and config maps 5.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command.
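For reference, the strategy stanzas shown in this chapter are fragments of a BuildConfig object. A minimal complete BuildConfig that uses the source strategy might look like the following sketch; the object name, Git URL, and image stream tags are placeholders only:
kind: "BuildConfig"
apiVersion: "build.openshift.io/v1"
metadata:
  name: "sample-s2i-build"
spec:
  source:
    git:
      uri: "https://github.com/example/app.git"
  strategy:
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "python:latest"
  output:
    to:
      kind: "ImageStreamTag"
      name: "sample-app:latest"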
S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 5.2.1. Performing source-to-image incremental builds Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images. Procedure To create an incremental build, create a build configuration with the following modification to the strategy definition: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "incremental-image:latest" 1 incremental: true 2 1 Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior. 2 This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script. Additional resources See S2I Requirements for information on how to create a builder image supporting incremental builds. 5.2.2. Overriding source-to-image builder image scripts You can override the assemble , run , and save-artifacts source-to-image (S2I) scripts provided by the builder image. Procedure To override the assemble , run , and save-artifacts S2I scripts provided by the builder image, complete one of the following actions: Provide an assemble , run , or save-artifacts script in the .s2i/bin directory of your application source repository. Provide a URL of a directory containing the scripts as part of the strategy definition in the BuildConfig object. For example: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "builder-image:latest" scripts: "http://somehost.com/scripts_directory" 1 1 The build process appends run , assemble , and save-artifacts to the path. If any or all scripts with these names exist, the build process uses these scripts in place of scripts with the same name that are provided in the image. Note Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. 5.2.3. Source-to-image environment variables There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values. The variables that you provide using either method will be present during the build process and in the output image. 5.2.3.1. Using source-to-image environment files Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image. If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables. Procedure For example, to disable assets compilation for your Rails application during the build: Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file. In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production : Add RAILS_ENV=development to the .s2i/environment file. The complete list of supported environment variables is available in the using images section for each image. 5.2.3.2.
Using source-to-image build configuration environment You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code. Procedure For example, to disable assets compilation for your Rails application: sourceStrategy: ... env: - name: "DISABLE_ASSET_COMPILATION" value: "true" Additional resources The build environment section provides more advanced instructions. You can also manage environment variables defined in the build configuration with the oc set env command. 5.2.4. Ignoring source-to-image source files Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script. 5.2.5. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 5.2.5.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 5.2.5.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 5.1. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. 
save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "$(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* $HOME/. fi # move the application source mv /tmp/s2i/src $HOME/src # build application artifacts pushd ${HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd ${HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 5.2.6. Using build volumes You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object. Procedure In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value 1 5 9 Required. A unique name. 2 6 10 Required. The absolute path of the mount point. It must not contain .. or : and does not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 11 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. 12 Required.
The driver that provides the ephemeral CSI volume. 13 Required. This value must be set to true . Provides a read-only volume. 14 Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver's documentation for supported attribute keys and values. Important Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources Build inputs Input secrets and config maps 5.3. Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 5.3.1. Using FROM image for custom builds You can use the customStrategy.from section to indicate the image to use for the custom build. Procedure Set the customStrategy.from section: strategy: customStrategy: from: kind: "DockerImage" name: "openshift/sti-image-builder" 5.3.2. Using secrets in custom builds In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod. Procedure To mount each secret at a specific location, edit the secretSource and mountPath fields of the strategy YAML file: strategy: customStrategy: secrets: - secretSource: 1 name: "secret1" mountPath: "/tmp/secret1" 2 - secretSource: name: "secret2" mountPath: "/tmp/secret2" 1 secretSource is a reference to a secret in the same namespace as the build. 2 mountPath is the path inside the custom builder where the secret should be mounted. 5.3.3. Using environment variables for custom builds To make environment variables available to the custom build process, you can add environment variables to the customStrategy definition of the build configuration. The environment variables defined there are passed to the pod that runs the custom build. Procedure Define a custom HTTP proxy to be used during build: customStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" To manage environment variables defined in the build configuration, enter the following command: USD oc set env <enter_variables> 5.3.4. Using custom builder images OpenShift Container Platform's custom build strategy enables you to define a specific builder image responsible for the entire build process. When you need a build to produce individual artifacts such as packages, JARs, WARs, installable ZIPs, or base images, use a custom builder image using the custom build strategy. A custom builder image is a plain container image embedded with build process logic, which is used for building artifacts such as RPMs or base container images. 
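To make this concrete, a custom builder image is usually just a container image whose entry point runs your build logic. The following Dockerfile is an illustrative sketch only, not part of the product documentation; the base image and the build.sh script name are placeholders:

FROM registry.access.redhat.com/ubi9/ubi
# Add whatever tooling the build logic needs (compilers, rpm-build, buildah, and so on).
COPY build.sh /usr/bin/build.sh
RUN chmod +x /usr/bin/build.sh
ENTRYPOINT ["/usr/bin/build.sh"]

OpenShift Container Platform runs this image in the build pod, so any tooling the build needs must either be baked into the image or supplied through the build's environment variables and secrets.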
Additionally, the custom builder allows implementing any extended build process, such as a CI/CD flow that runs unit or integration tests. 5.3.4.1. Custom builder image Upon invocation, a custom builder image receives the following environment variables with the information needed to proceed with the build: Table 5.2. Custom Builder Environment Variables Variable Name Description BUILD The entire serialized JSON of the Build object definition. If you must use a specific API version for serialization, you can set the buildAPIVersion parameter in the custom strategy specification of the build configuration. SOURCE_REPOSITORY The URL of a Git repository with source to be built. SOURCE_URI Uses the same value as SOURCE_REPOSITORY . Either can be used. SOURCE_CONTEXT_DIR Specifies the subdirectory of the Git repository to be used when building. Only present if defined. SOURCE_REF The Git reference to be built. ORIGIN_VERSION The version of the OpenShift Container Platform master that created this build object. OUTPUT_REGISTRY The container image registry to push the image to. OUTPUT_IMAGE The container image tag name for the image being built. PUSH_DOCKERCFG_PATH The path to the container registry credentials for running a podman push operation. 5.3.4.2. Custom builder workflow Although custom builder image authors have flexibility in defining the build process, your builder image must adhere to the following required steps necessary for running a build inside of OpenShift Container Platform: The Build object definition contains all the necessary information about input parameters for the build. Run the build process. If your build produces an image, push it to the output location of the build if it is defined. Other output locations can be passed with environment variables. 5.4. Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 5.4.1. Understanding OpenShift Container Platform pipelines Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. Pipelines give you control over building, deploying, and promoting your applications on OpenShift Container Platform. Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles , and the OpenShift Container Platform Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario. 
OpenShift Container Platform Jenkins Sync Plugin The OpenShift Container Platform Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following: Dynamic job and run creation in Jenkins. Dynamic creation of agent pod templates from image streams, image stream tags, or config maps. Injection of environment variables. Pipeline visualization in the OpenShift Container Platform web console. Integration with the Jenkins Git plugin, which passes commit information from OpenShift Container Platform builds to the Jenkins Git plugin. Synchronization of secrets into Jenkins credential entries. OpenShift Container Platform Jenkins Client Plugin The OpenShift Container Platform Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift Container Platform API Server. The plugin uses the OpenShift Container Platform command line tool, oc , which must be available on the nodes executing the script. The Jenkins Client Plugin must be installed on your Jenkins master so that the OpenShift Container Platform DSL is available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the OpenShift Container Platform Jenkins image. For OpenShift Container Platform Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options: An inline jenkinsfile field within your build configuration. A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir . Note The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.4.2. Providing the Jenkins file for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system.
The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Container Platform client binaries available if you intend to use the OpenShift Container Platform DSL. Procedure To provide the Jenkins file, you can either: Embed the Jenkins file in the build configuration. Include in the build configuration a reference to the Git repository that contains the Jenkins file. Embedded Definition kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') } Reference to Git Repository kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: source: git: uri: "https://github.com/openshift/ruby-hello-world" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1 1 The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.4.3. Using environment variables for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration. Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration. Procedure To define environment variables to be used during build, edit the YAML file: jenkinsPipelineStrategy: ... env: - name: "FOO" value: "BAR" You can also manage environment variables defined in the build configuration with the oc set env command. 5.4.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameters definitions, where the default values for the Jenkins job parameters definitions are the current values of the associated environment variables. After the Jenkins job's initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs. How you start builds for the Jenkins job dictates how the parameters are set. If you start with oc start-build , the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence. If you start with oc start-build -e , the values for the environment variables specified in the -e option take precedence. 
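For example, assuming a Pipeline build configuration named sample-pipeline that defines a FOO environment variable (both names are hypothetical), you can override the value for a single build from the command line:

$ oc start-build sample-pipeline -e FOO=bar

The override applies only to the build started by that command; subsequent builds fall back to the values defined in the build configuration.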
If you specify an environment variable that is not listed in the build configuration, it is added as a Jenkins job parameter definition. Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e take precedence. If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job. Note It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing. 5.4.4. Pipeline build tutorial Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. This example demonstrates how to create an OpenShift Container Platform Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template. Procedure Create the Jenkins master: USD oc project <project_name> Select the project that you want to use or create a new project with oc new-project <project_name> . USD oc new-app jenkins-ephemeral 1 If you want to use persistent storage, use jenkins-persistent instead. Create a file named nodejs-sample-pipeline.yaml with the following content: Note This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application. kind: "BuildConfig" apiVersion: "v1" metadata: name: "nodejs-sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline After you create a BuildConfig object with a jenkinsPipelineStrategy , tell the pipeline what to do by using an inline jenkinsfile : Note This example does not set up a Git repository for the application. The following jenkinsfile content is written in Groovy using the OpenShift Container Platform DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method.
def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo "Using project: USD{openshift.project()}" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector("all", [ template : templateName ]).delete() 5 if (openshift.selector("secrets", templateName).exists()) { 6 openshift.selector("secrets", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector("bc", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == "Complete") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector("dc", templateName).rollout() timeout(5) { 9 openshift.selector("dc", templateName).related('pods').untilEach(1) { return (it.object().status.phase == "Running") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag("USD{templateName}:latest", "USD{templateName}-staging:latest") 10 } } } } } } } 1 Path of the template to use. 1 2 Name of the template that will be created. 3 Spin up a node.js agent pod on which to run this build. 4 Set a timeout of 20 minutes for this pipeline. 5 Delete everything with this template label. 6 Delete any secrets with this template label. 7 Create a new application from the templatePath . 8 Wait up to five minutes for the build to complete. 9 Wait up to five minutes for the deployment to complete. 10 If everything else succeeded, tag the USD {templateName}:latest image as USD {templateName}-staging:latest . A pipeline build configuration for the staging environment can watch for the USD {templateName}-staging:latest image to change and then deploy it to the staging environment. Note The example was written using the declarative pipeline style, but the older scripted pipeline style is also supported. Create the Pipeline BuildConfig in your OpenShift Container Platform cluster: USD oc create -f nodejs-sample-pipeline.yaml If you do not want to create your own file, you can use the sample from the Origin repository by running: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml Start the Pipeline: USD oc start-build nodejs-sample-pipeline Note Alternatively, you can start your pipeline with the OpenShift Container Platform web console by navigating to the Builds Pipeline section and clicking Start Pipeline , or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now . Once the pipeline is started, you should see the following actions performed within your project: A job instance is created on the Jenkins server. An agent pod is launched, if your pipeline requires one. The pipeline runs on the agent pod, or the master if no agent is required. Any previously created resources with the template=nodejs-mongodb-example label will be deleted. 
A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template. A build will be started using the nodejs-mongodb-example BuildConfig . The pipeline will wait until the build has completed to trigger the next stage. A deployment will be started using the nodejs-mongodb-example deployment configuration. The pipeline will wait until the deployment has completed to trigger the next stage. If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example-staging:latest , as shown in the tag stage of the pipeline. The agent pod is deleted, if one was required for the pipeline. Note The best way to visualize the pipeline execution is by viewing it in the OpenShift Container Platform web console. You can view your pipelines by logging in to the web console and navigating to Builds → Pipelines. 5.5. Adding secrets with web console You can add a secret to your build configuration so that it can access a private repository. Procedure To add a secret to your build configuration so that it can access a private repository from the OpenShift Container Platform web console: Create a new OpenShift Container Platform project. Create a secret that contains credentials for accessing a private source code repository. Create a build configuration. On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret . Click Save . 5.6. Enabling pulling and pushing You can enable pulling to a private registry by setting the pull secret and pushing by setting the push secret in the build configuration. Procedure To enable pulling to a private registry: Set the pull secret in the build configuration. To enable pushing: Set the push secret in the build configuration.
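For reference, both secrets are set as plain references in the build configuration. The following is a minimal sketch only; the secret, registry, and image names are placeholders:

spec:
  strategy:
    sourceStrategy:
      from:
        kind: "DockerImage"
        name: "registry.example.com/myorg/builder:latest"
      pullSecret:
        name: "pull-secret"
  output:
    to:
      kind: "DockerImage"
      name: "registry.example.com/myorg/myapp:latest"
    pushSecret:
      name: "push-secret"

The pullSecret is used when pulling the builder image from the private registry, and the pushSecret is used when pushing the output image to it. | [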
"strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: \"debian:latest\"",
"strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile",
"dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"dockerStrategy: buildArgs: - name: \"version\" value: \"latest\"",
"strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers",
"spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1",
"sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"",
"#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd",
"#!/bin/bash run the application /opt/application/run.sh",
"#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd",
"#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF",
"spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value",
"strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"",
"strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"",
"customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"oc set env <enter_variables>",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1",
"jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"",
"oc project <project_name>",
"oc new-app jenkins-ephemeral 1",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline",
"def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }",
"oc create -f nodejs-sample-pipeline.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml",
"oc start-build nodejs-sample-pipeline"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/builds_using_buildconfig/build-strategies |
Chapter 77. stack | Chapter 77. stack This chapter describes the commands under the stack command. 77.1. stack abandon Abandon stack and output results. Usage: Table 77.1. Positional arguments Value Summary <stack> Name or id of stack to abandon Table 77.2. Command arguments Value Summary -h, --help Show this help message and exit --output-file <output-file> File to output abandon results Table 77.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.2. stack adopt Adopt a stack. Usage: Table 77.7. Positional arguments Value Summary <stack-name> Name of the stack to adopt Table 77.8. Command arguments Value Summary -h, --help Show this help message and exit -e <environment>, --environment <environment> Path to the environment. can be specified multiple times --timeout <timeout> Stack creation timeout in minutes --enable-rollback Enable rollback on create/update failure --parameter <key=value> Parameter values used to create the stack. can be specified multiple times --wait Wait until stack adopt completes --adopt-file <adopt-file> Path to adopt stack data file Table 77.9. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.11. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.3. stack cancel Cancel current task for a stack. Supported tasks for cancellation: * update * create Usage: Table 77.13. Positional arguments Value Summary <stack> Stack(s) to cancel (name or id) Table 77.14. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for cancel to complete --no-rollback Cancel without rollback Table 77.15. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.16. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.4. stack check Check a stack. Usage: Table 77.19. Positional arguments Value Summary <stack> Stack(s) to check update (name or id) Table 77.20. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for check to complete Table 77.21. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.22. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.5. stack create Create a stack. Usage: Table 77.25. Positional arguments Value Summary <stack-name> Name of the stack to create Table 77.26. Command arguments Value Summary -h, --help Show this help message and exit -e <environment>, --environment <environment> Path to the environment. can be specified multiple times -s <files-container>, --files-container <files-container> Swift files container name. local files other than root template would be ignored. If other files are not found in swift, heat engine would raise an error. --timeout <timeout> Stack creating timeout in minutes --pre-create <resource> Name of a resource to set a pre-create hook to. Resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource``. 
This can be specified multiple times --enable-rollback Enable rollback on create/update failure --parameter <key=value> Parameter values used to create the stack. this can be specified multiple times --parameter-file <key=file> Parameter values from file used to create the stack. This can be specified multiple times. Parameter values would be the content of the file --wait Wait until stack goes to create_complete or CREATE_FAILED --poll SECONDS Poll interval in seconds for use with --wait, defaults to 5. --tags <tag1,tag2... > A list of tags to associate with the stack --dry-run Do not actually perform the stack create, but show what would be created -t <template>, --template <template> Path to the template Table 77.27. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.28. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.29. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.30. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.6. stack delete Delete stack(s). Usage: Table 77.31. Positional arguments Value Summary <stack> Stack(s) to delete (name or id) Table 77.32. Command arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes) --wait Wait for stack delete to complete 77.7. stack environment show Show a stack's environment. Usage: Table 77.33. Positional arguments Value Summary <NAME or ID> Name or id of stack to query Table 77.34. Command arguments Value Summary -h, --help Show this help message and exit Table 77.35. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.37. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.8. stack event list List events. Usage: Table 77.39. Positional arguments Value Summary <stack> Name or id of stack to show events for Table 77.40. Command arguments Value Summary -h, --help Show this help message and exit --resource <resource> Name of resource to show events for. 
note: this cannot be specified with --nested-depth --filter <key=value> Filter parameters to apply on returned events --limit <limit> Limit the number of events returned --marker <id> Only return events that appear after the given id --nested-depth <depth> Depth of nested stacks from which to display events. Note: this cannot be specified with --resource --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc). Specify multiple times to sort on multiple keys. Sort key can be: "event_time" (default), "resource_name", "links", "logical_resource_id", "resource_status", "resource_status_reason", "physical_resource_id", or "id". You can leave the key empty and specify ":desc" for sorting by reverse time. --follow Print events until process is halted Table 77.41. Output formatter options Value Summary -f {csv,json,log,table,value,yaml}, --format {csv,json,log,table,value,yaml} The output format, defaults to log -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.42. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.43. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.9. stack event show Show event details. Usage: Table 77.45. Positional arguments Value Summary <stack> Name or id of stack to show events for <resource> Name of the resource event belongs to <event> Id of event to display details for Table 77.46. Command arguments Value Summary -h, --help Show this help message and exit Table 77.47. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.49. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.10. stack export Export stack data json. Usage: Table 77.51. Positional arguments Value Summary <stack> Name or id of stack to export Table 77.52. Command arguments Value Summary -h, --help Show this help message and exit --output-file <output-file> File to output export data Table 77.53. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.54. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.55. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.56. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.11. stack failures list Show information about failed stack resources. Usage: Table 77.57. Positional arguments Value Summary <stack> Stack to display (name or id) Table 77.58. Command arguments Value Summary -h, --help Show this help message and exit --long Show full deployment logs in output 77.12. stack file list Show a stack's files map. Usage: Table 77.59. Positional arguments Value Summary <NAME or ID> Name or id of stack to query Table 77.60. Command arguments Value Summary -h, --help Show this help message and exit Table 77.61. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.62. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.63. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.64. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.13. stack hook clear Clear resource hooks on a given stack. Usage: Table 77.65. Positional arguments Value Summary <stack> Stack to display (name or id) <resource> Resource names with hooks to clear. resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource`` Table 77.66. Command arguments Value Summary -h, --help Show this help message and exit --pre-create Clear the pre-create hooks --pre-update Clear the pre-update hooks --pre-delete Clear the pre-delete hooks 77.14. stack hook poll List resources with pending hook for a stack. Usage: Table 77.67. Positional arguments Value Summary <stack> Stack to display (name or id) Table 77.68. Command arguments Value Summary -h, --help Show this help message and exit --nested-depth <nested-depth> Depth of nested stacks from which to display hooks Table 77.69. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.70. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.71. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.72. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.15. stack list List stacks. Usage: Table 77.73. Command arguments Value Summary -h, --help Show this help message and exit --deleted Include soft-deleted stacks in the stack listing --nested Include nested stacks in the stack listing --hidden Include hidden stacks in the stack listing --property <key=value> Filter properties to apply on returned stacks (repeat to filter on multiple properties) --tags <tag1,tag2... > List of tags to filter by. can be combined with --tag- mode to specify how to filter tags --tag-mode <mode> Method of filtering tags. must be one of "any", "not", or "not-any". If not specified, multiple tags will be combined with the boolean AND expression --limit <limit> The number of stacks returned --marker <id> Only return stacks that appear after the given id --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc). Specify multiple times to sort on multiple properties --all-projects Include all projects (admin only) --short List fewer fields in output --long List additional fields in output, this is implied by --all-projects Table 77.74. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.75. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.76. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.77. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.16. stack output list List stack outputs. Usage: Table 77.78. 
Positional arguments Value Summary <stack> Name or id of stack to query Table 77.79. Command arguments Value Summary -h, --help Show this help message and exit Table 77.80. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.81. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.82. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.83. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.17. stack output show Show stack output. Usage: Table 77.84. Positional arguments Value Summary <stack> Name or id of stack to query <output> Name of an output to display Table 77.85. Command arguments Value Summary -h, --help Show this help message and exit --all Display all stack outputs Table 77.86. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.87. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.88. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.89. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.18. stack resource list List stack resources. Usage: Table 77.90. Positional arguments Value Summary <stack> Name or id of stack to query Table 77.91. Command arguments Value Summary -h, --help Show this help message and exit --long Enable detailed information presented for each resource in resource list -n <nested-depth>, --nested-depth <nested-depth> Depth of nested stacks from which to display resources --filter <key=value> Filter parameters to apply on returned resources based on their name, status, type, action, id and physical_resource_id Table 77.92. 
Output formatter options Value Summary -f {csv,dot,json,table,value,yaml}, --format {csv,dot,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.93. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.94. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.95. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.19. stack resource mark unhealthy Set resource's health. Usage: Table 77.96. Positional arguments Value Summary <stack> Name or id of stack the resource belongs to <resource> Name of the resource reason Reason for state change Table 77.97. Command arguments Value Summary -h, --help Show this help message and exit --reset Set the resource as healthy 77.20. stack resource metadata Show resource metadata Usage: Table 77.98. Positional arguments Value Summary <stack> Stack to display (name or id) <resource> Name of the resource to show the metadata for Table 77.99. Command arguments Value Summary -h, --help Show this help message and exit Table 77.100. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.101. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.102. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.103. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.21. stack resource show Display stack resource. Usage: Table 77.104. Positional arguments Value Summary <stack> Name or id of stack to query <resource> Name of resource Table 77.105. Command arguments Value Summary -h, --help Show this help message and exit --with-attr <attribute> Attribute to show, can be specified multiple times Table 77.106. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.107. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.108. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.109. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.22. stack resource signal Signal a resource with optional data. Usage: Table 77.110. Positional arguments Value Summary <stack> Name or id of stack the resource belongs to <resource> Name of the resoure to signal Table 77.111. Command arguments Value Summary -h, --help Show this help message and exit --data <data> Json data to send to the signal handler --data-file <data-file> File containing json data to send to the signal handler 77.23. stack resume Resume a stack. Usage: Table 77.112. Positional arguments Value Summary <stack> Stack(s) to resume (name or id) Table 77.113. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for resume to complete Table 77.114. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.115. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.116. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.117. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.24. stack show Show stack details. Usage: Table 77.118. Positional arguments Value Summary <stack> Stack to display (name or id) Table 77.119. Command arguments Value Summary -h, --help Show this help message and exit --no-resolve-outputs Do not resolve outputs of the stack. Table 77.120. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.121. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.122. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.123. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.25. stack snapshot create Create stack snapshot. Usage: Table 77.124. 
Positional arguments Value Summary <stack> Name or id of stack Table 77.125. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of snapshot Table 77.126. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.127. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.128. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.129. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.26. stack snapshot delete Delete stack snapshot. Usage: Table 77.130. Positional arguments Value Summary <stack> Name or id of stack <snapshot> Id of stack snapshot Table 77.131. Command arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes) 77.27. stack snapshot list List stack snapshots. Usage: Table 77.132. Positional arguments Value Summary <stack> Name or id of stack containing the snapshots Table 77.133. Command arguments Value Summary -h, --help Show this help message and exit Table 77.134. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.135. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.136. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.137. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.28. stack snapshot restore Restore stack snapshot Usage: Table 77.138. Positional arguments Value Summary <stack> Name or id of stack containing the snapshot <snapshot> Id of the snapshot to restore Table 77.139. Command arguments Value Summary -h, --help Show this help message and exit 77.29. stack snapshot show Show stack snapshot. Usage: Table 77.140. Positional arguments Value Summary <stack> Name or id of stack containing the snapshot <snapshot> Id of the snapshot to show Table 77.141. Command arguments Value Summary -h, --help Show this help message and exit Table 77.142. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.143. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.144. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.145. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.30. stack suspend Suspend a stack. Usage: Table 77.146. Positional arguments Value Summary <stack> Stack(s) to suspend (name or id) Table 77.147. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for suspend to complete Table 77.148. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 77.149. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.150. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.151. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.31. stack template show Display stack template. Usage: Table 77.152. Positional arguments Value Summary <stack> Name or id of stack to query Table 77.153. Command arguments Value Summary -h, --help Show this help message and exit Table 77.154. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.155. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.156. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.157. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.32. stack update Update a stack. Usage: Table 77.158. 
Positional arguments Value Summary <stack> Name or id of stack to update Table 77.159. Command arguments Value Summary -h, --help Show this help message and exit -t <template>, --template <template> Path to the template -s <files-container>, --files-container <files-container> Swift files container name. local files other than root template would be ignored. If other files are not found in swift, heat engine would raise an error. -e <environment>, --environment <environment> Path to the environment. can be specified multiple times --pre-update <resource> Name of a resource to set a pre-update hook to. Resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource``. This can be specified multiple times --timeout <timeout> Stack update timeout in minutes --rollback <value> Set rollback on update failure. value "enabled" sets rollback to enabled. Value "disabled" sets rollback to disabled. Value "keep" uses the value of existing stack to be updated (default) --dry-run Do not actually perform the stack update, but show what would be changed --show-nested Show nested stacks when performing --dry-run --parameter <key=value> Parameter values used to create the stack. this can be specified multiple times --parameter-file <key=file> Parameter values from file used to create the stack. This can be specified multiple times. Parameter value would be the content of the file --existing Re-use the template, parameters and environment of the current stack. If the template argument is omitted then the existing template is used. If no --environment is specified then the existing environment is used. Parameters specified in --parameter will patch over the existing values in the current stack. Parameters omitted will keep the existing values --clear-parameter <parameter> Remove the parameters from the set of parameters of current stack for the stack-update. The default value in the template will be used. This can be specified multiple times --tags <tag1,tag2... > An updated list of tags to associate with the stack --wait Wait until stack goes to update_complete or UPDATE_FAILED --converge Stack update with observe on reality. Table 77.160. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 77.161. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 77.162. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.163. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack stack abandon [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--output-file <output-file>] <stack>",
"openstack stack adopt [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-e <environment>] [--timeout <timeout>] [--enable-rollback] [--parameter <key=value>] [--wait] --adopt-file <adopt-file> <stack-name>",
"openstack stack cancel [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] [--no-rollback] <stack> [<stack> ...]",
"openstack stack check [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] <stack> [<stack> ...]",
"openstack stack create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-e <environment>] [-s <files-container>] [--timeout <timeout>] [--pre-create <resource>] [--enable-rollback] [--parameter <key=value>] [--parameter-file <key=file>] [--wait] [--poll SECONDS] [--tags <tag1,tag2...>] [--dry-run] -t <template> <stack-name>",
"openstack stack delete [-h] [-y] [--wait] <stack> [<stack> ...]",
"openstack stack environment show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <NAME or ID>",
"openstack stack event list [-h] [-f {csv,json,log,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--resource <resource>] [--filter <key=value>] [--limit <limit>] [--marker <id>] [--nested-depth <depth>] [--sort <key>[:<direction>]] [--follow] <stack>",
"openstack stack event show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <resource> <event>",
"openstack stack export [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--output-file <output-file>] <stack>",
"openstack stack failures list [-h] [--long] <stack>",
"openstack stack file list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <NAME or ID>",
"openstack stack hook clear [-h] [--pre-create] [--pre-update] [--pre-delete] <stack> <resource> [<resource> ...]",
"openstack stack hook poll [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--nested-depth <nested-depth>] <stack>",
"openstack stack list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--deleted] [--nested] [--hidden] [--property <key=value>] [--tags <tag1,tag2...>] [--tag-mode <mode>] [--limit <limit>] [--marker <id>] [--sort <key>[:<direction>]] [--all-projects] [--short] [--long]",
"openstack stack output list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <stack>",
"openstack stack output show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all] <stack> [<output>]",
"openstack stack resource list [-h] [-f {csv,dot,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [-n <nested-depth>] [--filter <key=value>] <stack>",
"openstack stack resource mark unhealthy [-h] [--reset] <stack> <resource> [reason]",
"openstack stack resource metadata [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <resource>",
"openstack stack resource show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--with-attr <attribute>] <stack> <resource>",
"openstack stack resource signal [-h] [--data <data>] [--data-file <data-file>] <stack> <resource>",
"openstack stack resume [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] <stack> [<stack> ...]",
"openstack stack show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--no-resolve-outputs] <stack>",
"openstack stack snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] <stack>",
"openstack stack snapshot delete [-h] [-y] <stack> <snapshot>",
"openstack stack snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <stack>",
"openstack stack snapshot restore [-h] <stack> <snapshot>",
"openstack stack snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <snapshot>",
"openstack stack suspend [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] <stack> [<stack> ...]",
"openstack stack template show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack>",
"openstack stack update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-t <template>] [-s <files-container>] [-e <environment>] [--pre-update <resource>] [--timeout <timeout>] [--rollback <value>] [--dry-run] [--show-nested] [--parameter <key=value>] [--parameter-file <key=file>] [--existing] [--clear-parameter <parameter>] [--tags <tag1,tag2...>] [--wait] [--converge] <stack>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/stack |
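To make the option tables above easier to apply, the following invocations sketch typical uses of several of the documented subcommands. The stack, resource, parameter, and snapshot names (my_stack, my_server, wait_handle, image, pre-upgrade) are illustrative placeholders rather than values taken from this reference.
# Mark a resource of "my_stack" as unhealthy and record a reason, then clear the state again
openstack stack resource mark unhealthy my_stack my_server "disk failure on host"
openstack stack resource mark unhealthy --reset my_stack my_server
# Send inline JSON data to a resource's signal handler
openstack stack resource signal --data '{"status": "SUCCESS"}' my_stack wait_handle
# Update a stack by re-using its existing template, patching one parameter,
# enabling rollback on failure, and waiting for the update to finish
openstack stack update --existing --parameter image=rhel-9 --rollback enabled --wait my_stack
# Create a named snapshot of the stack and list its snapshots
openstack stack snapshot create --name pre-upgrade my_stack
openstack stack snapshot list my_stack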
1.2. High Availability Add-On Introduction | 1.2. High Availability Add-On Introduction The High Availability Add-On is an integrated set of software components that can be deployed in a variety of configurations to suit your needs for performance, high availability, load balancing, scalability, file sharing, and economy. The High Availability Add-On consists of the following major components: Cluster infrastructure - Provides fundamental functions for nodes to work together as a cluster: configuration file management, membership management, lock management, and fencing. High availability Service Management - Provides failover of services from one cluster node to another in case a node becomes inoperative. Cluster administration tools - Configuration and management tools for setting up, configuring, and managing the High Availability Add-On. The tools are for use with the Cluster Infrastructure components, the high availability and Service Management components, and storage. You can supplement the High Availability Add-On with the following components: Red Hat GFS2 (Global File System 2) - Part of the Resilient Storage Add-On, this provides a cluster file system for use with the High Availability Add-On. GFS2 allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node. GFS2 cluster file system requires a cluster infrastructure. Cluster Logical Volume Manager (CLVM) - Part of the Resilient Storage Add-On, this provides volume management of cluster storage. CLVM support also requires cluster infrastructure. HAProxy - Routing software that provides high availability load balancing and failover in layer 4 (TCP) and layer 7 (HTTP, HTTPS) services. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/s1-rhcs-intro-HAAO |
Extension APIs | Extension APIs OpenShift Container Platform 4.18 Reference guide for extension APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extension_apis/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.5/making-open-source-more-inclusive |
Chapter 18. Red Hat Software Collections | Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set allows for optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can pick and choose at any time which package version they want to run. Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . Red Hat Developer Toolset is now a part of Red Hat Software Collections, included as a separate Software Collection. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides the current versions of the GNU Compiler Collection, GNU Debugger, Eclipse development platform, and other development, debugging, and performance monitoring tools. See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-red_hat_software_collections |
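As a brief illustration of the scl mechanism described above, the following commands run a single program, and then an interactive shell, with a Software Collection enabled; the collection name devtoolset-2 is only an example and depends on which collections are actually installed.
# Run one command with the collection's toolchain on the PATH (collection name is an example)
scl enable devtoolset-2 'gcc --version'
# Start an interactive shell in which the collection remains enabled
scl enable devtoolset-2 bash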
Chapter 195. Kubernetes Components | Chapter 195. Kubernetes Components Available as of Camel version 2.17 The Kubernetes components integrate your application with Kubernetes standalone or on top of Openshift. The camel-kubernetes consists of 13 components: Kubernetes ConfigMap Kubernetes Namespace Kubernetes Node Kubernetes Persistent Volume Kubernetes Persistent Volume Claim Kubernetes Pod Kubernetes Replication Controller Kubernetes Resource Quota Kubernetes Secrets Kubernetes Service Account Kubernetes Service In OpenShift, also: Kubernetes Build Config Kubernetes Build Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kubernetes</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 195.1. Headers Name Type Description CamelKubernetesOperation String The Producer operation CamelKubernetesNamespaceName String The Namespace name CamelKubernetesNamespaceLabels Map The Namespace Labels CamelKubernetesServiceLabels Map The Service labels CamelKubernetesServiceName String The Service name CamelKubernetesServiceSpec io.fabric8.kubernetes.api.model.ServiceSpec The Spec for a Service CamelKubernetesReplicationControllersLabels Map Replication controller labels CamelKubernetesReplicationControllerName String Replication controller name CamelKubernetesReplicationControllerSpec io.fabric8.kubernetes.api.model.ReplicationControllerSpec The Spec for a Replication Controller CamelKubernetesReplicationControllerReplicas Integer The number of replicas for a Replication Controller during the Scale operation CamelKubernetesPodsLabels Map Pod labels CamelKubernetesPodName String Pod name CamelKubernetesPodSpec io.fabric8.kubernetes.api.model.PodSpec The Spec for a Pod CamelKubernetesPersistentVolumesLabels Map Persistent Volume labels CamelKubernetesPersistentVolumesName String Persistent Volume name CamelKubernetesPersistentVolumesClaimsLabels Map Persistent Volume Claim labels CamelKubernetesPersistentVolumesClaimsName String Persistent Volume Claim name CamelKubernetesPersistentVolumesClaimsSpec io.fabric8.kubernetes.api.model.PersistentVolumeClaimSpec The Spec for a Persistent Volume claim CamelKubernetesSecretsLabels Map Secret labels CamelKubernetesSecretsName String Secret name CamelKubernetesSecret io.fabric8.kubernetes.api.model.Secret A Secret Object CamelKubernetesResourcesQuotaLabels Map Resource Quota labels CamelKubernetesResourcesQuotaName String Resource Quota name CamelKubernetesResourceQuotaSpec io.fabric8.kubernetes.api.model.ResourceQuotaSpec The Spec for a Resource Quota CamelKubernetesServiceAccountsLabels Map Service Account labels CamelKubernetesServiceAccountName String Service Account name CamelKubernetesServiceAccount io.fabric8.kubernetes.api.model.ServiceAccount A Service Account object CamelKubernetesNodesLabels Map Node labels CamelKubernetesNodeName String Node name CamelKubernetesBuildsLabels Map Openshift Build labels CamelKubernetesBuildName String Openshift Build name CamelKubernetesBuildConfigsLabels Map Openshift Build Config labels CamelKubernetesBuildConfigName String Openshift Build Config name CamelKubernetesEventAction io.fabric8.kubernetes.client.Watcher.Action Action watched by the consumer CamelKubernetesEventTimestamp String Timestamp of the action watched by the consumer CamelKubernetesConfigMapName String ConfigMap name CamelKubernetesConfigMapsLabels Map ConfigMap labels 
CamelKubernetesConfigData Map ConfigMap Data 195.2. Usage 195.2.1. Producer examples Here we show some examples of using the camel-kubernetes producers. 195.2.2. Create a pod from("direct:createPod") .toF("kubernetes-pods://%s?oauthToken=%s&operation=createPod", host, authToken); By using the KubernetesConstants.KUBERNETES_POD_SPEC header you can specify your PodSpec and pass it to this operation. 195.2.3. Delete a pod from("direct:deletePod") .toF("kubernetes-pods://%s?oauthToken=%s&operation=deletePod", host, authToken); By using the KubernetesConstants.KUBERNETES_POD_NAME header you can specify your Pod name and pass it to this operation. | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kubernetes</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"from(\"direct:createPod\") .toF(\"kubernetes-pods://%s?oauthToken=%s&operation=createPod\", host, authToken);",
"from(\"direct:createPod\") .toF(\"kubernetes-pods://%s?oauthToken=%s&operation=deletePod\", host, authToken);"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kubernetes_components |
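As a complement to the producer examples above, the following sketch (not taken from the original documentation) shows how the headers listed in section 195.1 can be supplied when invoking the delete route; camelContext is assumed to be an existing CamelContext, and the namespace and pod names are placeholders.
import java.util.HashMap;
import java.util.Map;
import org.apache.camel.ProducerTemplate;

// Pass the target namespace and pod name as message headers when calling the
// "direct:deletePod" route defined in section 195.2.3.
ProducerTemplate template = camelContext.createProducerTemplate();
Map<String, Object> headers = new HashMap<>();
headers.put("CamelKubernetesNamespaceName", "default");
headers.put("CamelKubernetesPodName", "my-pod");
template.sendBodyAndHeaders("direct:deletePod", null, headers);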
Chapter 6. Populating Directory Databases | Chapter 6. Populating Directory Databases Databases contain the directory data managed by Red Hat Directory Server. 6.1. Importing Data Directory Server can populate a database with data by: Importing data Important To import data, you must store the LDIF file that you want to import in the /var/lib/dirsrv/slapd- instance_name /ldif/ directory. Directory Server uses PrivateTmp systemd directive by default. As a result, if you export LDIF files into the /tmp/ or /var/tmp/ system directories, Directory Server does not see these LDIF files during import. For more information about PrivateTmp , see systemd.exec(5) man page. Initializing a database for replication The following table describes the differences between an import and initializing databases: Table 6.1. Import Method Comparison Action Import Initialize Database Overwrites database No Yes LDAP operations Add, modify, delete Add only Performance More time-consuming Fast Partition specialty Works on all partitions Local partitions only Response to server failure Best effort (all changes made up to the point of the failure remain) Atomic (all changes are lost after a failure) LDIF file location Local to the web console Local to the web console or local to server Imports configuration information ( cn=config ) Yes No 6.1.1. Setting EntryUSN Initial Values During Import Entry update sequence numbers (USNs) are not preserved when entries are exported from one server and imported into another. As Section 4.1, "Tracking Modifications to the Database through Update Sequence Numbers" explains, entry USNs are assigned for operations that happen on a local server, so it does not make sense to import those USNs onto another server. However, it is possible to configure an initial entry USN value for entries when importing a database or initializing a database (such as when a replica is initialized for replication). This is done by setting the nsslapd-entryusn-import-initval parameter, which sets a starting USN for all imported entries. There are two possible values for nsslapd-entryusn-import-initval : An integer, which is the explicit start number used for every imported entry. , which means that every imported entry uses whatever the highest entry USN value was on the server before the import operation, incremented by one. If nsslapd-entryusn-import-initval is not set, then all entry USNs begin at zero. Example 6.1. How the nsslapd-entryusn-import-initval Parameter works For example, if the highest value on the server is 1000 before the import or initialization operation, and the nsslapd-entryusn-import-initval value is , then every imported entry is assigned a USN of 1001 : To set an initial value for entry USNs, add the nsslapd-entryusn-import-initval parameter to the server into which data are being imported or to the supplier server which will perform the initialization. For example: Note In multi-supplier replication, the nsslapd-entryusn-import-initval parameter is not replicated between servers. This means that the value must be set specifically on whichever supplier server is being used to initialize a replica. For example, if the Supplier1 host has nsslapd-entryusn-import-initval set to and is used to initialize a replica, then the entry USNs for imported entries have the highest value plus one. 
If the Supplier2 host does not have nsslapd-entryusn-import-initval set and is used to initialize a replica, then all entry USNs for imported entries begin at zero - even if Supplier1 and Supplier2 have a multi-supplier replication agreement between them. 6.1.2. Importing Using the Command Line Directory Server supports importing data while the instance is running or while the instance is offline: Use one of the following methods if the instance is running: Use the dsconf backend import command. See Section 6.1.2.1.1, "Importing Using the dsconf backend import Command" . Create a cn=tasks entry. See Section 6.1.2.1.2, "Importing Data Using a cn=tasks Entry" . If the instance is offline, use the dsctl ldif2db command. See Section 6.1.2.2, "Importing Data While the Server is Offline" . Warning When you start an import operation, Directory Server first removes all existing data from the database and subsequently imports the data from the LDIF file. If the import fails, for example, because the LDIF file does not exist, the server has already removed the data from the database. Note that the LDIF files used for import operations must use UTF-8 character set encoding. Import operations do not convert data from the local character set encoding to UTF-8 . Additionally, all imported LDIF files must contain the root suffix entry. Directory Server runs import operations as the dirsrv user. Therefore, the permissions of the LDIF file must allow this user to read the file. 6.1.2.1. Importing Data While the Server is Running This section describes how you can import data while Directory Server is running. 6.1.2.1.1. Importing Using the dsconf backend import Command Use the dsconf backend import command to automatically create a task that imports data from an LDIF file. For example, to import the /var/lib/dirsrv/slapd- instance_name /ldif/ instance_name - database_name - time_stamp .ldif file into the userRoot database: Create the suffix if it does not exist. For details, see Section 2.1.1, "Creating Suffixes" . If the LDIF you want to import does not contain statements that add the suffix entry, create this entry manually as described in Section 3.1.3.3, "Creating a Root Entry" . Import the LDIF file: The dsconf backend import command supports additional options, for example, to exclude a specific suffix. To display all available options, enter: 6.1.2.1.2. Importing Data Using a cn=tasks Entry The cn=tasks,cn=config entry in the Directory Server configuration is a container entry for temporary entries the server uses to manage tasks. To initiate an import operation, create a task in the cn=import,cn=tasks,cn=config entry. An import task entry requires the following attributes: cn : Sets the unique name of the task. nsFilename : Sets the name of the LDIF file to import. nsInstance : Sets the name of the database into which the file should be imported. Import tasks support additional parameters, for example, to exclude suffixes. For a complete list, see the cn=import section in the Red Hat Directory Server Configuration, Command, and File Reference . For example, to add a task that imports the content of the /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif file into the userRoot database: Create the suffix if it does not exist. For details, see Section 2.1.1, "Creating Suffixes" . If the LDIF you want to import does not contain statements that add the suffix entry, create this entry manually as described in Section 3.1.3.3, "Creating a Root Entry" . 
Add the import task: When the task is completed, the entry is removed from the directory configuration. 6.1.2.2. Importing Data While the Server is Offline If the server is offline when you import data, use the dsctl ldif2db command: Create the suffix if it does not exist. For details, see Section 2.1.1, "Creating Suffixes" . If the LDIF you want to import does not contain statements that add the suffix entry, create this entry manually as described in Section 3.1.3.3, "Creating a Root Entry" . Stop the instance: Import the data from the LDIF file. For example, to import the /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif file into the userRoot database: Warning If the database specified in the command does not correspond with the suffix contained in the LDIF file, all data contained in the database is deleted, and the import fails. Start the instance: 6.1.3. Importing Data Using the Web Console To import data from an LDIF file using the web console: Create the suffix if it does not exist. For details, see Section 2.1.1, "Creating Suffixes" . If the LDIF you want to import does not contain statements that add the suffix entry, create this entry manually as described in Section 3.1.3.3, "Creating a Root Entry" . Store the LDIF file you want to import in the /var/lib/dirsrv/slapd- instance_name /ldif/ directory. Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix entry. Click Suffix Tasks , and select Initialize Suffix . Select the LDIF file to import or enter the full path to the file. Select Yes, I am sure. , and click Initialize Database to confirm. | [
"ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x \"(cn=*)\" entryusn dn: dc=example,dc=com entryusn: 1001 dn: ou=Accounting,dc=example,dc=com entryusn: 1001 dn: ou=Product Development,dc=example,dc=com entryusn: 1001 dn: uid= user_name ,ou=people,dc=example,dc=com entryusn: 1001",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-entryusn-import-initval= next",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend import userRoot /var/lib/dirsrv/slapd- instance_name /ldif/ instance_name - database_name - time_stamp .ldif The import task has finished successfully",
"dsconf ldap://server.example.com backend import --help",
"ldapadd -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn= example_import ,cn=import,cn=tasks,cn=config changetype: add objectclass: extensibleObject cn: example_import nsFilename: /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif nsInstance: userRoot",
"dsctl instance_name stop",
"dsctl instance_name ldif2db userroot /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif OK group dirsrv exists OK user dirsrv exists [17/Jul/2018:13:42:42.015554231 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [17/Jul/2018:13:42:44.302630629 +0200] - INFO - import_main_offline - import userroot: Import complete. Processed 160 entries in 2 seconds. (80.00 entries/sec) ldif2db successful",
"dsctl instance_name start"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/populating_directory_databases |
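The import procedures above assume that the root suffix entry already exists in the database. As a minimal sketch (the suffix, server URL, and bind DN are illustrative), the entry can be created manually with ldapadd before running the import:
# Create the root entry for the dc=example,dc=com suffix if the LDIF being imported does not contain it
ldapadd -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example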
Chapter 3. Configuring and Setting Up Remote Jobs | Chapter 3. Configuring and Setting Up Remote Jobs Use this section as a guide to configuring Satellite to execute jobs on remote hosts. Any command that you want to apply to a remote host must be defined as a job template. After you have defined a job template you can execute it multiple times. 3.1. About Running Jobs on Hosts You can run jobs on hosts remotely from Capsules using shell scripts or Ansible tasks and playbooks. This is referred to as remote execution. For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on the Capsule base operating system. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed. Communication occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. Remote execution uses the SSH service that must be enabled and running on the target host. Ensure that the remote execution Capsule has access to port 22 on the target hosts. Satellite uses ERB syntax job templates. For more information, see Template Writing Reference in the Managing Hosts guide. Several job templates for shell scripts and Ansible are included by default. For more information, see Setting up Job Templates . Note Any Capsule Server base operating system is a client of Satellite Server's internal Capsule, and therefore this section applies to any type of host connected to Satellite Server, including Capsules. You can run jobs on multiple hosts at once, and you can use variables in your commands for more granular control over the jobs you run. You can use host facts and parameters to populate the variable values. In addition, you can specify custom values for templates when you run the command. For more information, see Executing a Remote Job . 3.2. Remote Execution Workflow When you run a remote job on hosts, for every host, Satellite performs the following actions to find a remote execution Capsule to use. Satellite searches only for Capsules that have the Ansible feature enabled. Satellite finds the host's interfaces that have the Remote execution checkbox selected. Satellite finds the subnets of these interfaces. Satellite finds remote execution Capsules assigned to these subnets. From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the jobs load is balanced between remote execution Capsules. If you have enabled Prefer registered through Capsule for remote execution , Satellite runs the REX job using the Capsule the host is registered to. By default, Prefer registered through Capsule for remote execution is set to No . To enable it, in the Satellite web UI, navigate to Administer > Settings , and on the Content tab, set Prefer registered through Capsule for remote execution to Yes . This ensures that Satellite performs REX jobs on hosts by the Capsule to which they are registered to. If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. 
Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. 3.3. Permissions for Remote Execution You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles: Remote Execution Manager : Can access all remote execution features and functionality. Remote Execution User : Can only run jobs. You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role. The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. You can run remote execution jobs against Red Hat Satellite and Capsule registered as hosts to Red Hat Satellite with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered Red Hat Satellite and Capsule hosts. For more information on working with roles and permissions, see Creating and Managing Roles in the Administering Red Hat Satellite guide. The following example shows filters for the execute_template_invocation permission: Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com . Use the third line to bind the template with a host group. Note Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution. 3.4. Creating a Job Template Use this procedure to create a job template. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Job templates . Click New Job Template . Click the Template tab, and in the Name field, enter a unique name for your job template. Select Default to make the template available for all organizations and locations. Create the template directly in the template editor or upload it from a text file by clicking Import . Optional: In the Audit Comment field, add information about the change. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories . 
Optional: In the Description Format field, enter a description template. For example, Install package %{package_name} . You can also use %{template_name} and %{job_category} in your template. From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks. Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab. Optional: Click Foreign input set to include other templates in this job. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet . Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. You can extend and customize job templates by including other templates in the template syntax. For more information, see the appendices in the Managing Hosts guide. CLI procedure To create a job template using a template-definition file, enter the following command: 3.5. Configuring the Fallback to Any Capsule Remote Execution Setting in Satellite You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts. This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled. If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Fallback to Any Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. For example, to set the value to true , enter the following command: 3.6. Configuring the Global Capsule Remote Execution Setting in Satellite By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets. If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Enable Global Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. 
For example, to set the value to true , enter the following command: 3.7. Configuring Satellite to Use an Alternative Directory to Execute Remote Jobs on Hosts Ansible puts its own files it requires into the $HOME/.ansible/tmp directory, where $HOME is the home directory of the remote user. You have the option to set a different directory if required. Procedure Create a new directory, for example new_place : Copy the SELinux context from the default var directory: Configure the system: 3.8. Distributing SSH Keys for Remote Execution To use SSH keys for authenticating remote execution connections, you must distribute the public SSH key from Capsule to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22. Use one of the following methods to distribute the public SSH key from Capsule to target hosts: Section 3.9, "Distributing SSH Keys for Remote Execution Manually" . Section 3.10, "Using the Satellite API to Obtain SSH Keys for Remote Execution" . Section 3.11, "Configuring a Kickstart Template to Distribute SSH Keys during Provisioning" . For new Satellite hosts, you can deploy SSH keys to Satellite hosts during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template . Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default. If the hosts are running on Amazon Web Services, enable password authentication. For more information, see https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance . 3.9. Distributing SSH Keys for Remote Execution Manually To distribute SSH keys manually, complete the following steps: Procedure Enter the following command on Capsule. Repeat for each target host you want to manage: To confirm that the key was successfully copied to the target host, enter the following command on Capsule: 3.10. Using the Satellite API to Obtain SSH Keys for Remote Execution To use the Satellite API to download the public key from Capsule, complete this procedure on each target host. Procedure On the target host, create the ~/.ssh directory to store the SSH key: Download the SSH key from Capsule: Configure permissions for the ~/.ssh directory: Configure permissions for the authorized_keys file: 3.11. Configuring a Kickstart Template to Distribute SSH Keys during Provisioning You can add a remote_execution_ssh_keys snippet to your custom kickstart template to deploy SSH Keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Therefore, Satellite copies the SSH key for remote execution to the systems during provisioning. Procedure To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use: 3.12. Configuring a keytab for Kerberos Ticket Granting Tickets Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets. Procedure Find the ID of the foreman-proxy user: Modify the umask value so that new files have the permissions 600 : Create the directory for the keytab: Create a keytab or copy an existing keytab to the directory: Change the directory owner to the foreman-proxy user: Ensure that the keytab file is read-only: Restore the SELinux context: 3.13.
Configuring Kerberos Authentication for Remote Execution You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts. Prerequisites Enroll Satellite Server on the Kerberos server Enroll the Satellite target host on the Kerberos server Configure and initialize a Kerberos user account for remote execution Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket Procedure To install and enable Kerberos authentication for remote execution, enter the following command: To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account. To confirm that Kerberos authentication is ready to use, run a remote job on the host. 3.14. Setting up Job Templates Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Job templates . If you want to use a template without making changes, proceed to Executing a Remote Job . You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone. Procedure To clone a template, in the Actions column, select Clone . Enter a unique name for the clone and click Submit to save the changes. Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in the Managing Hosts guide. Ansible Considerations To create an Ansible job template, use the following procedure and instead of ERB syntax, use YAML syntax. Begin the template with --- . You can embed an Ansible playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible playbooks in Satellite. For more information, see Synchronizing Repository Templates in the Managing Hosts guide. Parameter Variables At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host's edit page can be used as input parameters for job templates. If you do not want your Ansible job template to accept parameter variables at run time, in the Satellite web UI, navigate to Administer > Settings and click the Ansible tab. In the Top level Ansible variables row, change the Value parameter to No . 3.15. Executing a Remote Job You can execute a job that is based on a job template against one or more hosts. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target hosts on which you want to execute a remote job. You can use the search field to filter the host list. From the Select Action list, select Schedule Remote Job . On the Job invocation page, define the main job settings: Select the Job category and the Job template you want to use. Optional: Select a stored search string in the Bookmark list to specify the target hosts. Optional: Further limit the targeted hosts by entering a Search query . The Resolves to line displays the number of hosts affected by your query. Use the refresh button to recalculate the number after changing the query. The preview icon lists the targeted hosts. 
The remaining settings depend on the selected job template. See Creating a Job Template for information on adding custom parameters to a template. Optional: To configure advanced settings for the job, click Display advanced fields . Some of the advanced settings depend on the job template, the following settings are general: Effective user defines the user for executing the job, by default it is the SSH user. Concurrency level defines the maximum number of jobs executed at once, which can prevent overload of systems' resources in a case of executing the job on a large number of hosts. Timeout to kill defines time interval in seconds after which the job should be killed, if it is not finished already. A task which could not be started during the defined interval, for example, if the task took too long to finish, is canceled. Type of query defines when the search query is evaluated. This helps to keep the query up to date for scheduled tasks. Execution ordering determines the order in which the job is executed on hosts: alphabetical or randomized. Concurrency level and Timeout to kill settings enable you to tailor job execution to fit your infrastructure hardware and needs. To run the job immediately, ensure that Schedule is set to Execute now . You can also define a one-time future job, or set up a recurring job. For recurring tasks, you can define start and end dates, number and frequency of runs. You can also use cron syntax to define repetition. For more information about cron, see the Automating System Tasks section of the Red Hat Enterprise Linux 7 System Administrator's Guide . Click Submit . This displays the Job Overview page, and when the job completes, also displays the status of the job. CLI procedure Enter the following command on Satellite: To execute a remote job with custom parameters, complete the following steps: Find the ID of the job template you want to use: Show the template details to see parameters required by your template: Execute a remote job with custom parameters: Replace query with the filter expression that defines hosts, for example "name ~ rex01" . For more information about executing remote commands with hammer, enter hammer job-template --help and hammer job-invocation --help . 3.16. Scheduling a Recurring Ansible Job for a Host You can schedule a recurring job to run Ansible roles on hosts. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job. On the Ansible tab, select Jobs . Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs . 3.17. Scheduling a Recurring Ansible Job for a Host Group You can schedule a recurring job to run Ansible roles on host groups. Procedure In the Satellite web UI, navigate to Configure > Host groups . In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for. Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . 3.18. Monitoring Jobs You can monitor the progress of the job while it is running. This can help in any troubleshooting that may be required. Ansible jobs run on batches of 100 hosts, so you cannot cancel a job running on a specific host. 
A job completes only after the Ansible playbook runs on all hosts in the batch. Procedure In the Satellite web UI, navigate to Monitor > Jobs . This page is automatically displayed if you triggered the job with the Execute now setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run that you want to inspect. On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running. In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time. Click Back to Job at any time to return to the Job Details page. CLI procedure To monitor the progress of a job while it is running, complete the following steps: Find the ID of a job: Monitor the job output: Optional: To cancel a job, enter the following command:
"name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers",
"# hammer job-template create --file \" path_to_template_file \" --name \" template_name \" --provider-type SSH --job-category \" category_name \"",
"hammer settings set --name=remote_execution_fallback_proxy --value=true",
"hammer settings set --name=remote_execution_global_proxy --value=true",
"mkdir / remote_working_dir",
"chcon --reference=/var /remote_working_dir",
"satellite-installer --foreman-proxy-plugin-ansible-working-dir /remote_working_dir",
"ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]",
"ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]",
"mkdir ~/.ssh",
"curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys",
"chmod 700 ~/.ssh",
"chmod 600 ~/.ssh/authorized_keys",
"<%= snippet 'remote_execution_ssh_keys' %>",
"id -u foreman-proxy",
"umask 077",
"mkdir -p \"/var/kerberos/krb5/user/ USER_ID \"",
"cp your_client.keytab /var/kerberos/krb5/user/ USER_ID /client.keytab",
"chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ USER_ID \"",
"chmod -wx \"/var/kerberos/krb5/user/ USER_ID /client.keytab\"",
"restorecon -RvF /var/kerberos/krb5",
"satellite-installer --scenario satellite --foreman-proxy-plugin-remote-execution-ssh-ssh-kerberos-auth true",
"hammer settings set --name=remote_execution_global_proxy --value=false",
"hammer job-template list",
"hammer job-template info --id template_ID",
"# hammer job-invocation create --job-template \" template_name \" --inputs key1 =\" value \", key2 =\" value \",... --search-query \" query \"",
"# hammer job-invocation list",
"# hammer job-invocation output --id job_ID --host host_name",
"# hammer job-invocation cancel --id job_ID"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_red_hat_satellite_to_use_ansible/configuring_and_setting_up_remote_jobs_ansible |
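To tie the CLI procedure above together, the following is a minimal sketch of a complete remote-execution session with hammer. The template name Run Command - SSH Default, the command input, the host filter, and the host name are illustrative assumptions; substitute a job template, search query, and host that exist in your Satellite environment:

# 1. Find the template ID and check which input keys it expects (for example, "command"):
hammer job-template list | grep "Run Command"
hammer job-template info --id <template_ID>

# 2. Trigger the job against all hosts whose name contains "web" (assumed filter):
hammer job-invocation create \
  --job-template "Run Command - SSH Default" \
  --inputs command="systemctl restart httpd" \
  --search-query "name ~ web"

# 3. Follow the job: list invocations, then read the output for one host:
hammer job-invocation list
hammer job-invocation output --id <job_ID> --host web01.example.com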
Chapter 6. Catalog selection by labels or expressions | Chapter 6. Catalog selection by labels or expressions You can add metadata to a catalog by using labels in the custom resource (CR) of a cluster catalog. You can then filter catalog selection by specifying the assigned labels or using expressions in the CR of the cluster extension. The following cluster catalog CR adds the example.com/support label with the value of true to the catalog-a cluster catalog: Example cluster catalog CR with labels apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: catalog-a labels: example.com/support: "true" spec: source: type: Image image: ref: quay.io/example/content-management-a:latest The following cluster extension CR uses the matchLabels selector to select catalogs with the example.com/support label and the value of true : Example cluster extension CR with matchLabels selector apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchLabels: example.com/support: "true" You can use the matchExpressions field to perform more complex filtering for labels. The following cluster extension CR selects catalogs with the example.com/support label and a value of production or supported : Example cluster extension CR with matchExpression selector apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchExpressions: - key: example.com/support operator: In values: - "production" - "supported" Note If you use both the matchLabels and matchExpressions fields, the selected catalog must satisfy all specified criteria. | [
"apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: catalog-a labels: example.com/support: \"true\" spec: source: type: Image image: ref: quay.io/example/content-management-a:latest",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchLabels: example.com/support: \"true\"",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchExpressions: - key: example.com/support operator: In values: - \"production\" - \"supported\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extensions/olmv1-catalog-selection-by-labels-or-exp_catalog-content-resolution |
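As a quick way to check that a selector like the ones above will match something, you can inspect and adjust catalog labels from the command line. A minimal sketch; the clustercatalog resource name and the catalog and extension names are assumptions taken from the examples in this chapter:

# List the catalogs that already carry the label used by the ClusterExtension selector:
oc get clustercatalog -l example.com/support=true

# Add or correct the label on an existing catalog without editing the full CR:
oc label clustercatalog catalog-a example.com/support=true --overwrite

# After applying the ClusterExtension, confirm it resolved content from the expected catalog:
oc describe clusterextension <example_extension>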
Chapter 4. Installing RHEL AI on IBM cloud | To install and deploy Red Hat Enterprise Linux AI on IBM Cloud, you must first convert the RHEL AI image into an IBM Cloud image. You can then launch an instance using the IBM Cloud image and deploy RHEL AI on an IBM Cloud machine. 4.1. Converting the RHEL AI image into an IBM Cloud image. To create a bootable image in IBM Cloud, you must configure your IBM Cloud accounts, set up a Cloud Object Storage (COS) bucket, and create an IBM Cloud image using the RHEL AI image. Prerequisites You installed the IBM CLI on your specific machine. For more information about installing the IBM Cloud CLI, see Installing the stand-alone IBM Cloud CLI . Procedure Log in to IBM Cloud with the following command: USD ibmcloud login When prompted, select your desired account to log in to. Example output of the login USD ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating... OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP' You need to set up various IBM Cloud configurations and create your COS bucket before generating a QCOW2 image. You can install the necessary IBM Cloud plugins by running the following command: USD ibmcloud plugin install cloud-object-storage infrastructure-service Set your preferred resource group. The following example command sets the resource group named Default . USD ibmcloud target -g Default Set your preferred region. The following example command sets the us-east region. USD ibmcloud target -r us-east You need to select a deployment plan for your service instance. Ensure you check the properties and pricing on the IBM Cloud website.
You can list the available deployment plans by running the following command: USD ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name' The following example command uses the premium-global-deployment plan and puts it in the environment variable cos_deploy_plan : USD cos_deploy_plan=premium-global-deployment Save the name of your Cloud Object Storage (COS) service instance in an environment variable named cos_si_name and create the cloud-object-storage service instance by running the following commands: USD cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE USD ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan} Get the Cloud Resource Name (CRN) for your Cloud Object Storage (COS) service instance, store it in a variable named cos_crn , and configure the COS CLI to use it by running the following commands: USD cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains("cloud-object-storage")) | .crn') USD ibmcloud cos config crn --crn USD{cos_crn} --force Create your Cloud Object Storage (COS) bucket, with its name stored in the environment variable bucket_name , by running the following commands: USD bucket_name=NAME_OF_MY_BUCKET USD ibmcloud cos bucket-create --bucket USD{bucket_name} Allow the infrastructure service to read the buckets in the service instance, whose GUID is stored in the USD{cos_si_guid} variable, by running the following commands: USD cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains("cloud-object-storage")) | .guid') USD ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid} Now that your IBM Cloud Object Storage (COS) service instance and bucket are set up, you need to download the QCOW2 image from the Red Hat Enterprise Linux AI download page. Copy the QCOW2 image link and add it to the following command: USD curl -Lo disk.qcow2 "PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE" Set the name that you want to use for the RHEL AI IBM Cloud image: USD image_name=rhel-ai-20240703v0 Upload the QCOW2 image to the Cloud Object Storage (COS) bucket by running the following command: USD ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region> Convert the QCOW2 image that you uploaded to an IBM Cloud image with the following command: USD ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol Once the job launches, store the ID of the IBM Cloud image in a variable called image_id by running the following command: USD image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name=="'USDimage_name'") | .id') You can view the progress of the job with the following command: USD while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done You can view the information of the newly created image with the following command: USD ibmcloud is image USD{image_id} 4.2. Deploying your instance on IBM Cloud using the CLI You can launch an instance with your new RHEL AI IBM Cloud image from the IBM Cloud web console or the CLI. You can use whichever method of deployment you want to launch your instance.
The following procedure displays how you can use the CLI to launch an IBM Cloud instance with the custom IBM Cloud image. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI IBM Cloud image. For more information, see "Converting the RHEL AI image to an IBM Cloud image". You installed the IBM CLI on your specific machine. See Installing the stand-alone IBM Cloud CLI . You configured your Virtual private cloud (VPC). You created a subnet for your instance. Procedure Log in to your IBM Cloud account and select the Account, Region and Resource Group by running the following command: USD ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP> Before launching your IBM Cloud instance on the CLI, you need to create several configuration variables for your instance. Install the infrastructure-service plugin for IBM Cloud by running the following command: USD ibmcloud plugin install infrastructure-service You need to create an SSH public key for your IBM Cloud account. IBM Cloud supports RSA and ed25519 keys. The following example command uses the ed25519 key type and names it ibmcloud . USD ssh-keygen -f ibmcloud -t ed25519 You can now upload the public key to your IBM Cloud account by running the following example command: USD ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519 You need to create a Floating IP for your IBM Cloud instance by running the following example command. Ensure you change the region to your preferred zone. USD ibmcloud is floating-ip-reserve my-public-ip --zone <region> You need to select the instance profile that you want to use for the deployment. List all the profiles by running the following command: USD ibmcloud is instance-profiles Make a note of your preferred instance profile; you will need it for your instance deployment. You can now start creating your IBM Cloud instance. Populate environment variables for when you create the instance. name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250 You can now launch your instance by running the following command: USD ibmcloud is instance-create \ USDname \ USDvpc \ USDzone \ USDinstance_profile \ USDsubnet \ --image USDimage \ --keys USDsshkey \ --boot-volume '{"name": "'USD{name}'-boot", "volume": {"name": "'USD{name}'-boot", "capacity": 'USD{disk_size}', "profile": {"name": "general-purpose"}}}' \ --allow-ip-spoofing false Link the Floating IP to the instance by running the following command: USD ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname User account The default user account in the RHEL AI image is cloud-user . It has all permissions via sudo without a password. Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab.
data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model model_list serve model serve sysinfo system info test model test train model train 4.3. Adding more storage to your IBM Cloud instance In IBM Cloud, there is a size restriction of 250 GB of storage on the main IBM Cloud disk. RHEL AI might require more storage for models and generation data. You can add more storage by attaching an extra disk to your instance and using it to hold data for RHEL AI. Prerequisites You have an IBM Cloud RHEL AI instance. Procedure Create an environment variable called name that has the name of your instance by running the following command: USD name=my-rhelai-instance Set the size of the new volume by running the following command: USD data_volume_size=1000 Create and attach the instance volume by running the following command: USD ibmcloud is instance-volume-attachment-add data USD{name} \ --new-volume-name USD{name}-data \ --profile general-purpose \ --capacity USD{data_volume_size} You can list all the disks with the following command: USD lsblk Create a disk variable with the content of the disk path you are using. The following example command uses the /dev/vdb path. USD disk=/dev/vdb Create a partition on your disk by running the following command: USD sgdisk -n 1:0:0 USDdisk Format and label the partition by running the following command: USD mkfs.xfs -L ilab-data USD{disk}1 You can configure your system to automatically mount the disk to your preferred directory. The following example command uses the /mnt directory. USD echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab Reload the systemd configuration so that it acknowledges the new mount configuration by running the following command: USD systemctl daemon-reload Mount the disk with the following command: USD mount -a Grant write permissions to all users in the new file system by running the following command: USD chmod 1777 /mnt/ 4.4. Adding a data storage directory to your instance By default, RHEL AI holds configuration data in the USDHOME directory. You can change this default to a different directory for holding InstructLab data. Prerequisites You have a Red Hat Enterprise Linux AI instance. You added an extra storage disk to your instance. Procedure You can configure the ILAB_HOME environment variable by writing it to the USDHOME/.bash_profile file by running the following command: USD echo 'export ILAB_HOME=/mnt' >> USDHOME/.bash_profile You can make that change effective by reloading the USDHOME/.bash_profile file with the following command: USD source USDHOME/.bash_profile
"ibmcloud login",
"ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'",
"ibmcloud plugin install cloud-object-storage infrastructure-service",
"ibmcloud target -g Default",
"ibmcloud target -r us-east",
"ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name'",
"cos_deploy_plan=premium-global-deployment",
"cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE",
"ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan}",
"cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .crn')",
"ibmcloud cos config crn --crn USD{cos_crn} --force",
"bucket_name=NAME_OF_MY_BUCKET",
"ibmcloud cos bucket-create --bucket USD{bucket_name}",
"cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .guid')",
"ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid}",
"curl -Lo disk.qcow2 \"PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE\"",
"image_name=rhel-ai-20240703v0",
"ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region>",
"ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol",
"image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name==\"'USDimage_name'\") | .id')",
"while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done",
"ibmcloud is image USD{image_id}",
"ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP>",
"ibmcloud plugin install infrastructure-service",
"ssh-keygen -f ibmcloud -t ed25519",
"ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519",
"ibmcloud is floating-ip-reserve my-public-ip --zone <region>",
"ibmcloud is instance-profiles",
"name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250",
"ibmcloud is instance-create USDname USDvpc USDzone USDinstance_profile USDsubnet --image USDimage --keys USDsshkey --boot-volume '{\"name\": \"'USD{name}'-boot\", \"volume\": {\"name\": \"'USD{name}'-boot\", \"capacity\": 'USD{disk_size}', \"profile\": {\"name\": \"general-purpose\"}}}' --allow-ip-spoofing false",
"ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model model_list serve model serve sysinfo system info test model test train model train",
"name=my-rhelai-instance",
"data_volume_size=1000",
"ibmcloud is instance-volume-attachment-add data USD{name} --new-volume-name USD{name}-data --profile general-purpose --capacity USD{data_volume_size}",
"lsblk",
"disk=/dev/vdb",
"sgdisk -n 1:0:0 USDdisk",
"mkfs.xfs -L ilab-data USD{disk}1",
"echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab",
"systemctl daemon-reload",
"mount -a",
"chmod 1777 /mnt/",
"echo 'export ILAB_HOME=/mnt' >> USDHOME/.bash_profile",
"source USDHOME/.bash_profile"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/installing/installing_ibm_cloud |
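The storage steps in sections 4.3 and 4.4 can also be collected into a single script that you run on the instance after the data volume is attached. A minimal sketch; the /dev/vdb device path and the /mnt mount point are assumptions, so confirm the device with lsblk before running it:

#!/bin/bash
set -euo pipefail

disk=/dev/vdb        # confirm with: lsblk
mount_point=/mnt

sgdisk -n 1:0:0 "${disk}"                  # one partition spanning the new disk
mkfs.xfs -L ilab-data "${disk}1"           # label it so /etc/fstab can mount it by label
echo "LABEL=ilab-data ${mount_point} xfs defaults 0 0" >> /etc/fstab
systemctl daemon-reload
mount -a
chmod 1777 "${mount_point}"                # allow all users to write, like /tmp

# Point InstructLab at the new disk for its data:
echo "export ILAB_HOME=${mount_point}" >> "${HOME}/.bash_profile"
# New login shells pick this up automatically; run `source ~/.bash_profile` in your current shell.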
Chapter 2. Known and fixed issues | Learn about known issues in Data Grid and find out which issues are fixed. 2.1. Known Issues for Data Grid For issues that affect Data Grid clusters that you manage with Data Grid Operator, you should refer to the Data Grid Operator 8.5 release notes . JGroups address defaults to an external IP Issue: JDG-6053 Description: In bare metal deployments, when JGroups binds to an external IP address without authentication configured by default, the connection is not secure, posing a risk of unauthorized access or manipulation. Workaround: Secure the connection in one of the following ways: Configure JGroups security to control the network so only authorized nodes can join. For more information, see Encrypting cluster transport . Use the -Djgroups.bind.address=<internal-network> parameter when starting Data Grid Server to set the JGroups address to a secure internal network. Inconsistent transactions when network partitions occur Issue: JDG-3935 Description: In scenarios where a network partition occurs for a Data Grid cluster, transactions are rolled back after the partition is healed. Workaround: There is no workaround for this issue. Data Grid conflict resolution performance Issue: JDG-3636 Description: In some test cases, Data Grid partition handling functionality took longer than expected to perform conflict resolution. Workaround: There is no workaround for this issue. 2.2. Fixed in Data Grid 8.5.2 See Issues fixed in Red Hat Data Grid 8.5.2 to view the list of issues fixed in this release. 2.3. Fixed in Data Grid 8.5.1 See Issues fixed in Red Hat Data Grid 8.5.1 to view the list of issues fixed in this release. 2.4. Fixed in Data Grid 8.5.0 Data Grid 8.5.0 includes the following notable fixes: JDG-6918 View change during a cache join can lead to not replicating data JDG-6463 Elements in collections are not properly limited JDG-7061 Concrete config is validated before applying template configuration JDG-7095 Cross site view change event logs stale view. JDG-6986 Fix out-of-order query request serialization JDG-6431 Cache local address on demand 2.5. Host system and dependency issues In some cases, Data Grid deployments can encounter errors that are caused by the host system or an external dependency. This section provides details about any such known issues as well as troubleshooting and workaround procedures. Nashorn JavaScript engine If your Data Grid Server uses JavaScript to automate tasks, you must install the Nashorn JavaScript engine to ensure that these scripts can run on Data Grid 8.4. This is because OpenJDK 17 has removed support for the Nashorn JavaScript engine, its APIs, and the jjs tool. For a bare-metal Data Grid Server, you can install Nashorn from the Maven central repository by issuing the following command in your Data Grid CLI: bin/cli.sh install org.openjdk.nashorn:nashorn-core:15.4 \ org.ow2.asm:asm:7.3.1 \ org.ow2.asm:asm-util:7.3.1 On OpenShift, you can create an Infinispan Custom Resource (CR) that configures the Data Grid Operator to install Nashorn for your Data Grid cluster. For example: apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 dependencies: artifacts: - maven: org.openjdk.nashorn:nashorn-core:15.4 - maven: org.ow2.asm:asm:7.3.1 - maven: org.ow2.asm:asm-util:7.3.1 service: type: DataGrid
"bin/cli.sh install org.openjdk.nashorn:nashorn-core:15.4 org.ow2.asm:asm:7.3.1 org.ow2.asm:asm-util:7.3.1",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 dependencies: artifacts: - maven: org.openjdk.nashorn:nashorn-core:15.4 - maven: org.ow2.asm:asm:7.3.1 - maven: org.ow2.asm:asm-util:7.3.1 service: type: DataGrid"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/red_hat_data_grid_8.5_release_notes/rhdg-issues |
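For the JGroups workaround above, the bind address is passed as a JVM system property when the server starts. A minimal sketch for a bare-metal server; the bin/server.sh launcher and the RFC 5737 example address are assumptions, so use the start script and the internal interface address that apply to your installation:

# Pin JGroups to an internal address instead of the externally routable default:
bin/server.sh -Djgroups.bind.address=192.0.2.10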
13.7. The Installation Summary Screen | The Installation Summary screen is the central location for setting up an installation. Figure 13.4. The Installation Summary Screen Instead of directing you through consecutive screens, the Red Hat Enterprise Linux installation program allows you to configure your installation in the order you choose. Use your mouse to select a menu item to configure a section of the installation. When you have completed configuring a section, or if you would like to complete that section later, click the Done button located in the upper left corner of the screen. Only sections marked with a warning symbol are mandatory. A note at the bottom of the screen warns you that these sections must be completed before the installation can begin. The remaining sections are optional. Beneath each section's title, the current configuration is summarized. Using this, you can determine whether you need to visit the section to configure it further. Once all required sections are complete, click the Begin Installation button. Also see Section 13.18, "Begin Installation" . To cancel the installation, click the Quit button. Note When related background tasks are running, certain menu items might be temporarily unavailable. If you used a Kickstart option or a boot command-line option to specify an installation repository on a network, but no network is available at the start of the installation, the installation program will display the configuration screen for you to set up a network connection prior to displaying the Installation Summary screen. Figure 13.5. Network Configuration Screen When No Network Is Detected You can skip this step if you are installing from an installation DVD or other locally accessible media, and you are certain you will not need a network connection to finish the installation. However, network connectivity is necessary for network installations (see Section 8.11, "Installation Source" ) or for setting up advanced storage devices (see Section 8.15, "Storage Devices" ). For more details about configuring a network in the installation program, see Section 8.12, "Network & Hostname" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-graphical-installation-summary-ppc
Chapter 9. Uninstalling a cluster on OpenStack | Chapter 9. Uninstalling a cluster on OpenStack You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP). 9.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note If you deployed your cluster to the AWS C2S Secret Region, the installation program does not support destroying the cluster; you must manually remove the cluster resources. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_openstack/uninstalling-cluster-openstack |
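After the destroy command completes, it is worth scanning the RHOSP project for anything the installer could not remove. A minimal sketch; the installation directory and the my-cluster name filter are assumptions, and leftover resources normally carry the cluster's infrastructure ID in their names:

./openshift-install destroy cluster --dir <installation_directory> --log-level info

# Look for stragglers that still reference the cluster name or infrastructure ID:
openstack server list | grep my-cluster
openstack volume list | grep my-cluster
openstack port list | grep my-cluster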
Appendix A. Additional information | Appendix A. Additional information A.1. Configuration guidance The following configuration guidance is intended to provide a framework for creating Hyperconverged Infrastructure environments. This guidance is not intended to provide definitive configuration parameters for every Red Hat OpenStack Platform installation. Contact the Red Hat Customer Experience and Engagement team for specific guidance and suggestions that fit your specific environment. Cluster sizing and scale out Capacity planning and sizing A.1.1. Cluster sizing and scale out The Red Hat Ceph Storage Hardware Guide provides recommendations for IOPS optimized, throughput optimized, and cost and capacity optimized Ceph deployment scenarios. Follow the recommendation that best represents your deployment scenario and add the NICs, CPUs, and RAM required to support the Compute workload. An optimal, small footprint configuration consists of seven nodes. Unless you have a requirement for IOPS optimized performance in your environment and you are using all flash storage, the throughput optimized deployment scenario should be used. Three node Ceph Storage cluster configurations are possible. In this configuration, you should: use all flash storage. set the replica_count parameter to 3 in the ceph.conf file. set the min_size parameter to 2 in the ceph.conf file. If a node leaves service in this configuration, IOPS continue. To retain 3 copies of the data, replication to the third node is queued until it returns to service. Data is then backfilled to the third node. Note HCI configurations of up to 64 nodes have been tested. Some HCI environment examples have been documented up to 128 nodes. Large clusters such as these can be considered with a Support Exception and Consulting Services engagement. Contact the Red Hat Customer Experience and Engagement team for guidance. A deployment with two NUMA nodes can host a latency sensitive Compute workload on one NUMA node and Ceph OSDs services on the other. If there are network interfaces on both nodes, and the disk controllers are on node 0, use a network interface on node 0 for the Storage network and host the Ceph OSD workload on node 0. Host the Compute workload on node 1 and configure it to use the network interfaces on node 1. When acquiring hardware for your deployment, be mindful of which NICs will use which nodes and attempt to split them between storage and workload. A.1.2. Capacity planning and sizing The throughput optimized Ceph solution defined in the Red Hat Ceph Storage Hardware Guide provides a balanced solution for most deployments that do not require optimization for IOPS. In addition to the configuration guidelines provided with the solution, note the following when creating your environment: The allotment of 5 GB of RAM per OSD ensures OSDs have sufficient operational memory. Ensure your hardware can support this requirement. CPU speed should match the storage medium in use. The advantages of faster storage mediums such as SSDs can be negated by CPUs too slow to support them. Similarly, a fast CPU can be more efficiently used by faster storage mediums. Balance CPU and storage medium speed so that neither becomes a bottleneck for the other. A.2. Guides and resources for the configuration of your hyperconverged infrastructure environment The following guides contain additional information and procedures that can aid in the configuration of your hyperconverged infrastructure environment. 
Deploying Red Hat Ceph and OpenStack together with director This guide provides information about using the Red Hat OpenStack Platform director to create an overcloud with a Red Hat Ceph Storage cluster. This includes instructions for customizing your Ceph cluster through the director. Director Installation and Usage This guide provides guidance on the end-to-end deployment of a Red Hat OpenStack Platform environment. This includes installing the director, planning your environment, and creating an OpenStack environment with the director. Networking Guide This guide provides details on Red Hat OpenStack Platform networking tasks. Storage Guide This guide details the different procedures for using and managing persistent storage in a Red Hat OpenStack Platform environment. It also includes procedures for configuring and managing the respective OpenStack service of each persistent storage type. Bare Metal Provisioning This guide provides details on the installation and configuration of the Bare Metal Provisioning service in the overcloud of a Red Hat OpenStack Platform environment to provision and manage physical machines for cloud users. Security and Hardening Guide This guide provides good practice advice and conceptual information about hardening the security of a Red Hat OpenStack Platform environment. Release Notes This document outlines the major features, enhancements, and known issues in this release of Red Hat OpenStack Platform. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/hyperconverged_infrastructure_guide/additional_information |
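To make the 5 GB-per-OSD guideline from the capacity planning section concrete, the following is a back-of-the-envelope check of how much memory one hyperconverged node has left for the Compute workload. The OSD count, installed RAM, and host reservation are assumptions for illustration only:

# Worked example: per-node memory budget for a throughput-optimized HCI node
osds_per_node=12
osd_ram=$(( osds_per_node * 5 ))   # 60 GB reserved for Ceph OSDs (5 GB each)
host_reserve=16                    # GB assumed for the OS and OpenStack services
node_ram=256                       # GB assumed physically installed
vm_ram=$(( node_ram - osd_ram - host_reserve ))
echo "RAM left for instances: ${vm_ram} GB"   # 180 GB in this example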
16.2. MapReduceTask Distributed Execution | MapReduceTask is a distributed task that allows a large-scale computation to be transparently parallelized across Red Hat JBoss Data Grid cluster nodes. MapReduceTask can be instantiated with a reference to a cache containing data that is used as input for this task. JBoss Data Grid's execution environment can migrate and execute instances of provided Mapper and Reducer seamlessly across JBoss Data Grid nodes. Note Distributed execution does not work for local caches; it is recommended to run a replicated or distributed cache if this functionality is intended to be used. MapReduceTask distributed execution distributes the reduce phase execution. Previously, the reduce phase was performed on a single master task node. This limitation has been removed, and the reduce phase execution can now be distributed across the cluster as well. The distribution of the reduce phase is achieved by relying on consistent hashing. It is still possible to use MapReduceTask with the reduce phase performed on a single node, and this is recommended for smaller input tasks. Distributed Execution of the MapReduceTask occurs in three phases: Mapping phase. Outgoing Key and Outgoing Value Migration. Reduce phase. Map Phase MapReduceTask hashes task input keys and groups them by the execution node that they are hashed to. After key node mapping, MapReduceTask sends a map function and input keys to each node. The map function is invoked using the given keys and the corresponding locally loaded values. Results are collected with a Red Hat JBoss Data Grid supplied Collector, and the combine phase is initiated. A Combiner, if specified, takes KOut keys and immediately invokes the reduce phase on those keys. The result of the mapping phase executed on each node is a KOut/VOut map. There is one resulting map per execution node per launched MapReduceTask. Figure 16.1. Map Phase Intermediate KOut/VOut migration phase In order to proceed with the reduce phase, all intermediate keys and values must be grouped by intermediate KOut keys. As map phases around the cluster can produce identical intermediate keys, all identical intermediate keys and their values must be grouped before reduce is executed on any particular intermediate key. At the end of the combine phase, each intermediate KOut key is hashed and migrated with its VOut values to the JBoss Data Grid node where the KOut keys are hashed to. This is achieved using a temporary distributed cache and the underlying consistent hashing mechanism. Figure 16.2. KOut/VOut Migration Once the Map and Combine phases have finished execution, a list of KOut keys is returned to the master node that initiated the MapReduceTask. VOut values are not returned as they are not required at the master task node. MapReduceTask is ready to start with the reduce phase. Reduce Phase To complete the reduce phase, MapReduceTask groups KOut keys by the execution node N that they are hashed to. For each node and its grouped input KOut keys, MapReduceTask sends a reduce command to the node where those KOut keys are hashed. Once the reduce command is executed on the target execution node, it locates the temporary cache belonging to the MapReduce task. For each KOut key, the reduce command obtains a list of VOut values, wraps it with an Iterator, and invokes reduce on it. Figure 16.3. Reduce Phase The result of each reduce is a map where each key is KOut and each value is VOut. Each JBoss Data Grid execution node returns one map with KOut/VOut result values.
After all initiated reduce commands return to the calling node, MapReduceTask combines all resulting maps into a single map and returns it as the result of the MapReduceTask. The distributed reduce phase is enabled by using a MapReduceTask constructor that specifies the cache to use as input data for the task and sets the boolean parameter distributeReducePhase to true . For more information, see the Map/Reduce section of the Red Hat JBoss Data Grid API Documentation . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/mapreducetask_distributed_execution
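The following is a minimal sketch, in the style of a word count, of how the distributed reduce phase described above is enabled from application code. The Mapper and Reducer implementations and the class name are illustrative assumptions; only the MapReduceTask constructor flag (distributeReducePhase set to true) and the fluent mappedWith/reducedWith/execute calls reflect the API discussed in this section:

import java.io.Serializable;
import java.util.Iterator;
import java.util.Map;

import org.infinispan.Cache;
import org.infinispan.distexec.mapreduce.Collector;
import org.infinispan.distexec.mapreduce.MapReduceTask;
import org.infinispan.distexec.mapreduce.Mapper;
import org.infinispan.distexec.mapreduce.Reducer;

public class WordCountExample {

   // Emits one (word, 1) pair for every word found in a cached value.
   static class WordCountMapper implements Mapper<String, String, String, Integer>, Serializable {
      @Override
      public void map(String key, String value, Collector<String, Integer> collector) {
         for (String word : value.split("\\s+")) {
            collector.emit(word, 1);
         }
      }
   }

   // Sums the values collected for each intermediate KOut key.
   static class WordCountReducer implements Reducer<String, Integer>, Serializable {
      @Override
      public Integer reduce(String reducedKey, Iterator<Integer> iter) {
         int sum = 0;
         while (iter.hasNext()) {
            sum += iter.next();
         }
         return sum;
      }
   }

   public static Map<String, Integer> countWords(Cache<String, String> cache) {
      // true enables the distributed reduce phase described in this section.
      MapReduceTask<String, String, String, Integer> task =
            new MapReduceTask<String, String, String, Integer>(cache, true);
      return task.mappedWith(new WordCountMapper())
                 .reducedWith(new WordCountReducer())
                 .execute();
   }
}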
Chapter 34. Cluster recovery from persistent volumes | You can recover a Kafka cluster from persistent volumes (PVs) if they are still present. 34.1. Cluster recovery scenarios Recovering from PVs is possible in the following scenarios: Unintentional deletion of a namespace Loss of an entire OpenShift cluster while PVs remain in the infrastructure The recovery procedure for both scenarios is to recreate the original PersistentVolumeClaim (PVC) resources. 34.1.1. Recovering from namespace deletion When you delete a namespace, all resources within that namespace, including PVCs, pods, and services, are deleted. If the reclaimPolicy for the PV resource specification is set to Retain , the PV retains its data and is not deleted. This configuration allows you to recover from namespace deletion. PV configuration to retain data apiVersion: v1 kind: PersistentVolume # ... spec: # ... persistentVolumeReclaimPolicy: Retain Alternatively, PVs can inherit the reclaim policy from an associated storage class. Storage classes are used for dynamic volume allocation. By configuring the reclaimPolicy property for the storage class, PVs created with this class use the specified reclaim policy. The storage class is assigned to the PV using the storageClassName property. Storage class configuration to retain data apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # ... # ... reclaimPolicy: Retain Storage class specified for PV apiVersion: v1 kind: PersistentVolume # ... spec: # ... storageClassName: gp2-retain Note When using Retain as the reclaim policy, you must manually delete PVs if you intend to delete the entire cluster. 34.1.2. Recovering from cluster loss If you lose the entire OpenShift cluster, all resources, including PVs, PVCs, and namespaces, are lost. However, it's possible to recover if the physical storage backing the PVs remains intact. To recover, you need to set up a new OpenShift cluster and manually reconfigure the PVs to use the existing storage. 34.2. Recovering a deleted KRaft-based Kafka cluster This procedure describes how to recover a deleted Kafka cluster operating in KRaft mode from persistent volumes (PVs) by recreating the original PersistentVolumeClaim (PVC) resources. If the Topic Operator and User Operator are deployed, you can recover KafkaTopic and KafkaUser resources by recreating them. It is important that you recreate the KafkaTopic resources with the same configurations, or the Topic Operator will try to update them in Kafka. This procedure shows how to recreate both resources. Warning If the User Operator is enabled and Kafka users are not recreated, users are deleted from the Kafka cluster immediately after recovery. Before you begin In this procedure, it is essential that PVs are mounted into the correct PVC to avoid data corruption. A volumeName is specified for the PVC and this must match the name of the PV. For more information, see Section 10.5, "Configuring Kafka storage" . Procedure Check information on the PVs in the cluster: oc get pv Information is presented for PVs with data. Example PV output NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-broker-0 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-broker-1 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-broker-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ...
myproject/data-0-my-cluster-controller-3 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-controller-4 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-controller-5 NAME is the name of each PV. RECLAIMPOLICY shows that PVs are retained, meaning that the PV is not automatically deleted when the PVC is deleted. CLAIM shows the link to the original PVCs. Recreate the original namespace: oc create namespace myproject Here, we recreate the myproject namespace. Recreate the original PVC resource specifications, linking the PVCs to the appropriate PV: Example PVC resource specification apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-broker-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c Edit the PV specifications to delete the claimRef properties that bound the original PVC. Example PV specification apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: "yes" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: "<date>" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea resourceVersion: "39431" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem In the example, the following properties are deleted: claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-broker-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea Deploy the Cluster Operator: oc create -f install/cluster-operator -n myproject Recreate all KafkaTopic resources by applying the KafkaTopic resource configuration: oc apply -f <topic_configuration_file> -n myproject Recreate all KafkaUser resources: If user passwords and certificates need to be retained, recreate the user secrets before recreating the KafkaUser resources. If the secrets are not recreated, the User Operator will generate new credentials automatically. Ensure that the recreated secrets have exactly the same name, labels, and fields as the original secrets. Apply the KafkaUser resource configuration: oc apply -f <user_configuration_file> -n myproject Deploy the Kafka cluster using the original configuration for the Kafka resource. Add the annotation strimzi.io/pause-reconciliation="true" to the original configuration for the Kafka resource, and then deploy the Kafka cluster using the updated configuration. oc apply -f <kafka_resource_configuration>.yaml -n myproject Recover the original clusterId from logs or copies of the Kafka custom resource. 
Otherwise, you can retrieve it from one of the volumes by spinning up a temporary pod. PVC_NAME="data-0-my-cluster-kafka-0" COMMAND="grep cluster.id /disk/kafka-log*/meta.properties | awk -F'=' '{print \USD2}'" oc run tmp -itq --rm --restart "Never" --image "foo" --overrides "{\"spec\": {\"containers\":[{\"name\":\"busybox\",\"image\":\"busybox\",\"command\":[\"/bin/sh\", \"-c\",\"USDCOMMAND\"],\"volumeMounts\":[{\"name\":\"disk\",\"mountPath\":\"/disk\"}]}], \"volumes\":[{\"name\":\"disk\",\"persistentVolumeClaim\":{\"claimName\": \"USDPVC_NAME\"}}]}}" -n myproject Edit the Kafka resource to set the .status.clusterId with the recovered value: oc edit kafka <cluster-name> --subresource status -n myproject Unpause the Kafka resource reconciliation: oc annotate kafka my-cluster strimzi.io/pause-reconciliation=false \ --overwrite -n myproject Verify the recovery of the KafkaTopic resources: oc get kafkatopics -o wide -w -n myproject Kafka topic status NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 True my-topic-3 my-cluster 10 3 True KafkaTopic custom resource creation is successful when the READY output shows True . Verify the recovery of the KafkaUser resources: oc get kafkausers -o wide -w -n myproject Kafka user status NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple True my-user-3 my-cluster tls simple True KafkaUser custom resource creation is successful when the READY output shows True . 34.3. Recovering a deleted ZooKeeper-based Kafka cluster This procedure describes how to recover a deleted Kafka cluster operating in a ZooKeeper-based environment from persistent volumes (PVs) by recreating the original PersistentVolumeClaim (PVC) resources. If the Topic Operator and User Operator are deployed, you can recover KafkaTopic and KafkaUser resources by recreating them. It is important that you recreate the KafkaTopic resources with the same configurations, or the Topic Operator will try to update them in Kafka. This procedure shows how to recreate both resources. Warning If the User Operator is enabled and Kafka users are not recreated, users are deleted from the Kafka cluster immediately after recovery. Before you begin In this procedure, it is essential that PVs are mounted into the correct PVC to avoid data corruption. A volumeName is specified for the PVC and this must match the name of the PV. For more information, see Section 10.5, "Configuring Kafka storage" . Procedure Check information on the PVs in the cluster: oc get pv Information is presented for PVs with data. Example PV output NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2 NAME is the name of each PV. RECLAIMPOLICY shows that PVs are retained, meaning that the PV is not automatically deleted when the PVC is deleted. CLAIM shows the link to the original PVCs. Recreate the original namespace: oc create namespace myproject Here, we recreate the myproject namespace. 
Recreate the original PVC resource specifications, linking the PVCs to the appropriate PV: Example PVC resource specification apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c Edit the PV specifications to delete the claimRef properties that bound the original PVC. Example PV specification apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: "yes" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: "<date>" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: "39431" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem In the example, the following properties are deleted: claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea Deploy the Cluster Operator: oc create -f install/cluster-operator -n myproject Recreate all KafkaTopic resources by applying the KafkaTopic resource configuration: oc apply -f <topic_configuration_file> -n myproject Recreate all KafkaUser resources: If user passwords and certificates need to be retained, recreate the user secrets before recreating the KafkaUser resources. If the secrets are not recreated, the User Operator will generate new credentials automatically. Ensure that the recreated secrets have exactly the same name, labels, and fields as the original secrets. Apply the KafkaUser resource configuration: oc apply -f <user_configuration_file> -n myproject Deploy the Kafka cluster using the original configuration for the Kafka resource. oc apply -f <kafka_resource_configuration>.yaml -n myproject Verify the recovery of the KafkaTopic resources: oc get kafkatopics -o wide -w -n myproject Kafka topic status NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 True my-topic-3 my-cluster 10 3 True KafkaTopic custom resource creation is successful when the READY output shows True . Verify the recovery of the KafkaUser resources: oc get kafkausers -o wide -w -n myproject Kafka user status NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple True my-user-3 my-cluster tls simple True KafkaUser custom resource creation is successful when the READY output shows True . | [
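The claimRef cleanup in the procedures above can be scripted when many PVs are involved. A minimal sketch, assuming the original PVCs lived in the myproject namespace; print and review the selected PV names before patching anything:

# Select retained PVs whose old claim pointed at the myproject namespace and clear their claimRef:
for pv in $(oc get pv -o jsonpath='{range .items[?(@.spec.claimRef.namespace=="myproject")]}{.metadata.name}{"\n"}{end}'); do
  echo "Clearing claimRef on ${pv}"
  oc patch pv "${pv}" --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
done

# The PVs should report an Available status, ready to bind to the recreated PVCs:
oc get pv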
"apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain",
"apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain",
"apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain",
"get pv",
"NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-broker-0 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-broker-1 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-broker-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-controller-3 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-controller-4 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-controller-5",
"create namespace myproject",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-broker-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c",
"apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem",
"claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-broker-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea",
"create -f install/cluster-operator -n myproject",
"apply -f <topic_configuration_file> -n myproject",
"apply -f <user_configuration_file> -n myproject",
"apply -f <kafka_resource_configuration>.yaml -n myproject",
"PVC_NAME=\"data-0-my-cluster-kafka-0\" COMMAND=\"grep cluster.id /disk/kafka-log*/meta.properties | awk -F'=' '{print \\USD2}'\" run tmp -itq --rm --restart \"Never\" --image \"foo\" --overrides \"{\\\"spec\\\": {\\\"containers\\\":[{\\\"name\\\":\\\"busybox\\\",\\\"image\\\":\\\"busybox\\\",\\\"command\\\":[\\\"/bin/sh\\\", \\\"-c\\\",\\\"USDCOMMAND\\\"],\\\"volumeMounts\\\":[{\\\"name\\\":\\\"disk\\\",\\\"mountPath\\\":\\\"/disk\\\"}]}], \\\"volumes\\\":[{\\\"name\\\":\\\"disk\\\",\\\"persistentVolumeClaim\\\":{\\\"claimName\\\": \\\"USDPVC_NAME\\\"}}]}}\" -n myproject",
"edit kafka <cluster-name> --subresource status -n myproject",
"annotate kafka my-cluster strimzi.io/pause-reconciliation=false --overwrite -n myproject",
"get kafkatopics -o wide -w -n myproject",
"NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 True my-topic-3 my-cluster 10 3 True",
"get kafkausers -o wide -w -n myproject",
"NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple True my-user-3 my-cluster tls simple True",
"get pv",
"NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2",
"create namespace myproject",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c",
"apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem",
"claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea",
"create -f install/cluster-operator -n myproject",
"apply -f <topic_configuration_file> -n myproject",
"apply -f <user_configuration_file> -n myproject",
"apply -f <kafka_resource_configuration>.yaml -n myproject",
"get kafkatopics -o wide -w -n myproject",
"NAME CLUSTER PARTITIONS REPLICATION FACTOR READY my-topic-1 my-cluster 10 3 True my-topic-2 my-cluster 10 3 True my-topic-3 my-cluster 10 3 True",
"get kafkausers -o wide -w -n myproject",
"NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple True my-user-3 my-cluster tls simple True"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-cluster-recovery-volume-str |
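The recovery procedure above requires recreated user secrets to match the originals exactly. The following is a minimal, hedged sketch of one way to preserve them, not part of the official procedure: it assumes a KafkaUser named my-user-1 in the myproject namespace, that the secret of the same name still exists when the copy is taken, and that the file names are illustrative.

# Save a copy of the user secret while it still exists;
# by default the secret name matches the KafkaUser name.
oc get secret my-user-1 -n myproject -o yaml > my-user-1-secret.yaml

# Strip runtime-only metadata (uid, resourceVersion, creationTimestamp, ownerReferences)
# from the saved file, then recreate the secret in the target namespace
# before applying the KafkaUser resource so the existing credentials are kept.
oc apply -f my-user-1-secret.yaml -n myproject

# Recreate the KafkaUser resource afterwards.
oc apply -f my-user-1.yaml -n myproject

If the copy is skipped, the User Operator simply generates new credentials for the recreated user, as noted in the procedure.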
10.7. Operation Editor | 10.7. Operation Editor The Operation Editor simplifies editing of Web Service Operation transformations. When editing a Web Service model, an additional editor tab labeled Operation Editor is available. This editor, shown below, consists of:
Operations section showing a tree view of Interfaces and Operations contained within the Web Service model.
Input Variables section providing editing of desired Input Variable declarations.
Procedure section providing SQL editing of the procedure.
Figure 10.32. Operation Editor
The Operations section contains all interfaces and operations currently defined in the model. Selecting an operation displays the variables related to the input parameter's content in the Input Variables section and the body of its procedure (minus the CREATE VIRTUAL PROCEDURE BEGIN - END keywords and the input variable declarations and assignments) in the Procedure section. When pasting in SQL, do not include the CREATE VIRTUAL PROCEDURE BEGIN - END keywords.
Input variables are generated automatically when the Content via Element property is set on an operation's input parameter. Input variables can be edited using the Add or Remove link in the Input Variables section, and may only represent XPath values to single attributes and elements within the input contents; other variable declarations and assignments must be typed directly into the Procedure section. Clicking the Add or Remove link displays the following dialog:
Figure 10.33. Edit Input Variables Dialog | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/operation_editor
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/api_documentation/making-open-source-more-inclusive |
Chapter 9. Dynamic programming languages, web servers, database servers | Chapter 9. Dynamic programming languages, web servers, database servers The following chapter contains the most notable changes to dynamic programming languages, web servers, and database servers between RHEL 8 and RHEL 9.

9.1. Notable changes to dynamic programming languages, web and database servers

Initial Application Streams versions in RHEL 9

RHEL 9 improves the Application Streams experience by providing initial Application Stream versions that can be installed as RPM packages using the traditional dnf install command.

RHEL 9.0 provides the following dynamic programming languages:
Node.js 16
Perl 5.32
PHP 8.0
Python 3.9
Ruby 3.0

RHEL 9.0 includes the following version control systems:
Git 2.31
Subversion 1.14

The following web servers are distributed with RHEL 9.0:
Apache HTTP Server 2.4
nginx 1.20

The following proxy caching servers are available:
Varnish Cache 6.6
Squid 5.2

RHEL 9.0 offers the following database servers:
MariaDB 10.5
MySQL 8.0
PostgreSQL 13
Redis 6.2

Some additional Application Stream versions will be distributed as modules with a shorter life cycle in future minor RHEL 9 releases.

Major differences in the Python ecosystem since RHEL 8

The unversioned python command

The unversioned form of the python command ( /usr/bin/python ) is available in the python-unversioned-command package. On some systems, this package is not installed by default. To install the unversioned form of the python command manually, use the dnf install /usr/bin/python command.

In RHEL 9, the unversioned form of the python command points to the default Python 3.9 version and is equivalent to the python3 and python3.9 commands. In RHEL 9, you cannot configure the unversioned command to point to a different version than Python 3.9.

The python command is intended for interactive sessions. In production, it is recommended to use python3 , python3.9 , or python3.11 explicitly.

You can uninstall the unversioned python command by using the dnf remove /usr/bin/python command.

If you need a different python or python3 command, you can create custom symlinks in /usr/local/bin or ~/.local/bin , or use a Python virtual environment (a brief sketch follows at the end of this chapter).

Several other unversioned commands are available, such as /usr/bin/pip in the python3-pip package. In RHEL 9, all unversioned commands point to the default Python 3.9 version.

Architecture-specific Python wheels

Architecture-specific Python wheels built on RHEL 9 newly adhere to the upstream architecture naming, which allows customers to build their Python wheels on RHEL 9 and install them on non-RHEL systems. Python wheels built on earlier releases of RHEL are forward compatible and can be installed on RHEL 9. Note that this affects only wheels containing Python extensions, which are built for each architecture, not Python wheels with pure Python code, which is not architecture-specific.

Differences between the perl and perl-interpreter packages

RHEL 9 provides both the perl and perl-interpreter packages. The perl package is suitable for development because it contains the full Perl upstream distribution in dependencies, including GCC. On production systems, use the perl-interpreter package, which contains the main /usr/bin/perl interpreter.

Notable changes to libdb

RHEL 8 and RHEL 9 currently provide Berkeley DB ( libdb ) version 5.3.28, which is distributed under the LGPLv2 license. The upstream Berkeley DB version 6 is available under the AGPLv3 license, which is more restrictive.
The libdb package is deprecated as of RHEL 9 and might not be available in future major RHEL releases. Cryptographic algorithms have been removed from libdb in RHEL 9. Multiple libdb dependencies have been removed from RHEL 9. Users of libdb are advised to migrate to a different key-value database. For more information, see the Knowledgebase article Available replacements for the deprecated Berkeley DB (libdb) in RHEL . Tomcat available since RHEL 9.2 RHEL 9.2 introduces the Apache Tomcat server version 9. Tomcat is the servlet container that is used in the official Reference Implementation for the Java Servlet and JavaServer Pages technologies. The Java Servlet and JavaServer Pages specifications are developed by Sun under the Java Community Process. Tomcat is developed in an open and participatory environment and released under the Apache Software License version 2.0. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_dynamic-programming-languages-web-servers-database-servers_considerations-in-adopting-rhel-9 |
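The unversioned python command described in this chapter is meant for interactive use; for anything version-sensitive, an explicit interpreter plus a per-project virtual environment keeps the version unambiguous. A minimal sketch, assuming the RHEL 9.0 defaults listed above (the environment path and the installed package are illustrative):

# Optional: install the unversioned command (points to Python 3.9 on RHEL 9.0)
sudo dnf install /usr/bin/python

# Prefer the explicit interpreter in scripts and automation
python3.9 --version

# Create and activate an isolated environment for a project
python3.9 -m venv ~/venvs/myapp
source ~/venvs/myapp/bin/activate
python --version        # inside the venv, "python" is the venv interpreter
pip install requests    # illustrative package
deactivate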
Data Grid Cross-Site Replication | Data Grid Cross-Site Replication Red Hat Data Grid 8.5 Back up data between Data Grid clusters Red Hat Customer Content Services | [
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\" /> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\" } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\"",
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\"> <take-offline after-failures=\"5\"/> </backup> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\", \"take-offline\" : { \"after-failures\" : \"5\" } } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\" takeOffline: afterFailures: \"5\"",
"<take-offline after-failures=\"-1\" min-wait=\"10000\"/>",
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\"> <take-offline after-failures=\"5\" min-wait=\"15000\"/> </backup> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\", \"take-offline\" : { \"after-failures\" : \"5\", \"min-wait\" : \"15000\" } } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\" takeOffline: afterFailures: \"5\" minWait: \"15000\"",
"LON NYC k1=(n/a) 0,0 0,0 k1=2 1,0 --> 1,0 k1=2 k1=3 1,1 <-- 1,1 k1=3 k1=5 2,1 1,2 k1=8 --> 2,1 (conflict) (conflict) 1,2 <--",
"<infinispan> <jgroups> <stack name=\"xsite\" extends=\"udp\"> <relay.RELAY2 xmlns=\"urn:org:jgroups\" site=\"LON\" max_site_masters=\"1000\"/> <remote-sites default-stack=\"tcp\"> <remote-site name=\"LON\"/> <remote-site name=\"NYC\"/> </remote-sites> </stack> </jgroups> <cache-container> <transport cluster=\"USD{cluster.name}\" stack=\"xsite\"/> </cache-container> </infinispan>",
"<infinispan> <jgroups> <stack name=\"relay-global\" extends=\"tcp\"> <TCPPING initial_hosts=\"192.0.2.0[7800]\" stack.combine=\"REPLACE\" stack.position=\"MPING\"/> </stack> <stack name=\"xsite\" extends=\"udp\"> <relay.RELAY2 site=\"LON\" xmlns=\"urn:org:jgroups\" max_site_masters=\"10\" can_become_site_master=\"true\"/> <remote-sites default-stack=\"relay-global\"> <remote-site name=\"LON\"/> <remote-site name=\"NYC\"/> </remote-sites> </stack> </jgroups> </infinispan>",
"<replicated-cache name=\"customers\"> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" /> </backups> </replicated-cache>",
"{ \"replicated-cache\": { \"name\": \"customers\", \"backups\": { \"NYC\": { \"backup\" : { \"strategy\" : \"ASYNC\" } } } } }",
"replicatedCache: name: \"customers\" backups: NYC: backup: strategy: \"ASYNC\"",
"<distributed-cache name=\"customers\"> <backups> <backup site=\"LON\" strategy=\"ASYNC\" /> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"name\": \"customers\", \"backups\": { \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } } } }",
"distributedCache: name: \"customers\" backups: LON: backup: strategy: \"ASYNC\"",
"<distributed-cache name=\"eu-customers\"> <backups> <backup site=\"LON\" strategy=\"ASYNC\" /> </backups> <backup-for remote-cache=\"customers\" remote-site=\"LON\" /> </distributed-cache>",
"{ \"distributed-cache\": { \"name\": \"eu-customers\", \"backups\": { \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } }, \"backup-for\" : { \"remote-cache\" : \"customers\", \"remote-site\" : \"LON\" } } }",
"distributedCache: name: \"eu-customers\" backups: LON: backup: strategy: \"ASYNC\" backupFor: remoteCache: \"customers\" remoteSite: \"LON\"",
"<distributed-cache name=\"eu-customers\"> <backups> <backup site=\"LON\" strategy=\"ASYNC\"> <state-transfer chunk-size=\"600\" timeout=\"2400000\" max-retries=\"30\" wait-time=\"2000\" mode=\"AUTO\"/> </backup> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"name\": \"eu-customers\", \"backups\": { \"LON\": { \"backup\": { \"strategy\": \"ASYNC\", \"state-transfer\": { \"chunk-size\": \"600\", \"timeout\": \"2400000\", \"max-retries\": \"30\", \"wait-time\": \"2000\", \"mode\": \"AUTO\" } } } } } }",
"distributedCache: name: \"eu-customers\" backups: LON: backup: strategy: \"ASYNC\" stateTransfer: chunkSize: \"600\" timeout: \"2400000\" maxRetries: \"30\" waitTime: \"2000\" mode: \"AUTO\"",
"<distributed-cache> <backups merge-policy=\"ALWAYS_REMOVE\"> <backup site=\"LON\" strategy=\"ASYNC\"/> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"merge-policy\": \"ALWAYS_REMOVE\", \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } } } }",
"distributedCache: backups: mergePolicy: \"ALWAYS_REMOVE\" LON: backup: strategy: \"ASYNC\"",
"<distributed-cache> <backups merge-policy=\"org.mycompany.MyCustomXSiteEntryMergePolicy\"> <backup site=\"LON\" strategy=\"ASYNC\"/> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"merge-policy\": \"org.mycompany.MyCustomXSiteEntryMergePolicy\", \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } } } }",
"distributedCache: backups: mergePolicy: \"org.mycompany.MyCustomXSiteEntryMergePolicy\" LON: backup: strategy: \"ASYNC\"",
"<distributed-cache> <backups tombstone-map-size=\"512000\" max-cleanup-delay=\"30000\"> <backup site=\"LON\" strategy=\"ASYNC\"/> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"tombstone-map-size\": 512000, \"max-cleanup-delay\": 30000, \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } } } }",
"distributedCache: backups: tombstoneMapSize: 512000 maxCleanupDelay: 30000 LON: backup: strategy: \"ASYNC\"",
"INFO [org.infinispan.XSITE] (jgroups-5,<server-hostname>) ISPN000439: Received new x-site view: [NYC] INFO [org.infinispan.XSITE] (jgroups-7,<server-hostname>) ISPN000439: Received new x-site view: [LON, NYC]",
"Servers at the active site infinispan.client.hotrod.server_list = LON_host1:11222,LON_host2:11222,LON_host3:11222 Servers at the backup site infinispan.client.hotrod.cluster.NYC = NYC_hostA:11222,NYC_hostB:11222,NYC_hostC:11222,NYC_hostD:11222",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServers(\"LON_host1:11222;LON_host2:11222;LON_host3:11222\") .addCluster(\"NYC\") .addClusterNodes(\"NYC_hostA:11222;NYC_hostB:11222;NYC_hostC:11222;NYC_hostD:11222\")",
"site status --cache=cacheName --site=NYC",
"site bring-online --cache=customers --site=NYC",
"site take-offline --cache=customers --site=NYC",
"site state-transfer-mode get --cache=cacheName --site=NYC",
"site state-transfer-mode set --cache=cacheName --site=NYC --mode=AUTO",
"site push-site-state --cache=cacheName --site=NYC",
"GET /rest/v2/caches/{cacheName}/x-site/backups/",
"{ \"NYC\": { \"status\": \"online\" }, \"LON\": { \"status\": \"mixed\", \"online\": [ \"NodeA\" ], \"offline\": [ \"NodeB\" ] } }",
"GET /rest/v2/caches/{cacheName}/x-site/backups/{siteName}",
"{ \"NodeA\":\"offline\", \"NodeB\":\"online\" }",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=take-offline",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=bring-online",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=start-push-state",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=cancel-push-state",
"GET /rest/v2/caches/{cacheName}/x-site/backups?action=push-state-status",
"{ \"NYC\":\"CANCELED\", \"LON\":\"OK\" }",
"POST /rest/v2/caches/{cacheName}/x-site/local?action=clear-push-state-status",
"GET /rest/v2/caches/{cacheName}/x-site/backups/{siteName}/take-offline-config",
"{ \"after_failures\": 2, \"min_wait\": 1000 }",
"PUT /rest/v2/caches/{cacheName}/x-site/backups/{siteName}/take-offline-config",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=cancel-receive-state",
"GET /rest/v2/container/x-site/backups/",
"{ \"SFO-3\":{ \"status\":\"online\" }, \"NYC-2\":{ \"status\":\"mixed\", \"online\":[ \"CACHE_1\" ], \"offline\":[ \"CACHE_2\" ], \"mixed\": [ \"CACHE_3\" ] } }",
"GET /rest/v2/container/x-site/backups/{site}",
"POST /rest/v2/container/x-site/backups/{siteName}?action=take-offline",
"POST /rest/v2/container/x-site/backups/{siteName}?action=bring-online",
"GET /rest/v2/caches/{cacheName}/x-site/backups/{site}/state-transfer-mode",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{site}/state-transfer-mode?action=set&mode={mode}",
"POST /rest/v2/container/x-site/backups/{siteName}?action=start-push-state",
"POST /rest/v2/container/x-site/backups/{siteName}?action=cancel-push-state",
"<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\"/> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\" } } } }",
"infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\""
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html-single/data_grid_cross-site_replication/index |
Chapter 3. Managing resource servers | Chapter 3. Managing resource servers According to the OAuth2 specification, a resource server is a server hosting the protected resources and capable of accepting and responding to protected resource requests. In Red Hat build of Keycloak, resource servers are provided with a rich platform for enabling fine-grained authorization for their protected resources, where authorization decisions can be made based on different access control mechanisms. Any client application can be configured to support fine-grained permissions. In doing so, you are conceptually turning the client application into a resource server.

3.1. Creating a client application

The first step to enable Red Hat build of Keycloak Authorization Services is to create the client application that you want to turn into a resource server.

Procedure
Click Clients . Clients
On this page, click Create . Add Client
Type the Client ID of the client. For example, my-resource-server .
Type the Root URL for your application. For example: http://${host}:${port}/my-resource-server
Click Save . The client is created and the client Settings page opens. A page similar to the following is displayed: Client Settings

3.2. Enabling authorization services

You can turn your OIDC client into a resource server and enable fine-grained authorization.

Procedure
Toggle Authorization Enabled to On .
Click Save . Enabling authorization services

A new Authorization tab is displayed for this client. Click the Authorization tab and a page similar to the following is displayed: Resource server settings

(A scripted sketch of these two procedures appears after this chapter's command listing.)

The Authorization tab contains additional sub-tabs covering the different steps that you must follow to actually protect your application's resources. Each tab is covered separately by a specific topic in this documentation. Here is a quick description of each one:

Settings - General settings for your resource server. For more details about this page, see the Resource Server Settings section.
Resources - From this page, you can manage your application's resources .
Authorization Scopes - From this page, you can manage scopes .
Policies - From this page, you can manage authorization policies and define the conditions that must be met to grant a permission.
Permissions - From this page, you can manage the permissions for your protected resources and scopes by linking them with the policies you created.
Evaluate - From this page, you can simulate authorization requests and view the result of the evaluation of the permissions and authorization policies you have defined.
Export Settings - From this page, you can export the authorization settings to a JSON file.

3.2.1. Resource server settings

On the Resource Server Settings page, you can configure the policy enforcement mode, allow remote resource management, and export the authorization configuration settings.

Policy Enforcement Mode - Specifies how policies are enforced when processing authorization requests sent to the server.
Enforcing (default mode) - Requests are denied by default even when there is no policy associated with a given resource.
Permissive - Requests are allowed even when there is no policy associated with a given resource.
Disabled - Disables the evaluation of all policies and allows access to all resources.

Decision Strategy - This configuration changes how the policy evaluation engine decides whether or not a resource or scope should be granted based on the outcome from all evaluated permissions.
Affirmative means that at least one permission must evaluate to a positive decision in order to grant access to a resource and its scopes. Unanimous means that all permissions must evaluate to a positive decision in order for the final decision to be also positive. As an example, if two permissions for the same resource or scope are in conflict (one of them is granting access and the other is denying access), the permission to the resource or scope will be granted if the chosen strategy is Affirmative . Otherwise, a single deny from any permission will also deny access to the resource or scope.

Remote Resource Management - Specifies whether resources can be managed remotely by the resource server. If false, resources can be managed only from the administration console.

3.3. Default Configuration

When you create a resource server, Red Hat build of Keycloak creates a default configuration for your newly created resource server. The default configuration consists of:
A default protected resource representing all resources in your application.
A policy that always grants access to the resources protected by this policy.
A permission that governs access to all resources based on the default policy.

The default protected resource is referred to as the default resource and you can view it if you navigate to the Resources tab. Default resource

This resource defines a Type , namely urn:my-resource-server:resources:default and a URI /* . Here, the URI field defines a wildcard pattern that indicates to Red Hat build of Keycloak that this resource represents all the paths in your application. In other words, when enabling policy enforcement for your application, all the permissions associated with the resource will be examined before granting access. The Type mentioned previously defines a value that can be used to create typed resource permissions that must be applied to the default resource or any other resource you create using the same type.

The default policy is referred to as the only from realm policy and you can view it if you navigate to the Policies tab. Default policy

This policy is a JavaScript-based policy defining a condition that always grants access to the resources protected by this policy. If you click this policy you can see that it defines a rule as follows:

// by default, grants any permission associated with this policy
$evaluation.grant();

Lastly, the default permission is referred to as the default permission and you can view it if you navigate to the Permissions tab. Default Permission

This permission is a resource-based permission , defining a set of one or more policies that are applied to all resources with a given type.

3.3.1. Changing the default configuration

You can change the default configuration by removing the default resource, policy, or permission definitions and creating your own. The default resource is created with a URI that maps to any resource or path in your application using a /* pattern. Before creating your own resources, permissions and policies, make sure the default configuration doesn't conflict with your own settings.

Note: The default configuration defines a resource that maps to all paths in your application. If you are about to write permissions to your own resources, be sure to remove the Default Resource or change its URIs field to more specific paths in your application.
Otherwise, the policy associated with the default resource (which by default always grants access) will allow Red Hat build of Keycloak to grant access to any protected resource. 3.4. Export and import authorization configuration The configuration settings for a resource server (or client) can be exported and downloaded. You can also import an existing configuration file for a resource server. Importing and exporting a configuration file is helpful when you want to create an initial configuration for a resource server or to update an existing configuration. The configuration file contains definitions for: Protected resources and scopes Policies Permissions 3.4.1. Exporting a configuration file Procedure Click Clients in the menu. Click the client you created as a resource server. Click the Export tab. Export Settings The configuration file is exported in JSON format and displayed in a text area, from which you can copy and paste. You can also click Download to download the configuration file and save it. 3.4.2. Importing a configuration file You can import a configuration file for a resource server. Procedure Navigate to the Resource Server Settings page. Import Settings Click Import and choose a file containing the configuration that you want to import. | [
"http://USD{host}:USD{port}/my-resource-server",
"// by default, grants any permission associated with this policy USDevaluation.grant();"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/authorization_services_guide/resource_server_overview |
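The console procedures in this chapter can also be scripted against the admin REST API. The following is a hedged sketch rather than a documented procedure: it assumes an admin access token is already exported as TOKEN, a realm named myrealm, and a server at localhost:8080; the field names follow the standard client representation, and authorizationServicesEnabled corresponds to the Authorization Enabled toggle.

# Create a confidential OIDC client with Authorization Services enabled,
# which turns it into a resource server in a single call.
curl -s -X POST "http://localhost:8080/admin/realms/myrealm/clients" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "clientId": "my-resource-server",
        "rootUrl": "http://localhost:8080/my-resource-server",
        "protocol": "openid-connect",
        "publicClient": false,
        "serviceAccountsEnabled": true,
        "authorizationServicesEnabled": true
      }'

Authorization Services require a confidential client with a service account, which is why publicClient is false and serviceAccountsEnabled is true in this sketch.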
Chapter 3. Configuring the Collector | Chapter 3. Configuring the Collector 3.1. Configuring the Collector The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat build of OpenTelemetry resources. You can install the default configuration or modify the file. 3.1.1. OpenTelemetry Collector configuration options The OpenTelemetry Collector consists of five types of components that access telemetry data: Receivers Processors Exporters Connectors Extensions You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, only enable the components that you need. Example of the OpenTelemetry Collector custom resource file apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus] 1 If a component is configured but not defined in the service section, the component is not enabled. Table 3.1. Parameters used by the Operator to define the OpenTelemetry Collector Parameter Description Values Default A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. otlp , jaeger , prometheus , zipkin , kafka , opencensus None Processors run through the received data before it is exported. By default, no processors are enabled. batch , memory_limiter , resourcedetection , attributes , span , k8sattributes , filter , routing None An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. otlp , otlphttp , debug , prometheus , kafka None Connectors join pairs of pipelines by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers. Connectors can be used to summarize, replicate, or route consumed data. spanmetrics None Optional components for tasks that do not involve processing telemetry data. bearertokenauth , oauth2client , jaegerremotesampling , pprof , health_check , memory_ballast , zpages None Components are enabled by adding them to a pipeline under services.pipeline . You enable receivers for tracing by adding them under service.pipelines.traces . None You enable processors for tracing by adding them under service.pipelines.traces . None You enable exporters for tracing by adding them under service.pipelines.traces . 
None You enable receivers for metrics by adding them under service.pipelines.metrics . None You enable processors for metircs by adding them under service.pipelines.metrics . None You enable exporters for metrics by adding them under service.pipelines.metrics . None 3.1.2. Creating the required RBAC resources automatically Some Collector components require configuring the RBAC resources. Procedure Add the following permissions to the opentelemetry-operator-controller-manage service account so that the Red Hat build of OpenTelemetry Operator can create them automatically: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator 3.2. Receivers Receivers get data into the Collector. A receiver can be push or pull based. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources. Currently, the following General Availability and Technology Preview receivers are available for the Red Hat build of OpenTelemetry: OTLP Receiver Jaeger Receiver Host Metrics Receiver Kubernetes Objects Receiver Kubelet Stats Receiver Prometheus Receiver OTLP JSON File Receiver Zipkin Receiver Kafka Receiver Kubernetes Cluster Receiver OpenCensus Receiver Filelog Receiver Journald Receiver Kubernetes Events Receiver 3.2.1. OTLP Receiver The OTLP Receiver ingests traces, metrics, and logs by using the OpenTelemetry Protocol (OTLP). The OTLP Receiver ingests traces and metrics using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with an enabled OTLP Receiver # ... config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp] # ... 1 The OTLP gRPC endpoint. If omitted, the default 0.0.0.0:4317 is used. 2 The server-side TLS configuration. Defines paths to TLS certificates. If omitted, the TLS is disabled. 3 The path to the TLS certificate at which the server verifies a client certificate. This sets the value of ClientCAs and ClientAuth to RequireAndVerifyClientCert in the TLSConfig . For more information, see the Config of the Golang TLS package . 4 Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns , us (or ms ), ms , s , m , h . 5 The OTLP HTTP endpoint. The default value is 0.0.0.0:4318 . 6 The server-side TLS configuration. For more information, see the grpc protocol configuration section. 3.2.2. Jaeger Receiver The Jaeger Receiver ingests traces in the Jaeger formats. 
OpenTelemetry Collector custom resource with an enabled Jaeger Receiver # ... config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger] # ... 1 The Jaeger gRPC endpoint. If omitted, the default 0.0.0.0:14250 is used. 2 The Jaeger Thrift HTTP endpoint. If omitted, the default 0.0.0.0:14268 is used. 3 The Jaeger Thrift Compact endpoint. If omitted, the default 0.0.0.0:6831 is used. 4 The Jaeger Thrift Binary endpoint. If omitted, the default 0.0.0.0:6832 is used. 5 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3.2.3. Host Metrics Receiver The Host Metrics Receiver ingests metrics in the OTLP format. OpenTelemetry Collector custom resource with an enabled Host Metrics Receiver apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> # ... --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected # ... --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics] # ... 1 Sets the time interval for host metrics collection. If omitted, the default value is 1m . 2 Sets the initial time delay for host metrics collection. If omitted, the default value is 1s . 3 Configures the root_path so that the Host Metrics Receiver knows where the root filesystem is. If running multiple instances of the Host Metrics Receiver, set the same root_path value for each instance. 4 Lists the enabled host metrics scrapers. Available scrapers are cpu , disk , load , filesystem , memory , network , paging , processes , and process . 3.2.4. Kubernetes Objects Receiver The Kubernetes Objects Receiver pulls or watches objects to be collected from the Kubernetes API server. This receiver watches primarily Kubernetes events, but it can collect any type of Kubernetes objects. This receiver gathers telemetry for the cluster as a whole, so only one instance of this receiver suffices for collecting all the data. Important The Kubernetes Objects Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Kubernetes Objects Receiver apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - "" resources: - events - pods verbs: - get - list - watch - apiGroups: - "events.k8s.io" resources: - events verbs: - watch - list # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io # ... --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug] # ... 1 The Resource name that this receiver observes: for example, pods , deployments , or events . 2 The observation mode that this receiver uses: pull or watch . 3 Only applicable to the pull mode. The request interval for pulling an object. If omitted, the default value is 1h . 4 The label selector to define targets. 5 The field selector to filter targets. 6 The list of namespaces to collect events from. If omitted, the default value is all . 3.2.5. Kubelet Stats Receiver The Kubelet Stats Receiver extracts metrics related to nodes, pods, containers, and volumes from the kubelet's API server. These metrics are then channeled through the metrics-processing pipeline for additional analysis. OpenTelemetry Collector custom resource with an enabled Kubelet Stats Receiver # ... config: receivers: kubeletstats: collection_interval: 20s auth_type: "serviceAccount" endpoint: "https://USD{env:K8S_NODE_NAME}:10250" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName # ... 1 Sets the K8S_NODE_NAME to authenticate to the API. The Kubelet Stats Receiver requires additional permissions for the service account used for running the OpenTelemetry Collector. Permissions required by the service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [""] resources: ["nodes/proxy"] 1 verbs: ["get"] # ... 1 The permissions required when using the extra_metadata_labels or request_utilization or limit_utilization metrics. 3.2.6. Prometheus Receiver The Prometheus Receiver scrapes the metrics endpoints. Important The Prometheus Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Prometheus Receiver # ... config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus] # ... 1 Scrapes configurations using the Prometheus format. 2 The Prometheus job name. 3 The lnterval for scraping the metrics data. Accepts time units. The default value is 1m . 4 The targets at which the metrics are exposed. This example scrapes the metrics from a my-app application in the example project. 3.2.7. OTLP JSON File Receiver The OTLP JSON File Receiver extracts pipeline information from files containing data in the ProtoJSON format and conforming to the OpenTelemetry Protocol specification. The receiver watches a specified directory for changes such as created or modified files to process. Important The OTLP JSON File Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled OTLP JSON File Receiver # ... config: otlpjsonfile: include: - "/var/log/*.log" 1 exclude: - "/var/log/test.log" 2 # ... 1 The list of file path glob patterns to watch. 2 The list of file path glob patterns to ignore. 3.2.8. Zipkin Receiver The Zipkin Receiver ingests traces in the Zipkin v1 and v2 formats. OpenTelemetry Collector custom resource with the enabled Zipkin Receiver # ... config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin] # ... 1 The Zipkin HTTP endpoint. If omitted, the default 0.0.0.0:9411 is used. 2 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3.2.9. Kafka Receiver The Kafka Receiver receives traces, metrics, and logs from Kafka in the OTLP format. Important The Kafka Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kafka Receiver # ... config: receivers: kafka: brokers: ["localhost:9092"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka] # ... 1 The list of Kafka brokers. The default is localhost:9092 . 2 The Kafka protocol version. For example, 2.0.0 . 
This is a required field. 3 The name of the Kafka topic to read from. The default is otlp_spans . 4 The plain text authentication configuration. If omitted, plain text authentication is disabled. 5 The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled. 6 Disables verifying the server's certificate chain and host name. The default is false . 7 ServerName indicates the name of the server requested by the client to support virtual hosting. 3.2.10. Kubernetes Cluster Receiver The Kubernetes Cluster Receiver gathers cluster metrics and entity events from the Kubernetes API server. It uses the Kubernetes API to receive information about updates. Authentication for this receiver is only supported through service accounts. Important The Kubernetes Cluster Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kubernetes Cluster Receiver # ... config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug] # ... This receiver requires a configured service account, RBAC rules for the cluster role, and the cluster role binding that binds the RBAC with the service account. ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol # ... RBAC rules for the ClusterRole object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - "" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch # ... ClusterRoleBinding object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default # ... 3.2.11. OpenCensus Receiver The OpenCensus Receiver provides backwards compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format via gRPC or HTTP and Json. OpenTelemetry Collector custom resource with the enabled OpenCensus Receiver # ... 
config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus] # ... 1 The OpenCensus endpoint. If omitted, the default is 0.0.0.0:55678 . 2 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3 You can also use the HTTP JSON endpoint to optionally configure CORS, which is enabled by specifying a list of allowed CORS origins in this field. Wildcards with * are accepted under the cors_allowed_origins . To match any origin, enter only * . 3.2.12. Filelog Receiver The Filelog Receiver tails and parses logs from files. Important The Filelog Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Filelog Receiver that tails a text file # ... config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev # ... 1 A list of file glob patterns that match the file paths to be read. 2 An array of Operators. Each Operator performs a simple task such as parsing a timestamp or JSON. To process logs into a desired format, chain the Operators together. 3.2.13. Journald Receiver The Journald Receiver parses journald events from the systemd journal and sends them as logs. Important The Journald Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Journald Receiver apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: "false" pod-security.kubernetes.io/enforce: "privileged" pod-security.kubernetes.io/audit: "privileged" pod-security.kubernetes.io/warn: "privileged" # ... --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald # ... 
--- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule # ... 1 Filters output by message priorities or priority ranges. The default value is info . 2 Lists the units to read entries from. If empty, entries are read from all units. 3 Includes very long logs and logs with unprintable characters. The default value is false . 4 If set to true , the receiver pauses reading a file and attempts to resend the current batch of logs when encountering an error from downstream components. The default value is false . 5 The time interval to wait after the first failure before retrying. The default value is 1s . The units are ms , s , m , h . 6 The upper bound for the retry backoff interval. When this value is reached, the time interval between consecutive retry attempts remains constant at this value. The default value is 30s . The supported units are ms , s , m , h . 7 The maximum time interval, including retry attempts, for attempting to send a logs batch to a downstream consumer. When this value is reached, the data are discarded. If the set value is 0 , retrying never stops. The default value is 5m . The supported units are ms , s , m , h . 3.2.14. Kubernetes Events Receiver The Kubernetes Events Receiver collects events from the Kubernetes API server. The collected events are converted into logs. Important The Kubernetes Events Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
OpenShift Container Platform permissions required for the Kubernetes Events Receiver apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - "" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch # ... OpenTelemetry Collector custom resource with the enabled Kubernetes Event Receiver # ... serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events] # ... 1 The service account of the Collector that has the required ClusterRole otel-collector RBAC. 2 The list of namespaces to collect events from. The default value is empty, which means that all namespaces are collected. 3.2.15. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.3. Processors Processors process the data between it is received and exported. Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters. Currently, the following General Availability and Technology Preview processors are available for the Red Hat build of OpenTelemetry: Batch Processor Memory Limiter Processor Resource Detection Processor Attributes Processor Resource Processor Span Processor Kubernetes Attributes Processor Filter Processor Routing Processor Cumulative-to-Delta Processor Group-by-Attributes Processor Transform Processor 3.3.1. Batch Processor The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information. Example of the OpenTelemetry Collector custom resource when using the Batch Processor # ... config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch] # ... Table 3.2. Parameters used by the Batch Processor Parameter Description Default timeout Sends the batch after a specific time duration and irrespective of the batch size. 200ms send_batch_size Sends the batch of telemetry data after the specified number of spans or metrics. 8192 send_batch_max_size The maximum allowable size of the batch. Must be equal or greater than the send_batch_size . 0 metadata_keys When activated, a batcher instance is created for each unique set of values found in the client.Metadata . [] metadata_cardinality_limit When the metadata_keys are populated, this configuration restricts the number of distinct metadata key-value combinations processed throughout the duration of the process. 1000 3.3.2. Memory Limiter Processor The Memory Limiter Processor periodically checks the Collector's memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. 
The preceding component, which is typically a receiver, is expected to retry sending the same data and may apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run. Example of the OpenTelemetry Collector custom resource when using the Memory Limiter Processor # ... config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [memory_limiter] metrics: processors: [memory_limiter] # ... Table 3.3. Parameters used by the Memory Limiter Processor Parameter Description Default check_interval Time between memory usage measurements. The optimal value is 1s . For spiky traffic patterns, you can decrease the check_interval or increase the spike_limit_mib . 0s limit_mib The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value. 0 spike_limit_mib Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib . To calculate the soft limit, subtract the spike_limit_mib from the limit_mib ; for example, with limit_mib: 4000 and spike_limit_mib: 800 , the soft limit is 3200 MiB. 20% of limit_mib limit_percentage Same as the limit_mib but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting. 0 spike_limit_percentage Same as the spike_limit_mib but expressed as a percentage of the total available memory. Intended to be used with the limit_percentage setting. 0 3.3.3. Resource Detection Processor The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry's resource semantic conventions. Using the detected information, this processor can add or replace the resource values in telemetry data. This processor supports traces and metrics. You can use this processor with multiple detectors such as the Docker metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector. Important The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform permissions required for the Resource Detection Processor kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] # ... OpenTelemetry Collector using the Resource Detection Processor # ... config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection] # ... OpenTelemetry Collector using the Resource Detection Processor with an environment variable detector # ... config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false # ... 1 Specifies which detector to use. In this example, the environment detector is specified. 3.3.4. Attributes Processor The Attributes Processor can modify attributes of a span, log, or metric.
You can configure this processor to filter and match input data and include or exclude such data for specific actions. Important The Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported: Insert Inserts a new attribute into the input data when the specified key does not already exist. Update Updates an attribute in the input data if the key already exists. Upsert Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists. Delete Removes an attribute from the input data. Hash Hashes an existing attribute value as SHA1. Extract Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span Processor's to_attributes setting with the existing attribute as the source. Convert Converts an existing attribute to a specified type. OpenTelemetry Collector using the Attributes Processor # ... config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int # ... 3.3.5. Resource Processor The Resource Processor applies changes to the resource attributes. This processor supports traces, metrics, and logs. Important The Resource Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector using the Resource Processor # ... config: processors: resource: attributes: - key: cloud.availability_zone value: "zone-1" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete # ... The attributes field lists the actions that are applied to the resource attributes, such as deleting, inserting, or upserting an attribute. 3.3.6. Span Processor The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces. Span renaming requires specifying attributes for the new name by using the from_attributes configuration.
Important The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector using the Span Processor for renaming a span # ... config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2 # ... 1 Defines the keys to form the new span name. 2 An optional separator. You can use this processor to extract attributes from the span name. OpenTelemetry Collector using the Span Processor for extracting attributes from a span name # ... config: processors: span/to_attributes: name: to_attributes: rules: - ^\/api\/v1\/document\/(?P<documentId>.*)\/update$ 1 # ... 1 This rule defines how the extraction is to be executed. You can define more rules: for example, in this case, if the regular expression matches the name, a documentId attribute is created. In this example, if the input span name is /api/v1/document/12345678/update , this results in the /api/v1/document/{documentId}/update output span name, and a new "documentId"="12345678" attribute is added to the span. You can have the span status modified. OpenTelemetry Collector using the Span Processor for status change # ... config: processors: span/set_status: status: code: Error description: "<error_description>" # ... 3.3.7. Kubernetes Attributes Processor The Kubernetes Attributes Processor enables automatic configuration of spans, metrics, and log resource attributes by using the Kubernetes metadata. This processor supports traces, metrics, and logs. This processor automatically identifies the Kubernetes resources, extracts the metadata from them, and incorporates this extracted metadata as resource attributes into relevant spans, metrics, and logs. It utilizes the Kubernetes API to discover all pods operating within a cluster, maintaining records of their IP addresses, pod UIDs, and other relevant metadata. Minimum OpenShift Container Platform permissions required for the Kubernetes Attributes Processor kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list'] # ... OpenTelemetry Collector using the Kubernetes Attributes Processor # ... config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME # ... 3.3.8. Filter Processor The Filter Processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator. This processor supports traces, metrics, and logs. Important The Filter Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Filter Processor # ... config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes["container.name"] == "app_container_1"' 2 - 'resource.attributes["host.name"] == "localhost"' 3 # ... 1 Defines the error mode. When set to ignore , ignores errors returned by conditions. When set to propagate , returns the error up the pipeline. An error causes the payload to be dropped from the Collector. 2 Filters the spans that have the container.name == app_container_1 attribute. 3 Filters the spans that have the host.name == localhost resource attribute. 3.3.9. Routing Processor The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request or read a resource attribute, and then direct the trace information to relevant exporters according to the read value. Important The Routing Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Routing Processor # ... config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250 # ... 1 The HTTP header name for the lookup value when performing the route. 2 The default exporter when the attribute value is not present in the table. 3 The table that defines which values are to be routed to which exporters. Optionally, you can create an attribute_source configuration, which defines where to look for the attribute that you specify in the from_attribute field. The supported values are context for searching the context including the HTTP headers, and resource for searching the resource attributes. 3.3.10. Cumulative-to-Delta Processor The Cumulative-to-Delta Processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics. You can filter metrics by using the include: or exclude: fields and specifying the strict or regexp metric name matching. This processor does not convert non-monotonic sums and exponential histograms. Important The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor # ...
config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - "<regular_expression_for_metric_names>" # ... 1 Optional: Configures which metrics to include. When omitted, all metrics, except for those listed in the exclude field, are converted to delta metrics. 2 Defines a value provided in the metrics field as a strict exact match or regexp regular expression. 3 Lists the metric names, which are exact matches or matches for regular expressions, of the metrics to be converted to delta metrics. If a metric matches both the include and exclude filters, the exclude filter takes precedence. 4 Optional: Configures which metrics to exclude. When omitted, no metrics are excluded from conversion to delta metrics. 3.3.11. Group-by-Attributes Processor The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes. Important The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example: # ... config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2> # ... 1 Specifies attribute keys to group by. 2 If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated. 3.3.12. Transform Processor The Transform Processor enables modification of telemetry data according to specified rules and in the OpenTelemetry Transformation Language (OTTL) . For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL Context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate if a function is to be executed. All statements are written in the OTTL. You can configure multiple context statements for different signals, traces, metrics, and logs. The value of the context type specifies which OTTL Context the processor must use when interpreting the associated statements. Important The Transform Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Configuration summary # ... config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string> # ... 1 Optional: See the following table "Values for the optional error_mode field". 2 Indicates a signal to be transformed. 3 See the following table "Values for the context field". 4 Optional: Conditions for performing a transformation. 5 Transformation statements, written in the OTTL, that the processor applies to the matching telemetry data. Configuration example # ... config: processors: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"]) 2 - replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes["http.path"] == "/health" - set(name, attributes["http.route"]) - replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}") - limit(attributes, 100, []) - truncate_all(attributes, 4096) # ... 1 Transforms a trace signal. 2 Keeps keys on the resources. 3 Replaces attributes and replaces string characters in password fields with asterisks. 4 Performs transformations at the span level. Table 3.4. Values for the context field Signal Statement Valid Contexts trace_statements resource , scope , span , spanevent metric_statements resource , scope , metric , datapoint log_statements resource , scope , log Table 3.5. Values for the optional error_mode field Value Description ignore Ignores and logs errors returned by statements and then continues to the next statement. silent Ignores and does not log errors returned by statements and then continues to the next statement. propagate Returns errors up the pipeline and drops the payload. Implicit default. 3.3.13. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.4. Exporters Exporters send data to one or more back ends or destinations. An exporter can be push or pull based. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings. Currently, the following General Availability and Technology Preview exporters are available for the Red Hat build of OpenTelemetry: OTLP Exporter OTLP HTTP Exporter Debug Exporter Load Balancing Exporter Prometheus Exporter Prometheus Remote Write Exporter Kafka Exporter AWS CloudWatch Logs Exporter AWS EMF Exporter AWS X-Ray Exporter File Exporter 3.4.1. OTLP Exporter The OTLP gRPC Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with the enabled OTLP Exporter # ...
config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: "dev" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp] # ... 1 The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls . 2 The client-side TLS configuration. Defines paths to TLS certificates. 3 Disables client transport security when set to true . The default value is false . 4 Skips verifying the certificate when set to true . The default value is false . 5 Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval accepts a string containing valid units of time such as ns , us (or µs ), ms , s , m , h . 6 Overrides the virtual host name of authority such as the authority header field in requests. You can use this for testing. 7 Headers are sent for every request performed during an established connection. 3.4.2. OTLP HTTP Exporter The OTLP HTTP Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with the enabled OTLP HTTP Exporter # ... config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: "dev" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp] # ... 1 The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls . 2 The client-side TLS configuration. Defines paths to TLS certificates. 3 Headers are sent in every HTTP request. 4 If set to true , disables HTTP keep-alives, and the connection to the server is used for a single HTTP request only. 3.4.3. Debug Exporter The Debug Exporter prints traces and metrics to the standard output. OpenTelemetry Collector custom resource with the enabled Debug Exporter # ... config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] # ... 1 Verbosity of the debug export: detailed , normal , or basic . When set to detailed , pipeline data are verbosely logged. Defaults to normal . 2 Initial number of messages logged per second. The default value is 2 messages per second. 3 Sampling rate that applies after the initial number of messages, set in sampling_initial , has been logged. Disabled by default with the default value of 1 . Sampling is enabled with values greater than 1 . For more information, see the page for the sampler function in the zapcore package on the Go Project's website. 4 When set to true , enables output from the Collector's internal logger for the exporter. 3.4.4. Load Balancing Exporter The Load Balancing Exporter consistently exports spans, metrics, and logs according to the routing_key configuration. Important The Load Balancing Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Load Balancing Exporter # ... config: exporters: loadbalancing: routing_key: "service" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317 # ... 1 The routing_key: service exports spans for the same service name to the same Collector instance to provide accurate aggregation. The routing_key: traceID exports spans based on their traceID . The implicit default is traceID based routing. 2 The OTLP is the only supported load-balancing protocol. All options of the OTLP exporter are supported. 3 You can configure only one resolver. 4 The static resolver distributes the load across the listed endpoints. 5 You can use the DNS resolver only with a Kubernetes headless service. 6 The Kubernetes resolver is recommended. 3.4.5. Prometheus Exporter The Prometheus Exporter exports metrics in the Prometheus or OpenMetrics formats. Important The Prometheus Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Prometheus Exporter # ... config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus] # ... 1 The network endpoint where the metrics are exposed. The Red Hat build of OpenTelemetry Operator automatically exposes the port specified in the endpoint field to the <instance_name>-collector service. 2 The server-side TLS configuration. Defines paths to TLS certificates. 3 If set, exports metrics under the provided value. 4 Key-value pair labels that are applied for every exported metric. 5 If true , metrics are exported by using the OpenMetrics format. Exemplars are only exported in the OpenMetrics format and only for histogram and monotonic sum metrics such as counter . Disabled by default. 6 If enabled is true , all the resource attributes are converted to metric labels. Disabled by default. 7 Defines how long metrics are exposed without updates. The default is 5m . 8 Adds the metrics types and units suffixes. Must be disabled if the monitor tab in the Jaeger console is enabled. The default is true . Note When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true , the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics. 3.4.6. 
Prometheus Remote Write Exporter The Prometheus Remote Write Exporter exports metrics to compatible back ends. Important The Prometheus Remote Write Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Exporter # ... config: exporters: prometheusremotewrite: endpoint: "https://my-prometheus:7900/api/v1/push" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite] # ... 1 Endpoint for sending the metrics. 2 Server-side TLS configuration. Defines paths to TLS certificates. 3 When set to true , creates a target_info metric for each resource metric. 4 When set to true , exports a _created metric for the Summary, Histogram, and Monotonic Sum metric points. 5 Maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is 3000000 , which is approximately 2.861 megabytes. Warning This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics. You must enable the --web.enable-remote-write-receiver feature flag on the remote Prometheus instance. Without it, pushing the metrics to the instance using this exporter fails. 3.4.7. Kafka Exporter The Kafka Exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. You must use it with batch and queued retry processors for higher throughput and resiliency. Important The Kafka Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kafka Exporter # ... config: exporters: kafka: brokers: ["localhost:9092"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka] # ... 1 The list of Kafka brokers. The default is localhost:9092 . 2 The Kafka protocol version. For example, 2.0.0 . This is a required field. 3 The name of the Kafka topic to read from. The following are the defaults: otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs. 4 The plain text authentication configuration. If omitted, plain text authentication is disabled. 5 The client-side TLS configuration. Defines paths to the TLS certificates. 
If omitted, TLS authentication is disabled. 6 Disables verifying the server's certificate chain and host name. The default is false . 7 ServerName indicates the name of the server requested by the client to support virtual hosting. 3.4.8. AWS CloudWatch Logs Exporter The AWS CloudWatch Logs Exporter sends logs data to the Amazon CloudWatch Logs service and signs requests by using the AWS SDK for Go and the default credential provider chain. Important The AWS CloudWatch Logs Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS CloudWatch Logs Exporter # ... config: exporters: awscloudwatchlogs: log_group_name: "<group_name_of_amazon_cloudwatch_logs>" 1 log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5 # ... 1 Required. If the log group does not exist yet, it is automatically created. 2 Required. If the log stream does not exist yet, it is automatically created. 3 Optional. If the AWS region is not already set in the default credential chain, you must specify it. 4 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 5 Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0 , the logs never expire by default. Supported values for retention in days are 1 , 3 , 5 , 7 , 14 , 30 , 60 , 90 , 120 , 150 , 180 , 365 , 400 , 545 , 731 , 1827 , 2192 , 2557 , 2922 , 3288 , or 3653 . Additional resources What is Amazon CloudWatch Logs? (Amazon CloudWatch Logs User Guide) Specifying Credentials (AWS SDK for Go Developer Guide) Amazon CloudWatch Logs endpoints and quotas (AWS General Reference) 3.4.9. AWS EMF Exporter The AWS EMF Exporter converts the following OpenTelemetry metrics datapoints to the AWS CloudWatch Embedded Metric Format (EMF): Int64DataPoints DoubleDataPoints SummaryDataPoints The EMF metrics are then sent directly to the Amazon CloudWatch Logs service by using the PutLogEvents API. One of the benefits of using this exporter is the possibility to view logs and metrics in the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . Important The AWS EMF Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
OpenTelemetry Collector custom resource with the enabled AWS EMF Exporter # ... config: exporters: awsemf: log_group_name: "<group_name_of_amazon_cloudwatch_logs>" 1 log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7 # ... 1 Customized log group name. 2 Customized log stream name. 3 Optional. Converts resource attributes to telemetry attributes such as metric labels. Disabled by default. 4 The AWS region of the log stream. If a region is not already set in the default credential provider chain, you must specify the region. 5 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 6 Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0 , the logs never expire by default. Supported values for retention in days are 1 , 3 , 5 , 7 , 14 , 30 , 60 , 90 , 120 , 150 , 180 , 365 , 400 , 545 , 731 , 1827 , 2192 , 2557 , 2922 , 3288 , or 3653 . 7 Optional. A custom namespace for the Amazon CloudWatch metrics. Log group name The log_group_name parameter allows you to customize the log group name and supports the default /metrics/default value or the following placeholders: /aws/metrics/{ClusterName} This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute in the metrics data and replace it with the actual cluster name. {NodeName} This placeholder is used to search for the NodeName or k8s.node.name resource attribute. {TaskId} This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute. If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value. Log stream name The log_stream_name parameter allows you to customize the log stream name and supports the default otel-stream value or the following placeholders: {ClusterName} This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute. {ContainerInstanceId} This placeholder is used to search for the ContainerInstanceId or aws.ecs.container.instance.id resource attribute. This resource attribute is valid only for the AWS ECS EC2 launch type. {NodeName} This placeholder is used to search for the NodeName or k8s.node.name resource attribute. {TaskDefinitionFamily} This placeholder is used to search for the TaskDefinitionFamily or aws.ecs.task.family resource attribute. {TaskId} This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute in the metrics data and replace it with the actual task ID. If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value. Additional resources Specification: Embedded metric format (Amazon CloudWatch User Guide) PutLogEvents (Amazon CloudWatch Logs API Reference) Amazon CloudWatch Logs endpoints and quotas (AWS General Reference) 3.4.10. AWS X-Ray Exporter The AWS X-Ray Exporter converts OpenTelemetry spans to AWS X-Ray Segment Documents and then sends them directly to the AWS X-Ray service. The AWS X-Ray Exporter uses the PutTraceSegments API and signs requests by using the AWS SDK for Go and the default credential provider chain. 
Important The AWS X-Ray Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS X-Ray Exporter # ... config: exporters: awsxray: region: "<region>" 1 endpoint: <endpoint> 2 resource_arn: "<aws_resource_arn>" 3 role_arn: "<iam_role>" 4 indexed_attributes: [ "<indexed_attr_0>", "<indexed_attr_1>" ] 5 aws_log_groups: ["<group1>", "<group2>"] 6 request_timeout_seconds: 120 7 # ... 1 The destination region for the X-Ray segments sent to the AWS X-Ray service. For example, eu-west-1 . 2 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 3 The Amazon Resource Name (ARN) of the AWS resource that is running the Collector. 4 The AWS Identity and Access Management (IAM) role for uploading the X-Ray segments to a different account. 5 The list of attribute names to be converted to X-Ray annotations. 6 The list of log group names for Amazon CloudWatch Logs. 7 Time duration in seconds before timing out a request. If omitted, the default value is 30 . Additional resources What is AWS X-Ray? (AWS X-Ray Developer Guide) AWS SDK for Go API Reference (AWS Documentation) Specifying Credentials (AWS SDK for Go Developer Guide) IAM roles (AWS Identity and Access Management User Guide) 3.4.11. File Exporter The File Exporter writes telemetry data to files in persistent storage and supports file operations such as rotation, compression, and writing to multiple files. With this exporter, you can also use a resource attribute to control file naming. The only required setting is path , which specifies the destination path for telemetry files in the persistent-volume file system. Important The File Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled File Exporter # ... config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9 # ... 1 The file-system path where the data is to be written. There is no default. 2 File rotation is an optional feature of this exporter. By default, telemetry data is exported to a single file. Add the rotation setting to enable file rotation. 3 The max_megabytes setting is the maximum size a file is allowed to reach until it is rotated. The default is 100 . 
4 The max_days setting is for how many days a file is to be retained, counting from the timestamp in the file name. There is no default. 5 The max_backups setting is for retaining several older files. The default is 100 . 6 The localtime setting specifies the local-time format for the timestamp, which is appended to the file name in front of any extension, when the file is rotated. The default is the Coordinated Universal Time (UTC). 7 The format for encoding the telemetry data before writing it to a file. The default format is json . The proto format is also supported. 8 File compression is optional and not set by default. This setting defines the compression algorithm for the data that is exported to a file. Currently, only the zstd compression algorithm is supported. There is no default. 9 The time interval between flushes. A value without a unit is set in nanoseconds. This setting is ignored when file rotation is enabled through the rotation settings. 3.4.12. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.5. Connectors A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data. Currently, the following General Availability and Technology Preview connectors are available for the Red Hat build of OpenTelemetry: Count Connector Routing Connector Forward Connector Spanmetrics Connector 3.5.1. Count Connector The Count Connector counts trace spans, trace span events, metrics, metric data points, and log records in exporter pipelines. Important The Count Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following are the default metric names: trace.span.count trace.span.event.count metric.count metric.datapoint.count log.record.count You can also expose custom metric names. OpenTelemetry Collector custom resource (CR) with an enabled Count Connector # ... config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus] # ... 1 It is important to correctly configure the Count Connector as an exporter or receiver in the pipeline and to export the generated metrics to the correct exporter. 2 The Count Connector is configured to receive spans as an exporter. 3 The Count Connector is configured to emit generated metrics as a receiver. Tip If the Count Connector is not generating the expected metrics, you can check whether the OpenTelemetry Collector is receiving the expected spans, metrics, and logs, and whether the telemetry data flows through the Count Connector as expected. You can also use the Debug Exporter to inspect the incoming telemetry data, as in the sketch that follows.
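The following is a minimal sketch of such a debugging setup, assuming the same otlp receiver, count connector, and prometheus exporter as in the preceding example; the debug exporter is an addition for troubleshooting only and receives the same spans as the Count Connector so that they are printed in the Collector logs:
# ...
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    exporters:
      # Prints the received telemetry data to the standard output.
      debug: {}
      prometheus:
        endpoint: 0.0.0.0:8889
    connectors:
      count: {}
    service:
      pipelines:
        traces/in:
          receivers: [otlp]
          # The incoming spans go to both the Count Connector and the Debug Exporter.
          exporters: [count, debug]
        metrics/out:
          receivers: [count]
          exporters: [prometheus]
# ...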
The Count Connector can count telemetry data according to defined conditions and expose those data as metrics when configured by using such fields as spans , spanevents , metrics , datapoints , or logs . See the example. Example OpenTelemetry Collector CR for the Count Connector to count spans by conditions # ... config: connectors: count: spans: 1 <custom_metric_name>: 2 description: "<custom_metric_description>" conditions: - 'attributes["env"] == "dev"' - 'name == "devevent"' # ... 1 In this example, the exposed metric counts spans with the specified conditions. 2 You can specify a custom metric name such as cluster.prod.event.count . Tip Write conditions correctly and follow the required syntax for attribute matching or telemetry field conditions. Improperly defined conditions are the most likely sources of errors. The Count Connector can count telemetry data according to defined attributes when configured by using such fields as spans , spanevents , metrics , datapoints , or logs . See the example. The attribute keys are injected into the telemetry data. You must define a value for the default_value field for missing attributes. Example OpenTelemetry Collector CR for the Count Connector to count logs by attributes # ... config: connectors: count: logs: 1 <custom_metric_name>: 2 description: "<custom_metric_description>" attributes: - key: env default_value: unknown 3 # ... 1 Specifies attributes for logs. 2 You can specify a custom metric name such as my.log.count . 3 Defines a default value when the attribute is not set. 3.5.2. Routing Connector The Routing Connector routes logs, metrics, and traces to specified pipelines according to resource attributes and their routing conditions, which are written as OpenTelemetry Transformation Language (OTTL) statements. Important The Routing Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Routing Connector # ... config: connectors: routing: table: 1 - statement: route() where attributes["X-Tenant"] == "dev" 2 pipelines: [traces/dev] 3 - statement: route() where attributes["X-Tenant"] == "prod" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod] # ... 1 Connector routing table. 2 Routing conditions written as OTTL statements. 3 Destination pipelines for routing the matching telemetry data. 4 Destination pipelines for routing the telemetry data for which no routing condition is satisfied. 5 Error-handling mode: The propagate value is for logging an error and dropping the payload. The ignore value is for ignoring the condition and attempting to match with the next one. The silent value is the same as ignore but without logging the error. The default is propagate . 6 When set to true , the payload is routed only to the first pipeline whose routing condition is met. The default is false .
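The preceding example routes traces by the X-Tenant resource attribute. A similar configuration can route logs; the following is a minimal sketch in which the k8s.namespace.name attribute and the otlp/dev and otlp/prod exporter names are illustrative assumptions rather than values defined by this documentation:
# ...
  config:
    connectors:
      routing:
        table:
        # Logs whose resource has k8s.namespace.name set to "dev" go to the logs/dev pipeline.
        - statement: route() where attributes["k8s.namespace.name"] == "dev"
          pipelines: [logs/dev]
        # Logs whose resource has k8s.namespace.name set to "prod" go to the logs/prod pipeline.
        - statement: route() where attributes["k8s.namespace.name"] == "prod"
          pipelines: [logs/prod]
        default_pipelines: [logs/dev]
    service:
      pipelines:
        logs/in:
          receivers: [otlp]
          exporters: [routing]
        logs/dev:
          receivers: [routing]
          exporters: [otlp/dev]
        logs/prod:
          receivers: [routing]
          exporters: [otlp/prod]
# ...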
3.5.3. Forward Connector The Forward Connector merges two pipelines of the same type. Important The Forward Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Forward Connector # ... config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp] # ... 3.5.4. Spanmetrics Connector The Spanmetrics Connector aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data. OpenTelemetry Collector custom resource with an enabled Spanmetrics Connector # ... config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics] # ... 1 Defines the flush interval of the generated metrics. Defaults to 15s . 3.5.5. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.6. Extensions Extensions add capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically. Currently, the following General Availability and Technology Preview extensions are available for the Red Hat build of OpenTelemetry: BearerTokenAuth Extension OAuth2Client Extension File Storage Extension OIDC Auth Extension Jaeger Remote Sampling Extension Performance Profiler Extension Health Check Extension zPages Extension 3.6.1. BearerTokenAuth Extension The BearerTokenAuth Extension is an authenticator for receivers and exporters that are based on the HTTP and the gRPC protocol. You can use the OpenTelemetry Collector custom resource to configure client authentication and server authentication for the BearerTokenAuth Extension on the receiver and exporter side. This extension supports traces, metrics, and logs. OpenTelemetry Collector custom resource with client and server authentication configured for the BearerTokenAuth Extension # ... config: extensions: bearertokenauth: scheme: "Bearer" 1 token: "<token>" 2 filename: "<token_file>" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 You can configure the BearerTokenAuth Extension to send a custom scheme . The default is Bearer . 2 You can add the BearerTokenAuth Extension token as metadata to identify a message. 3 Path to a file that contains an authorization token that is transmitted with every message. 4 You can assign the authenticator configuration to an OTLP Receiver. 5 You can assign the authenticator configuration to an OTLP Exporter. 3.6.2. 
OAuth2Client Extension The OAuth2Client Extension is an authenticator for exporters that are based on the HTTP and the gRPC protocol. Client authentication for the OAuth2Client Extension is configured in a separate section in the OpenTelemetry Collector custom resource. This extension supports traces, metrics, and logs. Important The OAuth2Client Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with client authentication configured for the OAuth2Client Extension # ... config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: ["api.metrics"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 Client identifier, which is provided by the identity provider. 2 Confidential key used to authenticate the client to the identity provider. 3 Further metadata, in the key-value pair format, which is transferred during authentication. For example, audience specifies the intended audience for the access token, indicating the recipient of the token. 4 The URL of the OAuth2 token endpoint, where the Collector requests access tokens. 5 The scopes define the specific permissions or access levels requested by the client. 6 The Transport Layer Security (TLS) settings for the token client, which is used to establish a secure connection when requesting tokens. 7 When set to true , configures the Collector to use an insecure or non-verified TLS connection to call the configured token endpoint. 8 The path to a Certificate Authority (CA) file that is used to verify the server's certificate during the TLS handshake. 9 The path to the client certificate file that the client must use to authenticate itself to the OAuth2 server if required. 10 The path to the client's private key file that is used with the client certificate if needed for authentication. 11 Sets a timeout for the token client's request. 12 You can assign the authenticator configuration to an OTLP exporter. 3.6.3. File Storage Extension The File Storage Extension supports traces, metrics, and logs. This extension can persist the state to the local file system. This extension persists the sending queue for the OpenTelemetry Protocol (OTLP) exporters that are based on the HTTP and the gRPC protocols. This extension requires the read and write access to a directory. This extension can use a default directory, but the default directory must already exist. Important The File Storage Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. 
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with a configured File Storage Extension that persists an OTLP sending queue # ... config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 Specifies the directory in which the telemetry data is stored. 2 Specifies the timeout time interval for opening the stored files. 3 Starts compaction when the Collector starts. If omitted, the default is false . 4 Specifies the directory in which the compactor stores the telemetry data. 5 Defines the maximum size of the compaction transaction. To ignore the transaction size, set to zero. If omitted, the default is 65536 bytes. 6 When set, forces the database to perform an fsync call after each write operation. This helps to ensure database integrity if there is an interruption to the database process, but at the cost of performance. 7 Buffers the OTLP Exporter data on the local file system. 8 Starts the File Storage Extension by the Collector. 3.6.4. OIDC Auth Extension The OIDC Auth Extension authenticates incoming requests to receivers by using the OpenID Connect (OIDC) protocol. It validates the ID token in the authorization header against the issuer and updates the authentication context of the incoming request. Important The OIDC Auth Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured OIDC Auth Extension # ... config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The name of the header that contains the ID token. The default name is authorization . 2 The base URL of the OIDC provider. 3 Optional: The path to the issuer's CA certificate. 4 The audience for the token. 5 The name of the claim that contains the username. The default name is sub . 3.6.5. Jaeger Remote Sampling Extension The Jaeger Remote Sampling Extension enables serving sampling strategies after Jaeger's remote sampling API. You can configure this extension to proxy requests to a backing remote sampling server such as a Jaeger collector down the pipeline or to a static JSON file from the local file system. 
Important The Jaeger Remote Sampling Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with a configured Jaeger Remote Sampling Extension # ... config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The time interval at which the sampling configuration is updated. 2 The endpoint for reaching the Jaeger remote sampling strategy provider. 3 The path to a local file that contains a sampling strategy configuration in the JSON format. Example of a Jaeger Remote Sampling strategy file { "service_strategies": [ { "service": "foo", "type": "probabilistic", "param": 0.8, "operation_strategies": [ { "operation": "op1", "type": "probabilistic", "param": 0.2 }, { "operation": "op2", "type": "probabilistic", "param": 0.4 } ] }, { "service": "bar", "type": "ratelimiting", "param": 5 } ], "default_strategy": { "type": "probabilistic", "param": 0.5, "operation_strategies": [ { "operation": "/health", "type": "probabilistic", "param": 0.0 }, { "operation": "/metrics", "type": "probabilistic", "param": 0.0 } ] } } 3.6.6. Performance Profiler Extension The Performance Profiler Extension enables the Go net/http/pprof endpoint. Developers use this extension to collect performance profiles and investigate issues with the service. Important The Performance Profiler Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured Performance Profiler Extension # ... config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The endpoint at which this extension listens. Use localhost:<port> to make it available only locally, or ":<port>" to make it available on all network interfaces. The default value is localhost:1777 . 2 Sets a fraction of blocking events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0 . 3 Sets a fraction of mutex contention events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0 .
4 The name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts. Profiling is saved to the file when the Collector is terminated. 3.6.7. Health Check Extension The Health Check Extension provides an HTTP URL for checking the status of the OpenTelemetry Collector. You can use this extension as a liveness and readiness probe on OpenShift. Important The Health Check Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured Health Check Extension # ... config: extensions: health_check: endpoint: "0.0.0.0:13133" 1 tls: 2 ca_file: "/path/to/ca.crt" cert_file: "/path/to/cert.crt" key_file: "/path/to/key.key" path: "/health/status" 3 check_collector_pipeline: 4 enabled: true 5 interval: "5m" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The target IP address for publishing the health check status. The default is 0.0.0.0:13133 . 2 The TLS server-side configuration. Defines paths to TLS certificates. If omitted, the TLS is disabled. 3 The path for the health check server. The default is / . 4 Settings for the Collector pipeline health check. 5 Enables the Collector pipeline health check. The default is false . 6 The time interval for checking the number of failures. The default is 5m . 7 The threshold of multiple failures until which a container is still marked as healthy. The default is 5 . 3.6.8. zPages Extension The zPages Extension provides an HTTP endpoint that serves live data for debugging instrumented components in real time. You can use this extension for in-process diagnostics and insights into traces and metrics without relying on an external backend. With this extension, you can monitor and troubleshoot the behavior of the OpenTelemetry Collector and related components by watching the diagnostic information at the provided endpoint. Important The zPages Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured zPages Extension # ... config: extensions: zpages: endpoint: "localhost:55679" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 Specifies the HTTP endpoint for serving the zPages extension. The default is localhost:55679 . 
Important Accessing the HTTP endpoint requires port-forwarding because the Red Hat build of OpenTelemetry Operator does not expose this route. You can enable port-forwarding by running the following oc command: USD oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679 The Collector provides the following zPages for diagnostics: ServiceZ Shows an overview of the Collector services and links to the following zPages: PipelineZ , ExtensionZ , and FeatureZ . This page also displays information about the build version and runtime. An example of this page's URL is http://localhost:55679/debug/servicez . PipelineZ Shows detailed information about the active pipelines in the Collector. This page displays the pipeline type, whether data are modified, and the associated receivers, processors, and exporters for each pipeline. An example of this page's URL is http://localhost:55679/debug/pipelinez . ExtensionZ Shows the currently active extensions in the Collector. An example of this page's URL is http://localhost:55679/debug/extensionz . FeatureZ Shows the feature gates enabled in the Collector along with their status and description. An example of this page's URL is http://localhost:55679/debug/featurez . TraceZ Shows spans categorized by latency. Available time ranges include 0 μs, 10 μs, 100 μs, 1 ms, 10 ms, 100 ms, 1 s, 10 s, 1 m. This page also allows for quick inspection of error samples. An example of this page's URL is http://localhost:55679/debug/tracez . 3.6.9. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.7. Target Allocator The Target Allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances. The Target Allocator integrates with the Prometheus PodMonitor and ServiceMonitor custom resources (CRs). When the Target Allocator is enabled, the OpenTelemetry Operator adds the http_sd_config field to the enabled prometheus receiver that connects to the Target Allocator service. Important The Target Allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example OpenTelemetryCollector CR with the enabled Target Allocator apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug] # ... 1 When the Target Allocator is enabled, the deployment mode must be set to statefulset . 2 Enables the Target Allocator. Defaults to false . 3 The service account name of the Target Allocator deployment.
The service account needs to have RBAC to get the ServiceMonitor , PodMonitor custom resources, and other objects from the cluster to properly set labels on scraped metrics. The default service account name is <collector_name>-targetallocator . 4 Enables integration with the Prometheus PodMonitor and ServiceMonitor custom resources. 5 Label selector for the Prometheus ServiceMonitor custom resources. When left empty, enables all service monitors. 6 Label selector for the Prometheus PodMonitor custom resources. When left empty, enables all pod monitors. 7 Prometheus receiver with the minimal, empty scrape_configs: [] configuration option. The Target Allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration. RBAC configuration for the Target Allocator service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [""] resources: - services - pods - namespaces verbs: ["get", "list", "watch"] - apiGroups: ["monitoring.coreos.com"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: ["get", "list", "watch"] - apiGroups: ["discovery.k8s.io"] resources: - endpointslices verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2 # ... 1 The name of the Target Allocator service account. 2 The namespace of the Target Allocator service account. | [
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus]",
"receivers:",
"processors:",
"exporters:",
"connectors:",
"extensions:",
"service: pipelines:",
"service: pipelines: traces: receivers:",
"service: pipelines: traces: processors:",
"service: pipelines: traces: exporters:",
"service: pipelines: metrics: receivers:",
"service: pipelines: metrics: processors:",
"service: pipelines: metrics: exporters:",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp]",
"config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - \"\" resources: - events - pods verbs: - get - list - watch - apiGroups: - \"events.k8s.io\" resources: - events verbs: - watch - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug]",
"config: receivers: kubeletstats: collection_interval: 20s auth_type: \"serviceAccount\" endpoint: \"https://USD{env:K8S_NODE_NAME}:10250\" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [\"\"] resources: [\"nodes/proxy\"] 1 verbs: [\"get\"]",
"config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus]",
"config: otlpjsonfile: include: - \"/var/log/*.log\" 1 exclude: - \"/var/log/test.log\" 2",
"config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin]",
"config: receivers: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka]",
"config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug]",
"apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default",
"config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus]",
"config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev",
"apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/enforce: \"privileged\" pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events]",
"config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection]",
"config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false",
"config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int",
"config: processors: attributes: - key: cloud.availability_zone value: \"zone-1\" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete",
"config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2",
"config: processors: span/to_attributes: name: to_attributes: rules: - ^\\/api\\/v1\\/document\\/(?P<documentId>.*)\\/updateUSD 1",
"config: processors: span/set_status: status: code: Error description: \"<error_description>\"",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list']",
"config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME",
"config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes[\"container.name\"] == \"app_container_1\"' 2 - 'resource.attributes[\"host.name\"] == \"localhost\"' 3",
"config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250",
"config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - \"<regular_expression_for_metric_names>\"",
"config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2>",
"config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string>",
"config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"]) 2 - replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes[\"http.path\"] == \"/health\" - set(name, attributes[\"http.route\"]) - replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\") - limit(attributes, 100, []) - truncate_all(attributes, 4096)",
"config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: \"dev\" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp]",
"config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: \"dev\" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp]",
"config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug]",
"config: exporters: loadbalancing: routing_key: \"service\" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317",
"config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus]",
"config: exporters: prometheusremotewrite: endpoint: \"https://my-prometheus:7900/api/v1/push\" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite]",
"config: exporters: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka]",
"config: exporters: awscloudwatchlogs: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5",
"config: exporters: awsemf: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7",
"config: exporters: awsxray: region: \"<region>\" 1 endpoint: <endpoint> 2 resource_arn: \"<aws_resource_arn>\" 3 role_arn: \"<iam_role>\" 4 indexed_attributes: [ \"<indexed_attr_0>\", \"<indexed_attr_1>\" ] 5 aws_log_groups: [\"<group1>\", \"<group2>\"] 6 request_timeout_seconds: 120 7",
"config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus]",
"config: connectors: count: spans: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" conditions: - 'attributes[\"env\"] == \"dev\"' - 'name == \"devevent\"'",
"config: connectors: count: logs: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" attributes: - key: env default_value: unknown 3",
"config: connectors: routing: table: 1 - statement: route() where attributes[\"X-Tenant\"] == \"dev\" 2 pipelines: [traces/dev] 3 - statement: route() where attributes[\"X-Tenant\"] == \"prod\" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod]",
"config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp]",
"config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics]",
"config: extensions: bearertokenauth: scheme: \"Bearer\" 1 token: \"<token>\" 2 filename: \"<token_file>\" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: [\"api.metrics\"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug]",
"{ \"service_strategies\": [ { \"service\": \"foo\", \"type\": \"probabilistic\", \"param\": 0.8, \"operation_strategies\": [ { \"operation\": \"op1\", \"type\": \"probabilistic\", \"param\": 0.2 }, { \"operation\": \"op2\", \"type\": \"probabilistic\", \"param\": 0.4 } ] }, { \"service\": \"bar\", \"type\": \"ratelimiting\", \"param\": 5 } ], \"default_strategy\": { \"type\": \"probabilistic\", \"param\": 0.5, \"operation_strategies\": [ { \"operation\": \"/health\", \"type\": \"probabilistic\", \"param\": 0.0 }, { \"operation\": \"/metrics\", \"type\": \"probabilistic\", \"param\": 0.0 } ] } }",
"config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: health_check: endpoint: \"0.0.0.0:13133\" 1 tls: 2 ca_file: \"/path/to/ca.crt\" cert_file: \"/path/to/cert.crt\" key_file: \"/path/to/key.key\" path: \"/health/status\" 3 check_collector_pipeline: 4 enabled: true 5 interval: \"5m\" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: zpages: endpoint: \"localhost:55679\" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug]",
"oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [\"\"] resources: - services - pods - namespaces verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"monitoring.coreos.com\"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"discovery.k8s.io\"] resources: - endpointslices verbs: [\"get\", \"list\", \"watch\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/configuring-the-collector |
Chapter 22. Clusters at the network far edge | Chapter 22. Clusters at the network far edge 22.1. Challenges of the network far edge Edge computing presents complex challenges when managing many sites in geographically displaced locations. Use GitOps Zero Touch Provisioning (ZTP) to provision and manage sites at the far edge of the network. 22.1.1. Overcoming the challenges of the network far edge Today, service providers want to deploy their infrastructure at the edge of the network. This presents significant challenges: How do you handle deployments of many edge sites in parallel? What happens when you need to deploy sites in disconnected environments? How do you manage the lifecycle of large fleets of clusters? GitOps Zero Touch Provisioning (ZTP) and GitOps meets these challenges by allowing you to provision remote edge sites at scale with declarative site definitions and configurations for bare-metal equipment. Template or overlay configurations install OpenShift Container Platform features that are required for CNF workloads. The full lifecycle of installation and upgrades is handled through the GitOps ZTP pipeline. GitOps ZTP uses GitOps for infrastructure deployments. With GitOps, you use declarative YAML files and other defined patterns stored in Git repositories. Red Hat Advanced Cluster Management (RHACM) uses your Git repositories to drive the deployment of your infrastructure. GitOps provides traceability, role-based access control (RBAC), and a single source of truth for the desired state of each site. Scalability issues are addressed by Git methodologies and event driven operations through webhooks. You start the GitOps ZTP workflow by creating declarative site definition and configuration custom resources (CRs) that the GitOps ZTP pipeline delivers to the edge nodes. The following diagram shows how GitOps ZTP works within the far edge framework. 22.1.2. Using GitOps ZTP to provision clusters at the network far edge Red Hat Advanced Cluster Management (RHACM) manages clusters in a hub-and-spoke architecture, where a single hub cluster manages many spoke clusters. Hub clusters running RHACM provision and deploy the managed clusters by using GitOps Zero Touch Provisioning (ZTP) and the assisted service that is deployed when you install RHACM. The assisted service handles provisioning of OpenShift Container Platform on single node clusters, three-node clusters, or standard clusters running on bare metal. A high-level overview of using GitOps ZTP to provision and maintain bare-metal hosts with OpenShift Container Platform is as follows: A hub cluster running RHACM manages an OpenShift image registry that mirrors the OpenShift Container Platform release images. RHACM uses the OpenShift image registry to provision the managed clusters. You manage the bare-metal hosts in a YAML format inventory file, versioned in a Git repository. You make the hosts ready for provisioning as managed clusters, and use RHACM and the assisted service to install the bare-metal hosts on site. Installing and deploying the clusters is a two-stage process, involving an initial installation phase, and a subsequent configuration and deployment phase. The following diagram illustrates this workflow: 22.1.3. Installing managed clusters with SiteConfig resources and RHACM GitOps Zero Touch Provisioning (ZTP) uses SiteConfig custom resources (CRs) in a Git repository to manage the processes that install OpenShift Container Platform clusters. 
The SiteConfig CR contains cluster-specific parameters required for installation. It has options for applying select configuration CRs during installation including user defined extra manifests. The GitOps ZTP plugin processes SiteConfig CRs to generate a collection of CRs on the hub cluster. This triggers the assisted service in Red Hat Advanced Cluster Management (RHACM) to install OpenShift Container Platform on the bare-metal host. You can find installation status and error messages in these CRs on the hub cluster. You can provision single clusters manually or in batches with GitOps ZTP: Provisioning a single cluster Create a single SiteConfig CR and related installation and configuration CRs for the cluster, and apply them in the hub cluster to begin cluster provisioning. This is a good way to test your CRs before deploying on a larger scale. Provisioning many clusters Install managed clusters in batches of up to 400 by defining SiteConfig and related CRs in a Git repository. ArgoCD uses the SiteConfig CRs to deploy the sites. The RHACM policy generator creates the manifests and applies them to the hub cluster. This starts the cluster provisioning process. 22.1.4. Configuring managed clusters with policies and PolicyGenTemplate resources GitOps Zero Touch Provisioning (ZTP) uses Red Hat Advanced Cluster Management (RHACM) to configure clusters by using a policy-based governance approach to applying the configuration. The policy generator or PolicyGen is a plugin for the GitOps Operator that enables the creation of RHACM policies from a concise template. The tool can combine multiple CRs into a single policy, and you can generate multiple policies that apply to various subsets of clusters in your fleet. Note For scalability and to reduce the complexity of managing configurations across the fleet of clusters, use configuration CRs with as much commonality as possible. Where possible, apply configuration CRs using a fleet-wide common policy. The preference is to create logical groupings of clusters to manage as much of the remaining configurations as possible under a group policy. When a configuration is unique to an individual site, use RHACM templating on the hub cluster to inject the site-specific data into a common or group policy. Alternatively, apply an individual site policy for the site. The following diagram shows how the policy generator interacts with GitOps and RHACM in the configuration phase of cluster deployment. For large fleets of clusters, it is typical for there to be a high-level of consistency in the configuration of those clusters. The following recommended structuring of policies combines configuration CRs to meet several goals: Describe common configurations once and apply to the fleet. Minimize the number of maintained and managed policies. Support flexibility in common configurations for cluster variants. Table 22.1. Recommended PolicyGenTemplate policy categories Policy category Description Common A policy that exists in the common category is applied to all clusters in the fleet. Use common PolicyGenTemplate CRs to apply common installation settings across all cluster types. Groups A policy that exists in the groups category is applied to a group of clusters in the fleet. Use group PolicyGenTemplate CRs to manage specific aspects of single-node, three-node, and standard cluster installations. Cluster groups can also follow geographic region, hardware variant, etc. Sites A policy that exists in the sites category is applied to a specific cluster site. 
Any cluster can have its own specific policies maintained. Additional resources For more information about extracting the reference SiteConfig and PolicyGenTemplate CRs from the ztp-site-generate container image, see Preparing the ZTP Git repository . 22.2. Preparing the hub cluster for ZTP To use RHACM in a disconnected environment, create a mirror registry that mirrors the OpenShift Container Platform release images and Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the RHCOS ISO and RootFS disk images that are used to provision the bare-metal hosts. 22.2.1. Telco RAN DU 4.14 validated software components The Red Hat telco RAN DU 4.14 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters and hub clusters. Table 22.2. Telco RAN DU managed cluster validated software components Component Software version Managed cluster version 4.14 Cluster Logging Operator 5.7 Local Storage Operator 4.14 PTP Operator 4.14 SRIOV Operator 4.14 Node Tuning Operator 4.14 Logging Operator 4.14 SRIOV-FEC Operator 2.7 Table 22.3. Hub cluster validated software components Component Software version Hub cluster version 4.14 GitOps ZTP plugin 4.14 Red Hat Advanced Cluster Management (RHACM) 2.9, 2.10 Red Hat OpenShift GitOps 1.9, 1.10 Topology Aware Lifecycle Manager (TALM) 4.14 22.2.2. Recommended hub cluster specifications and managed cluster limits for GitOps ZTP With GitOps Zero Touch Provisioning (ZTP), you can manage thousands of clusters in geographically dispersed regions and networks. The Red Hat Performance and Scale lab successfully created and managed 3500 virtual single-node OpenShift clusters with a reduced DU profile from a single Red Hat Advanced Cluster Management (RHACM) hub cluster in a lab environment. In real-world situations, the scaling limits for the number of clusters that you can manage will vary depending on various factors affecting the hub cluster. For example: Hub cluster resources Available hub cluster host resources (CPU, memory, storage) are an important factor in determining how many clusters the hub cluster can manage. The more resources allocated to the hub cluster, the more managed clusters it can accommodate. Hub cluster storage The hub cluster host storage IOPS rating and whether the hub cluster hosts use NVMe storage can affect hub cluster performance and the number of clusters it can manage. Network bandwidth and latency Slow or high-latency network connections between the hub cluster and managed clusters can impact how the hub cluster manages multiple clusters. Managed cluster size and complexity The size and complexity of the managed clusters also affects the capacity of the hub cluster. Larger managed clusters with more nodes, namespaces, and resources require additional processing and management resources. Similarly, clusters with complex configurations such as the RAN DU profile or diverse workloads can require more resources from the hub cluster. Number of managed policies The number of policies managed by the hub cluster scaled over the number of managed clusters bound to those policies is an important factor that determines how many clusters can be managed. Monitoring and management workloads RHACM continuously monitors and manages the managed clusters. 
The number and complexity of monitoring and management workloads running on the hub cluster can affect its capacity. Intensive monitoring or frequent reconciliation operations can require additional resources, potentially limiting the number of manageable clusters. RHACM version and configuration Different versions of RHACM can have varying performance characteristics and resource requirements. Additionally, the configuration settings of RHACM, such as the number of concurrent reconciliations or the frequency of health checks, can affect the managed cluster capacity of the hub cluster. Use the following representative configuration and network specifications to develop your own Hub cluster and network specifications. Important The following guidelines are based on internal lab benchmark testing only and do not represent complete bare-metal host specifications. Table 22.4. Representative three-node hub cluster machine specifications Requirement Description Server hardware 3 x Dell PowerEdge R650 rack servers NVMe hard disks 50 GB disk for /var/lib/etcd 2.9 TB disk for /var/lib/containers SSD hard disks 1 SSD split into 15 200GB thin-provisioned logical volumes provisioned as PV CRs 1 SSD serving as an extra large PV resource Number of applied DU profile policies 5 Important The following network specifications are representative of a typical real-world RAN network and were applied to the scale lab environment during testing. Table 22.5. Simulated lab environment network specifications Specification Description Round-trip time (RTT) latency 50 ms Packet loss 0.02% packet loss Network bandwidth limit 20 Mbps Additional resources Creating and managing single-node OpenShift clusters with RHACM 22.2.3. Installing GitOps ZTP in a disconnected environment Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have configured a disconnected mirror registry for use in the cluster. Note The disconnected mirror registry that you create must contain a version of TALM backup and pre-cache images that matches the version of TALM running in the hub cluster. The spoke cluster must be able to resolve these images in the disconnected mirror registry. Procedure Install RHACM in the hub cluster. See Installing RHACM in a disconnected environment . Install GitOps and TALM in the hub cluster. Additional resources Installing OpenShift GitOps Installing TALM Mirroring an Operator catalog 22.2.4. Adding RHCOS ISO and RootFS images to the disconnected mirror host Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host Red Hat Enterprise Linux CoreOS (RHCOS) images for it to use. Use a disconnected mirror to host the RHCOS images. Prerequisites Deploy and configure an HTTP server to host the RHCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. 
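If the mirror host does not already run a web server, one minimal way to satisfy the HTTP server prerequisite above is to install and start Apache httpd and open the HTTP port in the firewall. This is a sketch only; it assumes a RHEL-based mirror host and that serving files from the /var/www/html document root used by the download commands in the following procedure is acceptable: USD sudo dnf install -y httpd USD sudo systemctl enable --now httpd USD sudo firewall-cmd --permanent --add-service=http USD sudo firewall-cmd --reload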
You require ISO and RootFS images to install RHCOS on the hosts. RHCOS QCOW2 images are not supported for this installation type. Procedure Log in to the mirror host. Obtain the RHCOS ISO and RootFS images from mirror.openshift.com , for example: Export the required image names and OpenShift Container Platform version as environment variables: USD export ISO_IMAGE_NAME=<iso_image_name> 1 USD export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1 USD export OCP_VERSION=<ocp_version> 1 1 ISO image name, for example, rhcos-4.14.1-x86_64-live.x86_64.iso 1 RootFS image name, for example, rhcos-4.14.1-x86_64-live-rootfs.x86_64.img 1 OpenShift Container Platform version, for example, 4.14.1 Download the required images: USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME} USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME} Verification steps Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example: USD wget http://USD(hostname)/USD{ISO_IMAGE_NAME} Example output Saving to: rhcos-4.14.1-x86_64-live.x86_64.iso rhcos-4.14.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s Additional resources Creating a mirror registry Mirroring images for a disconnected installation 22.2.5. Enabling the assisted service Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OpenShift Container Platform clusters. The assisted service is deployed automatically when you enable the MultiClusterHub Operator on Red Hat Advanced Cluster Management (RHACM). After that, you need to configure the Provisioning resource to watch all namespaces and to update the AgentServiceConfig custom resource (CR) with references to the ISO and RootFS images that are hosted on the mirror registry HTTP server. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have RHACM with MultiClusterHub enabled. Procedure Enable the Provisioning resource to watch all namespaces and configure mirrors for disconnected environments. For more information, see Enabling the central infrastructure management service . Update the AgentServiceConfig CR by running the following command: USD oc edit AgentServiceConfig Add the following entry to the items.spec.osImages field in the CR: - cpuArchitecture: x86_64 openshiftVersion: "4.14" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso where: <host> Is the fully qualified domain name (FQDN) for the target mirror registry HTTP server. <path> Is the path to the image on the target mirror registry. Save and quit the editor to apply the changes. 22.2.6. Configuring the hub cluster to use a disconnected mirror registry You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment. Prerequisites You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.8 installed. You have hosted the rootfs and iso images on an HTTP server. See the Additional resources section for guidance about Mirroring the OpenShift Container Platform image repository . 
Warning If you enable TLS for the HTTP server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and managed clusters and the HTTP server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Procedure Create a ConfigMap containing the mirror registry config: apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: | 2 -----BEGIN CERTIFICATE----- <certificate_contents> -----END CERTIFICATE----- registries.conf: | 3 unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "quay.io/example-repository" 4 mirror-by-digest-only = true [[registry.mirror]] location = "mirror1.registry.corp.com:5000/example-repository" 5 1 The ConfigMap namespace must be set to multicluster-engine . 2 The mirror registry's certificate that is used when creating the mirror registry. 3 The configuration file for the mirror registry. The mirror registry configuration adds mirror information to the /etc/containers/registries.conf file in the discovery image. The mirror information is stored in the imageContentSources section of the install-config.yaml file when the information is passed to the installation program. The Assisted Service pod that runs on the hub cluster fetches the container images from the configured mirror registry. 4 The URL of the mirror registry. You must use the URL from the imageContentSources section by running the oc adm release mirror command when you configure the mirror registry. For more information, see the Mirroring the OpenShift Container Platform image repository section. 5 The registries defined in the registries.conf file must be scoped by repository, not by registry. In this example, both the quay.io/example-repository and the mirror1.registry.corp.com:5000/example-repository repositories are scoped by the example-repository repository. This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below: Example output apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> 3 url: <iso_url> 4 1 Set the AgentServiceConfig namespace to multicluster-engine to match the ConfigMap namespace. 2 Set mirrorRegistryRef.name to match the definition specified in the related ConfigMap CR. 3 Set the OpenShift Container Platform version to either the x.y or x.y.z format. 4 Set the URL for the ISO hosted on the httpd server. Important A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network. Additional resources Mirroring the OpenShift Container Platform image repository 22.2.7. Configuring the hub cluster to use unauthenticated registries You can configure the hub cluster to use unauthenticated registries. 
Unauthenticated registries do not require authentication to access and download images. Prerequisites You have installed and configured a hub cluster and installed Red Hat Advanced Cluster Management (RHACM) on the hub cluster. You have installed the OpenShift Container Platform CLI (oc). You have logged in as a user with cluster-admin privileges. You have configured an unauthenticated registry for use with the hub cluster. Procedure Update the AgentServiceConfig custom resource (CR) by running the following command: USD oc edit AgentServiceConfig agent Add the unauthenticatedRegistries field in the CR: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com ... Unauthenticated registries are listed under spec.unauthenticatedRegistries in the AgentServiceConfig resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. assisted-service validates the pull secret by making sure it contains the authentication information for every image registry used for installation. Note Mirror registries are automatically added to the ignore list and do not need to be added under spec.unauthenticatedRegistries . Specifying the PUBLIC_CONTAINER_REGISTRIES environment variable in the ConfigMap overrides the default values with the specified value. The PUBLIC_CONTAINER_REGISTRIES defaults are quay.io and registry.svc.ci.openshift.org . Verification Verify that you can access the newly added registry from the hub cluster by running the following commands: Open a debug shell prompt to the hub cluster: USD oc debug node/<node_name> Test access to the unauthenticated registry by running the following command: sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry> where: <unauthenticated_registry> Is the new registry, for example, unauthenticated-image-registry.openshift-image-registry.svc:5000 . Example output Login Succeeded! 22.2.8. Configuring the hub cluster with ArgoCD You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps Zero Touch Provisioning (ZTP). Note Red Hat Advanced Cluster Management (RHACM) uses SiteConfig CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 SiteConfig CRs. Prerequisites You have an OpenShift Container Platform hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed. You have extracted the reference deployment from the GitOps ZTP plugin container as described in the "Preparing the GitOps ZTP site configuration repository" section. Extracting the reference deployment creates the out/argocd/deployment directory referenced in the following procedure. Procedure Prepare the ArgoCD pipeline configuration: Create a Git repository with a directory structure similar to the example directory. For more information, see "Preparing the GitOps ZTP site configuration repository". Configure access to the repository using the ArgoCD UI. Under Settings configure the following: Repositories - Add the connection information and credentials. The URL must end in .git , for example, https://repo.example.com/repo.git . Certificates - Add the public certificate for the repository, if needed.
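If you prefer to script repository access instead of using the ArgoCD UI, the argocd CLI offers an equivalent command. This is a sketch only; it assumes that the argocd CLI is installed and logged in to the openshift-gitops instance, and the repository URL and credentials are placeholders for your own values: USD argocd repo add https://repo.example.com/repo.git --username <git_user> --password <git_token>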
Modify the two ArgoCD applications, out/argocd/deployment/clusters-app.yaml and out/argocd/deployment/policies-app.yaml , based on your Git repository: Update the URL to point to the Git repository. The URL ends with .git , for example, https://repo.example.com/repo.git . The targetRevision indicates which Git repository branch to monitor. path specifies the path to the SiteConfig and PolicyGenTemplate CRs, respectively. To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the out/argocd/deployment/ directory for your environment. Select the multicluster-operators-subscription image that matches your RHACM version. For RHACM 2.8 and 2.9, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v<rhacm_version> image. For RHACM 2.10 and later, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<rhacm_version> image. Important The version of the multicluster-operators-subscription image must match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for multicluster-operators-subscription images. Click [Expand for Operator list] in the "Platform Aligned Operators" table in OpenShift Operator Life Cycles to view the complete supported Operators matrix for OpenShift Container Platform. Add the following configuration to the out/argocd/deployment/argocd-openshift-gitops-patch.json file: { "args": [ "-c", "mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" 1 ], "command": [ "/bin/bash" ], "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10", 2 3 "name": "policy-generator-install", "imagePullPolicy": "Always", "volumeMounts": [ { "mountPath": "/.config", "name": "kustomize" } ] } 1 Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version. 2 Match the multicluster-operators-subscription image to the RHACM version. 3 In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment. Patch the ArgoCD instance. Run the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed pods that are responsible for this add-on. 
Run the following command: USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json Apply the pipeline configuration to your hub cluster by running the following command: USD oc apply -k out/argocd/deployment Optional: If you have existing ArgoCD applications, verify that the PrunePropagationPolicy=background policy is set in the Application resource by running the following command: USD oc -n openshift-gitops get applications.argoproj.io \ clusters -o jsonpath='{.spec.syncPolicy.syncOptions}' |jq Example output for an existing policy [ "CreateNamespace=true", "PrunePropagationPolicy=background", "RespectIgnoreDifferences=true" ] If the spec.syncPolicy.syncOption field does not contain a PrunePropagationPolicy parameter or PrunePropagationPolicy is set to the foreground value, set the policy to background in the Application resource. See the following example: kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background Setting the background deletion policy ensures that the ManagedCluster CR and all its associated resources are deleted. 22.2.9. Preparing the GitOps ZTP site configuration repository Before you can use the GitOps Zero Touch Provisioning (ZTP) pipeline, you need to prepare the Git repository to host the site configuration data. Prerequisites You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs). You have deployed the managed clusters using GitOps ZTP. Procedure Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate CRs. Note Keep SiteConfig and PolicyGenTemplate CRs in separate directories. Both the SiteConfig and PolicyGenTemplate directories must contain a kustomization.yaml file that explicitly includes the files in that directory. Export the argocd directory from the ztp-site-generate container image using the following commands: USD podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 USD mkdir -p ./out USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./out Check that the out directory contains the following subdirectories: out/extra-manifest contains the source CR files that SiteConfig uses to generate extra manifest configMap . out/source-crs contains the source CR files that PolicyGenTemplate uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. out/argocd/deployment contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure. out/argocd/example contains the examples for SiteConfig and PolicyGenTemplate files that represent the recommended configuration. Copy the out/source-crs folder and contents to the PolicyGenTemplate directory. The out/extra-manifests directory contains the reference manifests for a RAN DU cluster. Copy the out/extra-manifests directory into the SiteConfig folder. This directory should contain CRs from the ztp-site-generate container only. Do not add user-provided CRs here. If you want to work with user-provided CRs, you must create another directory for that content. For example: example/ ├── policygentemplates │ ├── kustomization.yaml │ └── source-crs/ └── siteconfig ├── extra-manifests └── kustomization.yaml Commit the directory structure and the kustomization.yaml files and push to your Git repository.
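The following is a minimal sketch of the commit and push, assuming you are working in a local clone of your Git repository and pushing to a branch named main:
$ git add .
$ git commit -m "Add GitOps ZTP site configuration directory structure"
$ git push origin main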
The initial push to Git should include the kustomization.yaml files. You can use the directory structure under out/argocd/example as a reference for the structure and content of your Git repository. That structure includes SiteConfig and PolicyGenTemplate reference CRs for single-node, three-node, and standard clusters. Remove references to cluster types that you are not using. For all cluster types, you must: Add the source-crs subdirectory to the policygentemplate directory. Add the extra-manifests directory to the siteconfig directory. The following example describes a set of CRs for a network of single-node clusters: example/ ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ ├── source-crs/ │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── extra-manifests/ 1 ├── custom-manifests/ 2 ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml 1 Contains reference manifests from the ztp-container . 2 Contains custom manifests. 22.2.9.1. Preparing the GitOps ZTP site configuration repository for version independence You can use GitOps ZTP to manage source custom resources (CRs) for managed clusters that are running different versions of OpenShift Container Platform. This means that the version of OpenShift Container Platform running on the hub cluster can be independent of the version running on the managed clusters. Procedure Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate CRs. Within the PolicyGenTemplate directory, create a directory for each OpenShift Container Platform version you want to make available. For each version, create the following resources: kustomization.yaml file that explicitly includes the files in that directory source-crs directory to contain reference CR configuration files from the ztp-site-generate container If you want to work with user-provided CRs, you must create a separate directory for them. In the /siteconfig directory, create a subdirectory for each OpenShift Container Platform version you want to make available. For each version, create at least one directory for reference CRs to be copied from the container. There is no restriction on the naming of directories or on the number of reference directories. If you want to work with custom manifests, you must create a separate directory for them. The following example describes a structure using user-provided manifests and CRs for different versions of OpenShift Container Platform: ├── policygentemplates │ ├── kustomization.yaml 1 │ ├── version_4.13 2 │ │ ├── common-ranGen.yaml │ │ ├── group-du-sno-ranGen.yaml │ │ ├── group-du-sno-validator-ranGen.yaml │ │ ├── helix56-v413.yaml │ │ ├── kustomization.yaml 3 │ │ ├── ns.yaml │ │ └── source-crs/ 4 │ │ └── reference-crs/ 5 │ │ └── custom-crs/ 6 │ └── version_4.14 7 │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── helix56-v414.yaml │ ├── kustomization.yaml 8 │ ├── ns.yaml │ └── source-crs/ 9 │ └── reference-crs/ 10 │ └── custom-crs/ 11 └── siteconfig ├── kustomization.yaml ├── version_4.13 │ ├── helix56-v413.yaml │ ├── kustomization.yaml │ ├── extra-manifest/ 12 │ └── custom-manifest/ 13 └── version_4.14 ├── helix57-v414.yaml ├── kustomization.yaml ├── extra-manifest/ 14 └── custom-manifest/ 15 1 Create a top-level kustomization YAML file. 2 7 Create the version-specific directories within the custom /policygentemplates directory. 
3 8 Create a kustomization.yaml file for each version. 4 9 Create a source-crs directory for each version to contain reference CRs from the ztp-site-generate container. 5 10 Create the reference-crs directory for policy CRs that are extracted from the ZTP container. 6 11 Optional: Create a custom-crs directory for user-provided CRs. 12 14 Create a directory within the custom /siteconfig directory to contain extra manifests from the ztp-site-generate container. 13 15 Create a folder to hold user-provided manifests. Note In the example, each version subdirectory in the custom /siteconfig directory contains two further subdirectories, one containing the reference manifests copied from the container, the other for custom manifests that you provide. The names assigned to those directories are examples. If you use user-provided CRs, the last directory listed under extraManifests.searchPaths in the SiteConfig CR must be the directory containing user-provided CRs. Edit the SiteConfig CR to include the search paths of any directories you have created. The first directory that is listed under extraManifests.searchPaths must be the directory containing the reference manifests. Consider the order in which the directories are listed. In cases where directories contain files with the same name, the file in the final directory takes precedence. Example SiteConfig CR extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2 1 The directory containing the reference manifests must be listed first under extraManifests.searchPaths . 2 If you are using user-provided CRs, the last directory listed under extraManifests.searchPaths in the SiteConfig CR must be the directory containing those user-provided CRs. Edit the top-level kustomization.yaml file to control which OpenShift Container Platform versions are active. The following is an example of a kustomization.yaml file at the top level: resources: - version_4.13 1 #- version_4.14 2 1 Activate version 4.13. 2 Use comments to deactivate a version. 22.3. Updating GitOps ZTP You can update the GitOps Zero Touch Provisioning (ZTP) infrastructure independently from the hub cluster, Red Hat Advanced Cluster Management (RHACM), and the managed OpenShift Container Platform clusters. Note You can update the Red Hat OpenShift GitOps Operator when new versions become available. When updating the GitOps ZTP plugin, review the updated files in the reference configuration and ensure that the changes meet your requirements. 22.3.1. Overview of the GitOps ZTP update process You can update GitOps Zero Touch Provisioning (ZTP) for a fully operational hub cluster running an earlier version of the GitOps ZTP infrastructure. The update process avoids impact on managed clusters. Note Any changes to policy settings, including adding recommended content, result in updated policies that must be rolled out to the managed clusters and reconciled. At a high level, the strategy for updating the GitOps ZTP infrastructure is as follows: Label all existing clusters with the ztp-done label. Stop the ArgoCD applications. Install the new GitOps ZTP tools. Update required content and optional changes in the Git repository. Update and restart the application configuration. 22.3.2. Preparing for the upgrade Use the following procedure to prepare your site for the GitOps Zero Touch Provisioning (ZTP) upgrade. Procedure Get the latest version of the GitOps ZTP container that has the custom resources (CRs) used to configure Red Hat OpenShift GitOps for use with GitOps ZTP.
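For example, you can pull the ztp-site-generate container image with podman. The version tag shown here matches the examples used elsewhere in this document; use the tag that corresponds to your target release:
$ podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14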
Extract the argocd/deployment directory by using the following commands: USD mkdir -p ./update USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./update The /update directory contains the following subdirectories: update/extra-manifest : contains the source CR files that the SiteConfig CR uses to generate the extra manifest configMap . update/source-crs : contains the source CR files that the PolicyGenTemplate CR uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. update/argocd/deployment : contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure. update/argocd/example : contains example SiteConfig and PolicyGenTemplate files that represent the recommended configuration. Update the clusters-app.yaml and policies-app.yaml files to reflect the name of your applications and the URL, branch, and path for your Git repository. If the upgrade includes changes that result in obsolete policies, remove the obsolete policies before performing the upgrade. Diff the changes between the configuration and deployment source CRs in the /update folder and the Git repo where you manage your fleet site CRs. Apply and push the required changes to your site repository. Important When you update GitOps ZTP to the latest version, you must apply the changes from the update/argocd/deployment directory to your site repository. Do not use older versions of the argocd/deployment/ files. 22.3.3. Labeling the existing clusters To ensure that existing clusters remain untouched by the tool updates, label all existing managed clusters with the ztp-done label. Note This procedure only applies when updating clusters that were not provisioned with Topology Aware Lifecycle Manager (TALM). Clusters that you provision with TALM are automatically labeled with ztp-done . Procedure Find a label selector that lists the managed clusters that were deployed with GitOps Zero Touch Provisioning (ZTP), such as local-cluster!=true : USD oc get managedcluster -l 'local-cluster!=true' Ensure that the resulting list contains all the managed clusters that were deployed with GitOps ZTP, and then use that selector to add the ztp-done label: USD oc label managedcluster -l 'local-cluster!=true' ztp-done= 22.3.4. Stopping the existing GitOps ZTP applications Removing the existing applications ensures that any changes to existing content in the Git repository are not rolled out until the new version of the tools is available. Use the application files from the deployment directory. If you used custom names for the applications, update the names in these files first. Procedure Perform a non-cascaded delete on the clusters application to leave all generated resources in place: USD oc delete -f update/argocd/deployment/clusters-app.yaml Perform a cascaded delete on the policies application to remove all policies: USD oc patch -f policies-app.yaml -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge USD oc delete -f update/argocd/deployment/policies-app.yaml 22.3.5. Required changes to the Git repository When upgrading the ztp-site-generate container from an earlier release of GitOps Zero Touch Provisioning (ZTP) to 4.10 or later, there are additional requirements for the contents of the Git repository. Existing content in the repository must be updated to reflect these changes.
Make required changes to PolicyGenTemplate files: All PolicyGenTemplate files must be created in a Namespace prefixed with ztp . This ensures that the GitOps ZTP application is able to manage the policy CRs generated by GitOps ZTP without conflicting with the way Red Hat Advanced Cluster Management (RHACM) manages the policies internally. Add the kustomization.yaml file to the repository: All SiteConfig and PolicyGenTemplate CRs must be included in a kustomization.yaml file under their respective directory trees. For example: ├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml Note The files listed in the generator sections must contain either SiteConfig or PolicyGenTemplate CRs only. If your existing YAML files contain other CRs, for example, Namespace , these other CRs must be pulled out into separate files and listed in the resources section. The PolicyGenTemplate kustomization file must contain all PolicyGenTemplate YAML files in the generator section and Namespace CRs in the resources section. For example: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml The SiteConfig kustomization file must contain all SiteConfig YAML files in the generator section and any other CRs in the resources: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml Remove the pre-sync.yaml and post-sync.yaml files. In OpenShift Container Platform 4.10 and later, the pre-sync.yaml and post-sync.yaml files are no longer required. The update/deployment/kustomization.yaml CR manages the policies deployment on the hub cluster. Note There is a set of pre-sync.yaml and post-sync.yaml files under both the SiteConfig and PolicyGenTemplate trees. Review and incorporate recommended changes Each release may include additional recommended changes to the configuration applied to deployed clusters. Typically these changes result in lower CPU use by the OpenShift platform, additional features, or improved tuning of the platform. Review the reference SiteConfig and PolicyGenTemplate CRs applicable to the types of cluster in your network. These examples can be found in the argocd/example directory extracted from the GitOps ZTP container. 22.3.6. Installing the new GitOps ZTP applications Using the extracted argocd/deployment directory, and after ensuring that the applications point to your site Git repository, apply the full contents of the deployment directory. Applying the full contents of the directory ensures that all necessary resources for the applications are correctly configured. Procedure To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the out/argocd/deployment/ directory for your environment. Select the multicluster-operators-subscription image that matches your RHACM version. For RHACM 2.8 and 2.9, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v<rhacm_version> image. 
For RHACM 2.10 and later, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<rhacm_version> image. Important The version of the multicluster-operators-subscription image must match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for multicluster-operators-subscription images. Click [Expand for Operator list] in the "Platform Aligned Operators" table in OpenShift Operator Life Cycles to view the complete supported Operators matrix for OpenShift Container Platform. Add the following configuration to the out/argocd/deployment/argocd-openshift-gitops-patch.json file: { "args": [ "-c", "mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" 1 ], "command": [ "/bin/bash" ], "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10", 2 3 "name": "policy-generator-install", "imagePullPolicy": "Always", "volumeMounts": [ { "mountPath": "/.config", "name": "kustomize" } ] } 1 Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version. 2 Match the multicluster-operators-subscription image to the RHACM version. 3 In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment. Patch the ArgoCD instance. Run the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed pods that are responsible for this add-on. Run the following command: USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json Apply the pipeline configuration to your hub cluster by running the following command: USD oc apply -k out/argocd/deployment 22.3.7. Rolling out the GitOps ZTP configuration changes If any configuration changes were included in the upgrade due to implementing recommended changes, the upgrade process results in a set of policy CRs on the hub cluster in the Non-Compliant state. With the GitOps Zero Touch Provisioning (ZTP) version 4.10 and later ztp-site-generate container, these policies are set to inform mode and are not pushed to the managed clusters without an additional step by the user. This ensures that potentially disruptive changes to the clusters can be managed in terms of when the changes are made, for example, during a maintenance window, and how many clusters are updated concurrently. To roll out the changes, create one or more ClusterGroupUpgrade CRs as detailed in the TALM documentation. The CR must contain the list of Non-Compliant policies that you want to push out to the managed clusters as well as a list or selector of which clusters should be included in the update. Additional resources For information about the Topology Aware Lifecycle Manager (TALM), see About the Topology Aware Lifecycle Manager configuration . 
For information about creating ClusterGroupUpgrade CRs, see About the auto-created ClusterGroupUpgrade CR for ZTP . 22.4. Installing managed clusters with RHACM and SiteConfig resources You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The GitOps Zero Touch Provisioning (ZTP) pipeline performs the cluster installations. GitOps ZTP can be used in a disconnected environment. 22.4.1. GitOps ZTP and Topology Aware Lifecycle Manager GitOps Zero Touch Provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the GitOps ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM. Inform policies By default, GitOps ZTP creates all policies with a remediation action of inform . These policies cause RHACM to report on compliance status of clusters relevant to the policies but does not apply the desired configuration. During the GitOps ZTP process, after OpenShift installation, the TALM steps through the created inform policies and enforces them on the target managed cluster(s). This applies the configuration to the managed cluster. Outside of the GitOps ZTP phase of the cluster lifecycle, this allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM. Automatic creation of ClusterGroupUpgrade CRs To automate the initial configuration of newly deployed clusters, TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics: The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace. ClusterGroupUpgrade CR has the same name as the ManagedCluster CR. The cluster selector includes only the cluster associated with that ManagedCluster CR. The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade is created. Pre-caching is disabled. Timeout set to 4 hours (240 minutes). The automatic creation of an enabled ClusterGroupUpgrade ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed GitOps ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster. Waves Each policy generated from a PolicyGenTemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR. 
Note All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation. The default value of this annotation for each CR can be overridden in the PolicyGenTemplate . The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime. The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the next policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account. Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves. To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image: USD grep -r "ztp-deploy-wave" out/source-crs Phase labels The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the GitOps ZTP process. When GitOps ZTP configuration postinstallation commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label. For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the GitOps ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial GitOps ZTP installation of the cluster. Linked CRs The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources. 22.4.2. Overview of deploying managed clusters with GitOps ZTP Red Hat Advanced Cluster Management (RHACM) uses GitOps Zero Touch Provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. GitOps ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters.
The deployment of the clusters includes: Installing the host operating system (RHCOS) on a blank server Deploying OpenShift Container Platform Creating cluster policies and site subscriptions Making the necessary network configurations to the server operating system Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV Overview of the managed site installation process After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically: A Discovery image ISO file is generated and booted on the target host. When the ISO file successfully boots on the target host it reports the host hardware information to RHACM. After all hosts are discovered, OpenShift Container Platform is installed. When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster. The requested add-on services are installed on the target cluster. The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster. Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads . 22.4.3. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 22.4.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.14, you can only add kernel arguments. You can not replace or delete kernel arguments. 
Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments. Save the following YAML in an InfraEnv-example.yaml file: Note The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the SiteConfig CR. The SiteConfig CR automatically populates values for these templates during deployment. Do not edit the templates manually. apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: "1" name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" spec: clusterRef: name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: "{{ .Site.SshPublicKey }}" proxy: "{{ .Cluster.ProxySettings }}" pullSecretRef: name: "{{ .Site.PullSecretRef.Name }}" ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}" nmStateConfigLabelSelector: matchLabels: nmstate-label: "{{ .Cluster.ClusterName }}" additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}" 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Commit the InfraEnv-example.yaml CR to the same location in your Git repository that has the SiteConfig CR and push your changes. The following example shows a sample Git repository structure: ~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml ... Edit the spec.clusters.crTemplates specification in the SiteConfig CR to reference the InfraEnv-example.yaml CR in your Git repository: clusters: crTemplates: InfraEnv: "InfraEnv-example.yaml" When you are ready to deploy your cluster by committing and pushing the SiteConfig CR, the build pipeline uses the custom InfraEnv-example CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 22.4.5. Deploying a managed cluster with SiteConfig and GitOps ZTP Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the GitOps Zero Touch Provisioning (ZTP) cluster deployment. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information. 
Note When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch-file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD". To be ready for provisioning managed clusters, you require the following for each bare-metal host: Network connectivity Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host. Baseboard Management Controller (BMC) details GitOps ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repo. You create individual BMCSecret CRs for each host manually. Procedure Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml , the cluster name and namespace is example-sno . Export the cluster namespace by running the following command: USD export CLUSTERNS=example-sno Create the namespace: USD oc create namespace USDCLUSTERNS Create pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information. Note The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace. Create a SiteConfig CR for your cluster in your local clone of the Git repository: Choose the appropriate example for your CR from the out/argocd/example/siteconfig/ folder. The folder includes example files for single node, three-node, and standard clusters: example-sno.yaml example-3node.yaml example-standard.yaml Change the cluster and host details in the example file to match the type of cluster you want. For example: Example single-node OpenShift SiteConfig CR # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.10" sshPublicKey: "ssh-rsa AAAA..." clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot bootMode: "UEFI" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Note For more information about BMC addressing, see the "Additional resources" section.
The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability. You can inspect the default set of extra-manifest MachineConfig CRs in out/argocd/extra-manifest . It is automatically applied to the cluster when it is installed. Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example, sno-extra-manifest/ , and add your custom manifest CRs to this directory. If your SiteConfig.yaml refers to this directory in the extraManifestPath field, any CRs in this referenced directory are appended to the default set of extra manifests. Enabling the crun OCI container runtime For optimal cluster performance, enable crun for master and worker nodes in single-node OpenShift, single-node OpenShift with additional worker nodes, three-node OpenShift, and standard clusters. Enable crun in a ContainerRuntimeConfig CR as an additional Day 0 install-time manifest to avoid the cluster having to reboot. The enable-crun-master.yaml and enable-crun-worker.yaml CR files are in the out/source-crs/optional-extra-manifest/ folder that you can extract from the ztp-site-generate container. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline". Add the SiteConfig CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/siteconfig/kustomization.yaml . Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. Verification Verify that the custom roles and labels are applied after the node is deployed: USD oc describe node example-node.example.com Example output Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos 1 The custom label is applied to the node. Additional resources Single-node OpenShift SiteConfig CR installation reference 22.4.5.1. Single-node OpenShift SiteConfig CR installation reference Table 22.6. SiteConfig CR installation options for single-node OpenShift clusters SiteConfig CR field Description spec.cpuPartitioningMode Configure workload partitioning by setting the value for cpuPartitioningMode to AllNodes . To complete the configuration, specify the isolated and reserved CPUs in the PerformanceProfile CR. Note Configuring workload partitioning by using the cpuPartitioningMode field in the SiteConfig CR is a Tech Preview feature in OpenShift Container Platform 4.13. metadata.name Set name to assisted-deployment-pull-secret and create the assisted-deployment-pull-secret CR in the same namespace as the SiteConfig CR. spec.clusterImageSetNameRef Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run oc get clusterimagesets . installConfigOverrides Set the installConfigOverrides field to enable or disable optional components prior to cluster installation. Important Use the reference configuration as specified in the example SiteConfig CR. 
Adding additional components back into the system might require additional reserved CPU capacity. spec.clusters.clusterImageSetNameRef Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the spec.clusterImageSetNameRef at site level. spec.clusters.clusterLabels Configure cluster labels to correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. For example, policygentemplates/common-ranGen.yaml applies to all clusters with common: true set, policygentemplates/group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set. spec.clusters.crTemplates.KlusterletAddonConfig Optional. Set KlusterletAddonConfig to KlusterletAddonConfigOverride.yaml to override the default KlusterletAddonConfig that is created for the cluster. spec.clusters.nodes.hostName For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker . spec.clusters.nodes.nodeLabels Specify custom roles for your nodes in your managed clusters. These additional roles are not used by any OpenShift Container Platform components, only by the user. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete. spec.clusters.nodes.automatedCleaningMode Optional. Uncomment and set the value to metadata to enable the removal of the disk's partitioning table only, without fully wiping the disk. The default value is disabled . spec.clusters.nodes.bmcAddress BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. Note In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. spec.clusters.nodes.bmcCredentialsName Configure the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host. spec.clusters.nodes.bootMode Set the boot mode for the host to UEFI . The default value is UEFI . Use UEFISecureBoot to enable secure boot on the host. spec.clusters.nodes.rootDeviceHints Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, wwn: <disk_wwn> or deviceName: /dev/disk/by-path/<device_path> . <by-path> values are preferred. For a detailed list of stable identifiers, see the "About root device hints" section. spec.clusters.nodes.ignitionConfigOverride Optional. Use this field to assign partitions for persistent storage. Adjust disk ID and size to the specific hardware. spec.clusters.nodes.nodeNetwork Configure the network settings for the node.
spec.clusters.nodes.nodeNetwork.config.interfaces.ipv6 Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. Additional resources Customizing extra installation manifests in the GitOps ZTP pipeline Preparing the GitOps ZTP site configuration repository Configuring the hub cluster with ArgoCD Signalling ZTP cluster deployment completion with validator inform policies Creating the managed bare-metal host secrets BMC addressing About root device hints 22.4.6. Monitoring managed cluster installation progress The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs it with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure When the synchronization is complete, the installation generally proceeds as follows: The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands: Export the cluster name: USD export CLUSTER=<clusterName> Query the AgentClusterInstall CR for the managed cluster: USD oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq Get the installation events for the cluster: USD curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]' 22.4.7. Troubleshooting GitOps ZTP by validating the installation CRs The ArgoCD pipeline uses the SiteConfig and PolicyGenTemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Check that the installation CRs were created by using the following command: USD oc get AgentClusterInstall -n <cluster_name> If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from SiteConfig files to the installation CRs. Verify that the ManagedCluster CR was generated using the SiteConfig CR on the hub cluster: USD oc get managedcluster If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster: USD oc get applications.argoproj.io -n openshift-gitops clusters -o yaml To identify error logs for the managed cluster, inspect the status.operationState.syncResult.resources field. For example, if an invalid value is assigned to the extraManifestPath in the SiteConfig CR, an error similar to the following is generated: syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. Make sure the "SiteConfig" CRD is installed on the destination cluster To see a more detailed SiteConfig error, complete the following steps: In the Argo CD dashboard, click the SiteConfig resource that Argo CD is trying to sync. Check the DESIRED MANIFEST tab to find the siteConfigError field. 
siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory Check the Status.Sync field. If there are log errors, the Status.Sync field could indicate an Unknown error: Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown 22.4.8. Troubleshooting GitOps ZTP virtual media booting on Supermicro servers SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Disable TLS in the Provisioning resource by running the following command: USD oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}' Continue the steps to deploy your single-node OpenShift cluster. 22.4.9. Removing a managed cluster site from the GitOps ZTP pipeline You can remove a managed site and the associated installation and configuration policy CRs from the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove a site and the associated CRs by removing the associated SiteConfig and PolicyGenTemplate files from the kustomization.yaml file. Add the following syncOptions field to your SiteConfig application: kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background When you run the GitOps ZTP pipeline again, the generated CRs are removed. Optional: If you want to permanently remove a site, you should also remove the SiteConfig and site-specific PolicyGenTemplate files from the Git repository. Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the SiteConfig and site-specific PolicyGenTemplate CRs in the Git repository. Additional resources For information about removing a cluster, see Removing a cluster from management . 22.4.10. Removing obsolete content from the GitOps ZTP pipeline If a change to the PolicyGenTemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove the affected PolicyGenTemplate files from the Git repository, commit and push to the remote repository. Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster. Add the updated PolicyGenTemplate files back to the Git repository, and then commit and push to the remote repository. Note Removing GitOps Zero Touch Provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. 
The policy and CRs managed by that policy remain in place on the managed cluster. Optional: As an alternative, after making changes to PolicyGenTemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command: USD oc delete policy -n <namespace> <policy_name> 22.4.11. Tearing down the GitOps ZTP pipeline You can remove the ArgoCD pipeline and all generated GitOps Zero Touch Provisioning (ZTP) artifacts. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster. Delete the kustomization.yaml file in the deployment directory using the following command: USD oc delete -k out/argocd/deployment Commit and push your changes to the site repository. 22.5. Configuring managed clusters with policies and PolicyGenTemplate resources Applied policy custom resources (CRs) configure the managed clusters that you provision. You can customize how Red Hat Advanced Cluster Management (RHACM) uses PolicyGenTemplate CRs to generate the applied policy CRs. 22.5.1. About the PolicyGenTemplate CRD The PolicyGenTemplate custom resource definition (CRD) tells the PolicyGen policy generator what custom resources (CRs) to include in the cluster configuration, how to combine the CRs into the generated policies, and what items in those CRs need to be updated with overlay content. The following example shows a PolicyGenTemplate CR ( common-du-ranGen.yaml ) extracted from the ztp-site-generate reference container. The common-du-ranGen.yaml file defines two Red Hat Advanced Cluster Management (RHACM) policies. The policies manage a collection of configuration CRs, one for each unique value of policyName in the CR. common-du-ranGen.yaml creates a single placement binding and a placement rule to bind the policies to clusters based on the labels listed in the bindingRules section.
Example PolicyGenTemplate CR - common-du-ranGen.yaml --- apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "common" namespace: "ztp-common" spec: bindingRules: common: "true" 1 sourceFiles: 2 - fileName: SriovSubscription.yaml policyName: "subscriptions-policy" - fileName: SriovSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: SriovSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: SriovOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: PtpSubscription.yaml policyName: "subscriptions-policy" - fileName: PtpSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: PtpSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: PtpOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: ClusterLogNS.yaml policyName: "subscriptions-policy" - fileName: ClusterLogOperGroup.yaml policyName: "subscriptions-policy" - fileName: ClusterLogSubscription.yaml policyName: "subscriptions-policy" - fileName: ClusterLogOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: StorageNS.yaml policyName: "subscriptions-policy" - fileName: StorageOperGroup.yaml policyName: "subscriptions-policy" - fileName: StorageSubscription.yaml policyName: "subscriptions-policy" - fileName: StorageOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: ReduceMonitoringFootprint.yaml policyName: "config-policy" - fileName: OperatorHub.yaml 3 policyName: "config-policy" - fileName: DefaultCatsrc.yaml 4 policyName: "config-policy" 5 metadata: name: redhat-operators spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: "config-policy" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io 1 common: "true" applies the policies to all clusters with this label. 2 Files listed under sourceFiles create the Operator policies for installed clusters. 3 OperatorHub.yaml configures the OperatorHub for the disconnected registry. 4 DefaultCatsrc.yaml configures the catalog source for the disconnected registry. 5 policyName: "config-policy" configures Operator subscriptions. The OperatorHub CR disables the default and this CR replaces redhat-operators with a CatalogSource CR that points to the disconnected registry. A PolicyGenTemplate CR can be constructed with any number of included CRs. Apply the following example CR in the hub cluster to generate a policy containing a single CR: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno" namespace: "ztp-group" spec: bindingRules: group-du-sno: "" mcp: "master" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f0" ptp4lOpts: "-2 -s --summary_interval -4" phc2sysOpts: "-a -r -n 24" Using the source file PtpConfigSlave.yaml as an example, the file defines a PtpConfig CR. The generated policy for the PtpConfigSlave example is named group-du-sno-config-policy . The PtpConfig CR defined in the generated group-du-sno-config-policy is named du-ptp-slave . The spec defined in PtpConfigSlave.yaml is placed under du-ptp-slave along with the other spec items defined under the source file. 
The following example shows the group-du-sno-config-policy CR: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..... 22.5.2. Recommendations when customizing PolicyGenTemplate CRs Consider the following best practices when customizing site configuration PolicyGenTemplate custom resources (CRs): Use as few policies as are necessary. Using fewer policies requires less resources. Each additional policy creates overhead for the hub cluster and the deployed managed cluster. CRs are combined into policies based on the policyName field in the PolicyGenTemplate CR. CRs in the same PolicyGenTemplate which have the same value for policyName are managed under a single policy. In disconnected environments, use a single catalog source for all Operators by configuring the registry as a single index containing all Operators. Each additional CatalogSource CR on the managed clusters increases CPU usage. MachineConfig CRs should be included as extraManifests in the SiteConfig CR so that they are applied during installation. This can reduce the overall time taken until the cluster is ready to deploy applications. PolicyGenTemplates should override the channel field to explicitly identify the desired version. This ensures that changes in the source CR during upgrades does not update the generated subscription. Additional resources For recommendations about scaling clusters with RHACM, see Performance and scalability . Note When managing large numbers of spoke clusters on the hub cluster, minimize the number of policies to reduce resource consumption. Grouping multiple configuration CRs into a single or limited number of policies is one way to reduce the overall number of policies on the hub cluster. When using the common, group, and site hierarchy of policies for managing site configuration, it is especially important to combine site-specific configuration into a single policy. 22.5.3. PolicyGenTemplate CRs for RAN deployments Use PolicyGenTemplate (PGT) custom resources (CRs) to customize the configuration applied to the cluster by using the GitOps Zero Touch Provisioning (ZTP) pipeline. The PGT CR allows you to generate one or more policies to manage the set of configuration CRs on your fleet of clusters. The PGT identifies the set of managed CRs, bundles them into policies, builds the policy wrapping around those CRs, and associates the policies with clusters by using label binding rules. 
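For illustration, the following minimal sketch shows how label binding works end to end. The cluster labels are taken from the example-sno SiteConfig example used elsewhere in this section; the PolicyGenTemplate name, namespace, and source file are placeholder values chosen for this sketch rather than part of the reference configuration:

# SiteConfig excerpt - labels that the SiteConfig applies to the managed cluster
clusters:
  - clusterName: "example-sno"
    clusterLabels:
      common: true
      group-du-sno: ""
      sites: "example-sno"

# PolicyGenTemplate sketch - bindingRules select clusters that carry a matching label
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "example-sno-site"
  namespace: "ztp-site"
spec:
  bindingRules:
    sites: "example-sno"
  sourceFiles:
    - fileName: SriovNetwork.yaml
      policyName: "config-policy"

All source files that share the same policyName value are wrapped into a single generated policy, and that policy is bound only to clusters whose labels satisfy the bindingRules.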
The reference configuration, obtained from the GitOps ZTP container, is designed to provide a set of critical features and node tuning settings that ensure the cluster can support the stringent performance and resource utilization constraints typical of RAN (Radio Access Network) Distributed Unit (DU) applications. Changes or omissions from the baseline configuration can affect feature availability, performance, and resource utilization. Use the reference PolicyGenTemplate CRs as the basis to create a hierarchy of configuration files tailored to your specific site requirements. The baseline PolicyGenTemplate CRs that are defined for RAN DU cluster configuration can be extracted from the GitOps ZTP ztp-site-generate container. See "Preparing the GitOps ZTP site configuration repository" for further details. The PolicyGenTemplate CRs can be found in the ./out/argocd/example/policygentemplates folder. The reference architecture has common, group, and site-specific configuration CRs. Each PolicyGenTemplate CR refers to other CRs that can be found in the ./out/source-crs folder. The PolicyGenTemplate CRs relevant to RAN cluster configuration are described below. Variants are provided for the group PolicyGenTemplate CRs to account for differences in single-node, three-node compact, and standard cluster configurations. Similarly, site-specific configuration variants are provided for single-node clusters and multi-node (compact or standard) clusters. Use the group and site-specific configuration variants that are relevant for your deployment. Table 22.7. PolicyGenTemplate CRs for RAN deployments PolicyGenTemplate CR Description example-multinode-site.yaml Contains a set of CRs that get applied to multi-node clusters. These CRs configure SR-IOV features typical for RAN installations. example-sno-site.yaml Contains a set of CRs that get applied to single-node OpenShift clusters. These CRs configure SR-IOV features typical for RAN installations. common-ranGen.yaml Contains a set of common RAN CRs that get applied to all clusters. These CRs subscribe to a set of operators providing cluster features typical for RAN as well as baseline cluster tuning. group-du-3node-ranGen.yaml Contains the RAN policies for three-node clusters only. group-du-sno-ranGen.yaml Contains the RAN policies for single-node clusters only. group-du-standard-ranGen.yaml Contains the RAN policies for standard three control-plane clusters. group-du-3node-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for three-node clusters. group-du-standard-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for standard clusters. group-du-sno-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for single-node OpenShift clusters. Additional resources Preparing the GitOps ZTP site configuration repository 22.5.4. Customizing a managed cluster with PolicyGenTemplate CRs Use the following procedure to customize the policies that get applied to the managed cluster that you provision using the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. 
The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a PolicyGenTemplate CR for site-specific configuration CRs. Choose the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, example-sno-site.yaml or example-multinode-site.yaml . Change the bindingRules field in the example file to match the site-specific label included in the SiteConfig CR. In the example SiteConfig file, the site-specific label is sites: example-sno . Note Ensure that the labels defined in your PolicyGenTemplate bindingRules field correspond to the labels that are defined in the related managed clusters SiteConfig CR. Change the content in the example file to match the desired configuration. Optional: Create a PolicyGenTemplate CR for any common configuration CRs that apply to the entire fleet of clusters. Select the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, common-ranGen.yaml . Change the content in the example file to match the desired configuration. Optional: Create a PolicyGenTemplate CR for any group configuration CRs that apply to the certain groups of clusters in the fleet. Ensure that the content of the overlaid spec files matches your desired end state. As a reference, the out/source-crs directory contains the full list of source-crs available to be included and overlaid by your PolicyGenTemplate templates. Note Depending on the specific requirements of your clusters, you might need more than a single group policy per cluster type, especially considering that the example group policies each have a single PerformancePolicy.yaml file that can only be shared across a set of clusters if those clusters consist of identical hardware configurations. Select the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, group-du-sno-ranGen.yaml . Change the content in the example file to match the desired configuration. Optional. Create a validator inform policy PolicyGenTemplate CR to signal when the GitOps ZTP installation and configuration of the deployed cluster is complete. For more information, see "Creating a validator inform policy". Define all the policy namespaces in a YAML file similar to the example out/argocd/example/policygentemplates/ns.yaml file. Important Do not include the Namespace CR in the same file with the PolicyGenTemplate CR. Add the PolicyGenTemplate CRs and Namespace CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/policygentemplates/kustomization.yaml . Commit the PolicyGenTemplate CRs, Namespace CR, and associated kustomization.yaml file in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. You can push the changes to the SiteConfig CR and the PolicyGenTemplate CR simultaneously. Additional resources Signalling ZTP cluster deployment completion with validator inform policies 22.5.5. Monitoring managed cluster policy deployment progress The ArgoCD pipeline uses PolicyGenTemplate CRs in Git to generate the RHACM policies and then sync them to the hub cluster. You can monitor the progress of the managed cluster policy synchronization after the assisted service installs OpenShift Container Platform on the managed cluster. Prerequisites You have installed the OpenShift CLI ( oc ). 
You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure The Topology Aware Lifecycle Manager (TALM) applies the configuration policies that are bound to the cluster. After the cluster installation is complete and the cluster becomes Ready , a ClusterGroupUpgrade CR corresponding to this cluster, with a list of ordered policies defined by the ran.openshift.io/ztp-deploy-wave annotations , is automatically created by the TALM. The cluster's policies are applied in the order listed in the ClusterGroupUpgrade CR. You can monitor the high-level progress of configuration policy reconciliation by using the following commands: USD export CLUSTER=<clusterName> USD oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq Example output { "lastTransitionTime": "2022-11-09T07:28:09Z", "message": "Remediating non-compliant policies", "reason": "InProgress", "status": "True", "type": "Progressing" } You can monitor the detailed cluster policy compliance status by using the RHACM dashboard or the command line. To check policy compliance by using oc , run the following command: USD oc get policies -n USDCLUSTER Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m To check policy status from the RHACM web console, perform the following actions: Click Governance Find policies . Click on a cluster policy to check its status. When all of the cluster policies become compliant, GitOps ZTP installation and configuration for the cluster is complete. The ztp-done label is added to the cluster. In the reference configuration, the final policy that becomes compliant is the one defined in the *-du-validator-policy policy. This policy, when compliant on a cluster, ensures that all cluster configuration, Operator installation, and Operator configuration are complete. 22.5.6. Validating the generation of configuration policy CRs Policy custom resources (CRs) are generated in the same namespace as the PolicyGenTemplate from which they are created. The same troubleshooting flow applies to all policy CRs generated from a PolicyGenTemplate regardless of whether they are ztp-common , ztp-group , or ztp-site based, as shown using the following commands: USD export NS=<namespace> USD oc get policy -n USDNS The expected set of policy-wrapped CRs should be displayed. If the policies failed synchronization, use the following troubleshooting steps. Procedure To display detailed information about the policies, run the following command: USD oc describe -n openshift-gitops application policies Check for Status: Conditions: to show the error logs.
For example, setting an invalid sourceFile->fileName: generates the error shown below: Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError Check for Status: Sync: . If there are log errors at Status: Conditions: , the Status: Sync: shows Unknown or Error : Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error When Red Hat Advanced Cluster Management (RHACM) recognizes that policies apply to a ManagedCluster object, the policy CR objects are applied to the cluster namespace. Check to see if the policies were copied to the cluster namespace: USD oc get policy -n USDCLUSTER Example output: NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d Ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d RHACM copies all applicable policies into the cluster namespace. The copied policy names have the format: <policyGenTemplate.Namespace>.<policyGenTemplate.Name>-<policyName> . Check the placement rule for any policies not copied to the cluster namespace. The matchSelector in the PlacementRule for those policies should match labels on the ManagedCluster object: USD oc get placementrule -n USDNS Note the PlacementRule name appropriate for the missing policy, common, group, or site, using the following command: USD oc get placementrule -n USDNS <placementRuleName> -o yaml The status-decisions should include your cluster name. The key-value pair of the matchSelector in the spec must match the labels on your managed cluster. Check the labels on the ManagedCluster object using the following command: USD oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq Check to see which policies are compliant using the following command: USD oc get policy -n USDCLUSTER If the Namespace , OperatorGroup , and Subscription policies are compliant but the Operator configuration policies are not, it is likely that the Operators did not install on the managed cluster. This causes the Operator configuration policies to fail to apply because the CRD is not yet applied to the spoke. 22.5.7. Restarting policy reconciliation You can restart policy reconciliation when unexpected compliance issues occur, for example, when the ClusterGroupUpgrade custom resource (CR) has timed out. 
Procedure A ClusterGroupUpgrade CR is generated in the namespace ztp-install by the Topology Aware Lifecycle Manager after the managed cluster becomes Ready : USD export CLUSTER=<clusterName> USD oc get clustergroupupgrades -n ztp-install USDCLUSTER If there are unexpected issues and the policies fail to become compliant within the configured timeout (the default is 4 hours), the status of the ClusterGroupUpgrade CR shows UpgradeTimedOut : USD oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Ready")]}' A ClusterGroupUpgrade CR in the UpgradeTimedOut state automatically restarts its policy reconciliation every hour. If you have changed your policies, you can start a retry immediately by deleting the existing ClusterGroupUpgrade CR. This triggers the automatic creation of a new ClusterGroupUpgrade CR that begins reconciling the policies immediately: USD oc delete clustergroupupgrades -n ztp-install USDCLUSTER Note that when the ClusterGroupUpgrade CR completes with status UpgradeCompleted and the managed cluster has the label ztp-done applied, you can make additional configuration changes using PolicyGenTemplate . Deleting the existing ClusterGroupUpgrade CR will not make the TALM generate a new CR. At this point, GitOps ZTP has completed its interaction with the cluster and any further interactions should be treated as an update and a new ClusterGroupUpgrade CR created for remediation of the policies. Additional resources For information about using Topology Aware Lifecycle Manager (TALM) to construct your own ClusterGroupUpgrade CR, see About the ClusterGroupUpgrade CR . 22.5.8. Changing applied managed cluster CRs using policies You can remove content from a custom resource (CR) that is deployed in a managed cluster through a policy. By default, all Policy CRs created from a PolicyGenTemplate CR have the complianceType field set to musthave . A musthave policy without the removed content is still compliant because the CR on the managed cluster has all the specified content. With this configuration, when you remove content from a CR, TALM removes the content from the policy but the content is not removed from the CR on the managed cluster. With the complianceType field set to mustonlyhave , the policy ensures that the CR on the cluster is an exact match of what is specified in the policy. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have deployed a managed cluster from a hub cluster running RHACM. You have installed Topology Aware Lifecycle Manager on the hub cluster. Procedure Remove the content that you no longer need from the affected CRs. In this example, the disableDrain: false line was removed from the SriovOperatorConfig CR. Example CR apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" disableDrain: true enableInjector: true enableOperatorWebhook: true Change the complianceType of the affected policies to mustonlyhave in the group-du-sno-ranGen.yaml file. Example YAML # ... - fileName: SriovOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave # ...
Create a ClusterGroupUpgrade CR and specify the clusters that must receive the CR changes: Example ClusterGroupUpgrade CR apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction: Create the ClusterGroupUpgrade CR by running the following command: USD oc create -f cgu-remove.yaml When you are ready to apply the changes, for example, during an appropriate maintenance window, change the value of the spec.enable field to true by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove \ --patch '{"spec":{"enable":true}}' --type=merge Verification Check the status of the policies by running the following command: USD oc get <kind> <changed_cr_name> Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h When the COMPLIANCE STATE of the policy is Compliant , it means that the CR is updated and the unwanted content is removed. Check that the policies are removed from the targeted clusters by running the following command on the managed clusters: USD oc get <kind> <changed_cr_name> If there are no results, the CR is removed from the managed cluster. 22.5.9. Indication of done for GitOps ZTP installations GitOps Zero Touch Provisioning (ZTP) simplifies the process of checking the GitOps ZTP installation status for a cluster. The GitOps ZTP status moves through three phases: cluster installation, cluster configuration, and GitOps ZTP done. Cluster installation phase The cluster installation phase is shown by the ManagedClusterJoined and ManagedClusterAvailable conditions in the ManagedCluster CR . If the ManagedCluster CR does not have these conditions, or the condition is set to False , the cluster is still in the installation phase. Additional details about installation are available from the AgentClusterInstall and ClusterDeployment CRs. For more information, see "Troubleshooting GitOps ZTP". Cluster configuration phase The cluster configuration phase is shown by a ztp-running label applied to the ManagedCluster CR for the cluster. GitOps ZTP done Cluster installation and configuration is complete in the GitOps ZTP done phase. This is shown by the removal of the ztp-running label and addition of the ztp-done label to the ManagedCluster CR. The ztp-done label shows that the configuration has been applied and the baseline DU configuration has completed cluster tuning. The transition to the GitOps ZTP done state is conditional on the compliant state of a Red Hat Advanced Cluster Management (RHACM) validator inform policy. This policy captures the existing criteria for a completed installation and validates that it moves to a compliant state only when GitOps ZTP provisioning of the managed cluster is complete. The validator inform policy ensures the configuration of the cluster is fully applied and Operators have completed their initialization. The policy validates the following: The target MachineConfigPool contains the expected entries and has finished updating. All nodes are available and not degraded. The SR-IOV Operator has completed initialization as indicated by at least one SriovNetworkNodeState with syncStatus: Succeeded . The PTP Operator daemon set exists. 22.6.
Manually installing a single-node OpenShift cluster with ZTP You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service. Note If you are creating multiple managed clusters, use the SiteConfig method described in Deploying far edge sites with ZTP . Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended cluster configuration for vDU application workloads . 22.6.1. Generating GitOps ZTP installation and configuration CRs manually Use the generator entrypoint for the ztp-site-generate container to generate the site installation and configuration custom resource (CRs) for a cluster based on SiteConfig and PolicyGenTemplate CRs. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create an output folder by running the following command: USD mkdir -p ./out Export the argocd directory from the ztp-site-generate container image: USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./out The ./out directory has the reference PolicyGenTemplate and SiteConfig CRs in the out/argocd/example/ folder. Example output out └── argocd └── example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml Create an output folder for the site installation CRs: USD mkdir -p ./site-install Modify the example SiteConfig CR for the cluster type that you want to install. Copy example-sno.yaml to site-1-sno.yaml and modify the CR to match the details of the site and bare-metal host that you want to install, for example: # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.10" sshPublicKey: "ssh-rsa AAAA..." clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot bootMode: "UEFI" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Note Once you have extracted reference CR configuration files from the out/extra-manifest directory of the ztp-site-generate container, you can use extraManifests.searchPaths to include the path to the git directory containing those files. This allows the GitOps ZTP pipeline to apply those CR files during cluster installation. If you configure a searchPaths directory, the GitOps ZTP pipeline does not fetch manifests from the ztp-site-generate container during site installation. Generate the Day 0 installation CRs by processing the modified SiteConfig CR site-1-sno.yaml by running the following command: USD podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator install site-1-sno.yaml /output Example output site-install └── site-1-sno ├── site-1_agentclusterinstall_example-sno.yaml ├── site-1-sno_baremetalhost_example-node1.example.com.yaml ├── site-1-sno_clusterdeployment_example-sno.yaml ├── site-1-sno_configmap_example-sno.yaml ├── site-1-sno_infraenv_example-sno.yaml ├── site-1-sno_klusterletaddonconfig_example-sno.yaml ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml ├── site-1-sno_managedcluster_example-sno.yaml ├── site-1-sno_namespace_example-sno.yaml └── site-1-sno_nmstateconfig_example-node1.example.com.yaml Optional: Generate just the Day 0 MachineConfig installation CRs for a particular cluster type by processing the reference SiteConfig CR with the -E option.
For example, run the following commands: Create an output folder for the MachineConfig CRs: USD mkdir -p ./site-machineconfig Generate the MachineConfig installation CRs: USD podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator install -E site-1-sno.yaml /output Example output site-machineconfig └── site-1-sno ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml Generate and export the Day 2 configuration CRs using the reference PolicyGenTemplate CRs from the step. Run the following commands: Create an output folder for the Day 2 CRs: USD mkdir -p ./ref Generate and export the Day 2 configuration CRs: USD podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator config -N . /output The command generates example group and site-specific PolicyGenTemplate CRs for single-node OpenShift, three-node clusters, and standard clusters in the ./ref folder. Example output ref └── customResource ├── common ├── example-multinode-site ├── example-sno ├── group-du-3node ├── group-du-3node-validator │ └── Multiple-validatorCRs ├── group-du-sno ├── group-du-sno-validator ├── group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs Use the generated CRs as the basis for the CRs that you use to install the cluster. You apply the installation CRs to the hub cluster as described in "Installing a single managed cluster". The configuration CRs can be applied to the cluster after cluster installation is complete. Verification Verify that the custom roles and labels are applied after the node is deployed: USD oc describe node example-node.example.com Example output Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos 1 The custom label is applied to the node. Additional resources Workload partitioning BMC addressing About root device hints Single-node OpenShift SiteConfig CR installation reference 22.6.2. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. 
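For reference, the following SiteConfig excerpt, based on the example-sno SiteConfig CR shown earlier in this section, shows where the two secret names are referenced. The secret and namespace names are the example values used throughout this section:

apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "example-sno"
  namespace: "example-sno" # both Secret CRs must be created in this namespace
spec:
  pullSecretRef:
    name: "assisted-deployment-pull-secret" # pull secret used by the assisted installer service
  clusters:
    - clusterName: "example-sno"
      nodes:
        - hostName: "example-node1.example.com"
          bmcCredentialsName:
            name: "example-node1-bmh-secret" # BMC username and password secret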
Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 22.6.3. Configuring Discovery ISO kernel arguments for manual installations using GitOps ZTP The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.14, you can only add kernel arguments. You can not replace or delete kernel arguments. Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. You have manually generated the installation and configuration custom resources (CRs). Procedure Edit the spec.kernelArguments specification in the InfraEnv CR to configure kernel arguments: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 clusterRef: name: <cluster_name> namespace: <cluster_name> pullSecretRef: name: pull-secret 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Note The SiteConfig CR generates the InfraEnv resource as part of the day-0 installation CRs. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 22.6.4. Installing a single managed cluster You can manually deploy a single managed cluster using the assisted service and Red Hat Advanced Cluster Management (RHACM). Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. 
You have created the baseboard management controller (BMC) Secret and the image pull-secret Secret custom resources (CRs). See "Creating the managed bare-metal host secrets" for details. Your target bare-metal host meets the networking and hardware requirements for managed clusters. Procedure Create a ClusterImageSet for each specific cluster version to be deployed, for example clusterImageSet-4.14.yaml . A ClusterImageSet has the following format: apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.14.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64 2 1 The descriptive version that you want to deploy. 2 Specifies the releaseImage to deploy and determines the operating system image version. The discovery ISO is based on the image version as set by releaseImage , or the latest version if the exact version is unavailable. Apply the clusterImageSet CR: USD oc apply -f clusterImageSet-4.14.yaml Create the Namespace CR in the cluster-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2 1 2 The name of the managed cluster to provision. Apply the Namespace CR by running the following command: USD oc apply -f cluster-namespace.yaml Apply the generated day-0 CRs that you extracted from the ztp-site-generate container and customized to meet your requirements: USD oc apply -R ./site-install/site-sno-1 Additional resources Connectivity prerequisites for managed cluster networks Deploying LVM Storage on single-node OpenShift clusters Configuring LVM Storage using PolicyGenTemplate CRs 22.6.5. Monitoring the managed cluster installation status Ensure that cluster provisioning was successful by checking the cluster status. Prerequisites All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster. Procedure Check the status of the managed cluster: USD oc get managedcluster True indicates the managed cluster is ready. Check the agent status: USD oc get agent -n <cluster_name> Use the describe command to provide an in-depth description of the agent's condition. Statuses to be aware of include BackendError , InputError , ValidationsFailing , InstallationFailed , and AgentIsConnected . These statuses are relevant to the Agent and AgentClusterInstall custom resources. USD oc describe agent -n <cluster_name> Check the cluster provisioning status: USD oc get agentclusterinstall -n <cluster_name> Use the describe command to provide an in-depth description of the cluster provisioning status: USD oc describe agentclusterinstall -n <cluster_name> Check the status of the managed cluster's add-on services: USD oc get managedclusteraddon -n <cluster_name> Retrieve the authentication information of the kubeconfig file for the managed cluster: USD oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig 22.6.6. Troubleshooting the managed cluster Use this procedure to diagnose any installation issues that might occur with the managed cluster. Procedure Check the status of the managed cluster: USD oc get managedcluster Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h If the status in the AVAILABLE column is True , the managed cluster is being managed by the hub. If the status in the AVAILABLE column is Unknown , the managed cluster is not being managed by the hub. 
Use the following steps to continue checking to get more information. Check the AgentClusterInstall install status: USD oc get clusterdeployment -n <cluster_name> Example output NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h If the status in the INSTALLED column is false , the installation was unsuccessful. If the installation failed, enter the following command to review the status of the AgentClusterInstall resource: USD oc describe agentclusterinstall -n <cluster_name> <cluster_name> Resolve the errors and reset the cluster: Remove the cluster's managed cluster resource: USD oc delete managedcluster <cluster_name> Remove the cluster's namespace: USD oc delete namespace <cluster_name> This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the ManagedCluster CR deletion to complete before proceeding. Recreate the custom resources for the managed cluster. 22.6.7. RHACM generated cluster installation CRs reference Red Hat Advanced Cluster Management (RHACM) supports deploying OpenShift Container Platform on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate using SiteConfig CRs for each site. Note Every managed cluster has its own namespace, and all of the installation CRs except for ManagedCluster and ClusterImageSet are under that namespace. ManagedCluster and ClusterImageSet are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name. The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters using the SiteConfig CRs that you configure. Table 22.8. Cluster installation CRs generated by RHACM CR Description Usage BareMetalHost Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol. InfraEnv Contains information for installing OpenShift Container Platform on the target bare-metal host. Used with ClusterDeployment to generate the discovery ISO for the managed cluster. AgentClusterInstall Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster kubeconfig and credentials when the installation is complete. Specifies the managed cluster configuration information and provides status during the installation of the cluster. ClusterDeployment References the AgentClusterInstall CR to use. Used with InfraEnv to generate the discovery ISO for the managed cluster. NMStateConfig Provides network configuration information such as MAC address to IP mapping, DNS server, default route, and other network settings. Sets up a static IP address for the managed cluster's Kube API server. Agent Contains hardware information about the target bare-metal host. Created automatically on the hub when the target machine's discovery image boots. ManagedCluster When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. The hub uses this resource to manage and show the status of managed clusters. KlusterletAddonConfig Contains the list of services provided by the hub to be deployed to the ManagedCluster resource. Tells the hub which addon services to deploy to the ManagedCluster resource. 
Namespace Logical space for ManagedCluster resources existing on the hub. Unique per site. Propagates resources to the ManagedCluster . Secret Two CRs are created: BMC Secret and Image Pull Secret . BMC Secret authenticates into the target bare-metal host using its username and password. Image Pull Secret contains authentication information for the OpenShift Container Platform image installed on the target bare-metal host. ClusterImageSet Contains OpenShift Container Platform image information such as the repository and image name. Passed into resources to provide OpenShift Container Platform images. 22.7. Recommended single-node OpenShift cluster configuration for vDU application workloads Use the following reference information to understand the single-node OpenShift configurations required to deploy virtual distributed unit (vDU) applications in the cluster. Configurations include cluster optimizations for high performance workloads, enabling workload partitioning, and minimizing the number of reboots required postinstallation. Additional resources To deploy a single cluster by hand, see Manually installing a single-node OpenShift cluster with GitOps ZTP . To deploy a fleet of clusters using GitOps Zero Touch Provisioning (ZTP), see Deploying far edge sites with GitOps ZTP . 22.7.1. Running low latency applications on OpenShift Container Platform OpenShift Container Platform enables low latency processing for applications running on commercial off-the-shelf (COTS) hardware by using several technologies and specialized hardware devices: Real-time kernel for RHCOS Ensures workloads are handled with a high degree of process determinism. CPU isolation Avoids CPU scheduling delays and ensures CPU capacity is available consistently. NUMA-aware topology management Aligns memory and huge pages with CPU and PCI devices to pin guaranteed container memory and huge pages to the non-uniform memory access (NUMA) node. Pod resources for all Quality of Service (QoS) classes stay on the same NUMA node. This decreases latency and improves performance of the node. Huge pages memory management Using huge page sizes improves system performance by reducing the amount of system resources required to access page tables. Precision timing synchronization using PTP Allows synchronization between nodes in the network with sub-microsecond accuracy. 22.7.2. Recommended cluster host requirements for vDU application workloads Running vDU application workloads requires a bare-metal host with sufficient resources to run OpenShift Container Platform services and production workloads. Table 22.9. Minimum resource requirements Profile vCPU Memory Storage Minimum 4 to 8 vCPU 32GB of RAM 120GB Note One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core: (threads per core x cores) x sockets = vCPUs Important The server must have a Baseboard Management Controller (BMC) when booting with virtual media. 22.7.3. Configuring host firmware for low latency and high performance Bare-metal hosts require the firmware to be configured before the host can be provisioned. The firmware configuration is dependent on the specific hardware and the particular requirements of your installation. Procedure Set the UEFI/BIOS Boot Mode to UEFI . In the host boot sequence order, set Hard drive first . Apply the specific firmware configuration for your hardware. 
The following table describes a representative firmware configuration for an Intel Xeon Skylake or Intel Cascade Lake server, based on the Intel FlexRAN 4G and 5G baseband PHY reference design. Important The exact firmware configuration depends on your specific hardware and network requirements. The following sample configuration is for illustrative purposes only. Table 22.10. Sample firmware configuration for an Intel Xeon Skylake or Cascade Lake server Firmware setting Configuration CPU Power and Performance Policy Performance Uncore Frequency Scaling Disabled Performance P-limit Disabled Enhanced Intel SpeedStep (R) Tech Enabled Intel Configurable TDP Enabled Configurable TDP Level Level 2 Intel(R) Turbo Boost Technology Enabled Energy Efficient Turbo Disabled Hardware P-States Disabled Package C-State C0/C1 state C1E Disabled Processor C6 Disabled Note Enable global SR-IOV and VT-d settings in the firmware for the host. These settings are relevant to bare-metal environments. 22.7.4. Connectivity prerequisites for managed cluster networks Before you can install and provision a managed cluster with the GitOps Zero Touch Provisioning (ZTP) pipeline, the managed cluster host must meet the following networking prerequisites: There must be bi-directional connectivity between the GitOps ZTP container in the hub cluster and the Baseboard Management Controller (BMC) of the target bare-metal host. The managed cluster must be able to resolve and reach the API hostname of the hub hostname and *.apps hostname. Here is an example of the API hostname of the hub and *.apps hostname: api.hub-cluster.internal.domain.com console-openshift-console.apps.hub-cluster.internal.domain.com The hub cluster must be able to resolve and reach the API and *.apps hostname of the managed cluster. Here is an example of the API hostname of the managed cluster and *.apps hostname: api.sno-managed-cluster-1.internal.domain.com console-openshift-console.apps.sno-managed-cluster-1.internal.domain.com 22.7.5. Workload partitioning in single-node OpenShift with GitOps ZTP Workload partitioning configures OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved number of host CPUs. To configure workload partitioning with GitOps Zero Touch Provisioning (ZTP), you configure a cpuPartitioningMode field in the SiteConfig custom resource (CR) that you use to install the cluster and you apply a PerformanceProfile CR that configures the isolated and reserved CPUs on the host. Configuring the SiteConfig CR enables workload partitioning at cluster installation time and applying the PerformanceProfile CR configures the specific allocation of CPUs to reserved and isolated sets. Both of these steps happen at different points during cluster provisioning. Note Configuring workload partitioning by using the cpuPartitioningMode field in the SiteConfig CR is a Tech Preview feature in OpenShift Container Platform 4.13. Alternatively, you can specify cluster management CPU resources with the cpuset field of the SiteConfig custom resource (CR) and the reserved field of the group PolicyGenTemplate CR. The GitOps ZTP pipeline uses these values to populate the required fields in the workload partitioning MachineConfig CR ( cpuset ) and the PerformanceProfile CR ( reserved ) that configure the single-node OpenShift cluster. This method is a General Availability feature in OpenShift Container Platform 4.14. 
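The following sketch shows where these two values are typically set. The CPU ranges match the verification output used later in this section (reserved CPUs 0-1,52-53), but the isolated range and the exact field placement shown here are illustrative assumptions based on the reference examples, not a complete configuration:

# SiteConfig excerpt (sketch) - cpuset populates the workload partitioning MachineConfig
nodes:
  - hostName: "example-node1.example.com"
    cpuset: "0-1,52-53"

# Group PolicyGenTemplate excerpt (sketch) - reserved populates the PerformanceProfile CR
- fileName: PerformanceProfile.yaml
  policyName: "config-policy"
  spec:
    cpu:
      reserved: "0-1,52-53"
      isolated: "2-51,54-103" # example isolated set; must not overlap with reserved

In practice, the cpuset value should match the reserved CPU set so that management workloads and platform services are pinned to the same cores.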
The workload partitioning configuration pins the OpenShift Container Platform infrastructure pods to the reserved CPU set. Platform services such as systemd, CRI-O, and kubelet run on the reserved CPU set. The isolated CPU sets are exclusively allocated to your container workloads. Isolating CPUs ensures that the workload has guaranteed access to the specified CPUs without contention from other applications running on the same node. All CPUs that are not isolated should be reserved.

Important Ensure that the reserved and isolated CPU sets do not overlap with each other.

Additional resources

For the recommended single-node OpenShift workload partitioning configuration, see Workload partitioning .

22.7.6. Recommended cluster install manifests

The ZTP pipeline applies the following custom resources (CRs) during cluster installation. These configuration CRs ensure that the cluster meets the feature and performance requirements necessary for running a vDU application.

Note When using the GitOps ZTP plugin and SiteConfig CRs for cluster deployment, the following MachineConfig CRs are included by default. Use the SiteConfig extraManifests filter to alter the CRs that are included by default. For more information, see Advanced managed cluster configuration with SiteConfig CRs .

22.7.6.1. Workload partitioning

Single-node OpenShift clusters that run DU workloads require workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU cores available for application payloads.

Note Workload partitioning can be enabled during cluster installation only. You cannot disable workload partitioning postinstallation. You can, however, change the set of CPUs assigned to the isolated and reserved sets through the PerformanceProfile CR. Changes to CPU settings cause the node to reboot.

Upgrading from OpenShift Container Platform 4.12 to 4.13+

When transitioning to using cpuPartitioningMode for enabling workload partitioning, remove the workload partitioning MachineConfig CRs from the /extra-manifest folder that you use to provision the cluster.

Recommended SiteConfig CR configuration for workload partitioning

apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "<site_name>"
  namespace: "<site_name>"
spec:
  baseDomain: "example.com"
  cpuPartitioningMode: AllNodes 1

1 Set the cpuPartitioningMode field to AllNodes to configure workload partitioning for all nodes in the cluster.

Verification

Check that the applications and cluster system CPU pinning is correct. Run the following commands:

Open a remote shell prompt to the managed cluster:

$ oc debug node/example-sno-1

Check that the OpenShift infrastructure applications CPU pinning is correct:

sh-4.4# pgrep ovn | while read i; do taskset -cp $i; done

Example output

pid 8481's current affinity list: 0-1,52-53
pid 8726's current affinity list: 0-1,52-53
pid 9088's current affinity list: 0-1,52-53
pid 9945's current affinity list: 0-1,52-53
pid 10387's current affinity list: 0-1,52-53
pid 12123's current affinity list: 0-1,52-53
pid 13313's current affinity list: 0-1,52-53

Check that the system applications CPU pinning is correct:

sh-4.4# pgrep systemd | while read i; do taskset -cp $i; done

Example output

pid 1's current affinity list: 0-1,52-53
pid 938's current affinity list: 0-1,52-53
pid 962's current affinity list: 0-1,52-53
pid 1197's current affinity list: 0-1,52-53

22.7.6.2.
Reduced platform management footprint To reduce the overall management footprint of the platform, a MachineConfig custom resource (CR) is required that places all Kubernetes-specific mount points in a new namespace separate from the host operating system. The following base64-encoded example MachineConfig CR illustrates this configuration. Recommended container mount namespace configuration ( 01-container-mount-ns-and-kubelet-conf-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] 
ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 22.7.6.3. SCTP Stream Control Transmission Protocol (SCTP) is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable this protocol. Recommended control plane node SCTP configuration ( 03-sctp-machine-config-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf Recommended worker node SCTP configuration ( 03-sctp-machine-config-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 22.7.6.4. Setting rcu_normal The following MachineConfig CR configures the system to set rcu_normal to 1 after the system has finished startup. This improves kernel latency for vDU applications. 
Recommended configuration for disabling rcu_expedited after the node has finished startup ( 08-set-rcu-normal-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L2
51bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service 22.7.6.5. Automatic kernel crash dumps with kdump kdump is a Linux kernel feature that creates a kernel crash dump when the kernel crashes. kdump is enabled with the following MachineConfig CRs. 
Recommended MachineConfig CR to remove ice driver from control plane kdump logs ( 05-kdump-config-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh Recommended control plane node kdump configuration ( 06-kdump-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M Recommended MachineConfig CR to remove ice driver from worker node kdump logs ( 05-kdump-config-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: 
data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh Recommended kdump worker node configuration ( 06-kdump-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 22.7.6.6. Disable automatic CRI-O cache wipe After an uncontrolled host shutdown or cluster reboot, CRI-O automatically deletes the entire CRI-O cache, causing all images to be pulled from the registry when the node reboots. This can result in unacceptably slow recovery times or recovery failures. To prevent this from happening in single-node OpenShift clusters that you install with GitOps ZTP, disable the CRI-O delete cache feature during cluster installation. Recommended MachineConfig CR to disable CRI-O cache wipe on control plane nodes ( 99-crio-disable-wipe-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml Recommended MachineConfig CR to disable CRI-O cache wipe on worker nodes ( 99-crio-disable-wipe-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml 22.7.6.7. Configuring crun as the default container runtime The following ContainerRuntimeConfig custom resources (CRs) configure crun as the default OCI container runtime for control plane and worker nodes. The crun container runtime is fast and lightweight and has a low memory footprint. Important For optimal performance, enable crun for control plane and worker nodes in single-node OpenShift, three-node OpenShift, and standard clusters. 
To avoid the cluster rebooting when the CR is applied, apply the change as a GitOps ZTP additional Day 0 install-time manifest. Recommended ContainerRuntimeConfig CR for control plane nodes ( enable-crun-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: "" containerRuntimeConfig: defaultRuntime: crun Recommended ContainerRuntimeConfig CR for worker nodes ( enable-crun-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" containerRuntimeConfig: defaultRuntime: crun 22.7.7. Recommended postinstallation cluster configurations When the cluster installation is complete, the ZTP pipeline applies the following custom resources (CRs) that are required to run DU workloads. Note In GitOps ZTP v4.10 and earlier, you configure UEFI secure boot with a MachineConfig CR. This is no longer required in GitOps ZTP v4.11 and later. In v4.11, you configure UEFI secure boot for single-node OpenShift clusters by updating the spec.clusters.nodes.bootMode field in the SiteConfig CR that you use to install the cluster. For more information, see Deploying a managed cluster with SiteConfig and GitOps ZTP . 22.7.7.1. Operators Single-node OpenShift clusters that run DU workloads require the following Operators to be installed: Local Storage Operator Logging Operator PTP Operator SR-IOV Network Operator You also need to configure a custom CatalogSource CR, disable the default OperatorHub configuration, and configure an ImageContentSourcePolicy mirror registry that is accessible from the clusters that you install. 
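After these resources are applied to the cluster, you can confirm that the custom catalog source and the mirror configuration are present by using standard oc queries. The resource names below match the CatalogSource and ImageContentSourcePolicy examples that follow, and the output is illustrative only:

$ oc get catalogsource -n openshift-marketplace

Example output

NAME                 DISPLAY              TYPE   PUBLISHER   AGE
default-cat-source   default-cat-source   grpc   Red Hat     4h

$ oc get imagecontentsourcepolicy

Example output

NAME                         AGE
disconnected-internal-icsp   4h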
Recommended Storage Operator namespace and Operator group configuration ( StorageNS.yaml , StorageOperGroup.yaml ) --- apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage Recommended Cluster Logging Operator namespace and Operator group configuration ( ClusterLogNS.yaml , ClusterLogOperGroup.yaml ) --- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging Recommended PTP Operator namespace and Operator group configuration ( PtpSubscriptionNS.yaml , PtpSubscriptionOperGroup.yaml ) --- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp Recommended SR-IOV Operator namespace and Operator group configuration ( SriovSubscriptionNS.yaml , SriovSubscriptionOperGroup.yaml ) --- apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator Recommended CatalogSource configuration ( DefaultCatsrc.yaml ) apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY Recommended ImageContentSourcePolicy configuration ( DisconnectedICSP.yaml ) apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors Recommended OperatorHub configuration ( OperatorHub.yaml ) apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true 22.7.7.2. Operator subscriptions Single-node OpenShift clusters that run DU workloads require the following Subscription CRs. The subscription provides the location to download the following Operators: Local Storage Operator Logging Operator PTP Operator SR-IOV Network Operator SRIOV-FEC Operator For each Operator subscription, specify the channel to get the Operator from. The recommended channel is stable . You can specify Manual or Automatic updates. In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only when they are explicitly approved. Tip Use Manual mode for subscriptions. 
This allows you to control the timing of Operator updates to fit within scheduled maintenance windows. Recommended Local Storage Operator subscription ( StorageSubscription.yaml ) apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: "stable" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Recommended SR-IOV Operator subscription ( SriovSubscription.yaml ) apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: "stable" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Recommended PTP Operator subscription ( PtpSubscription.yaml ) --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: "stable" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Recommended Cluster Logging Operator subscription ( ClusterLogSubscription.yaml ) apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: "stable" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 22.7.7.3. Cluster logging and log forwarding Single-node OpenShift clusters that run DU workloads require logging and log forwarding for debugging. The following ClusterLogging and ClusterLogForwarder custom resources (CRs) are required. Recommended cluster logging configuration ( ClusterLogging.yaml ) apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: "Managed" collection: logs: type: "vector" Recommended log forwarding configuration ( ClusterLogForwarder.yaml ) apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines Set the spec.outputs.url field to the URL of the Kafka server where the logs are forwarded to. 22.7.7.4. Performance profile Single-node OpenShift clusters that run DU workloads require a Node Tuning Operator performance profile to use real-time host capabilities and services. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. The following example PerformanceProfile CR illustrates the required single-node OpenShift cluster configuration. 
Recommended performance profile configuration ( PerformanceProfile.yaml ) apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: "" numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false Table 22.11. PerformanceProfile CR options for single-node OpenShift clusters PerformanceProfile CR field Description metadata.name Ensure that name matches the following fields set in related GitOps ZTP custom resources (CRs): include=openshift-node-performance-USD{PerformanceProfile.metadata.name} in TunedPerformancePatch.yaml name: 50-performance-USD{PerformanceProfile.metadata.name} in validatorCRs/informDuValidator.yaml spec.additionalKernelArgs "efi=runtime" Configures UEFI secure boot for the cluster host. spec.cpu.isolated Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match. Important The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause an undefined behaviour in the system. spec.cpu.reserved Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved. spec.hugepages.pages Set the number of huge pages ( count ) Set the huge pages size ( size ). Set node to the NUMA node where the hugepages are allocated ( node ) spec.realTimeKernel Set enabled to true to use the realtime kernel. spec.workloadHints Use workloadHints to define the set of top level flags for different type of workloads. The example configuration configures the cluster for low latency and high performance. 22.7.7.5. Configuring cluster time synchronization Run a one-time system time synchronization job for control plane or worker nodes. 
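Note that the extended Tuned profile described later in this section stops and disables the chronyd service. The one-shot job below runs chronyd once at boot so that the system clock is approximately correct before PTP takes over ongoing time synchronization.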
Recommended one time time-sync for control plane nodes ( 99-sync-time-once-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service Recommended one time time-sync for worker nodes ( 99-sync-time-once-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 22.7.7.6. PTP Single-node OpenShift clusters use Precision Time Protocol (PTP) for network time synchronization. The following example PtpConfig CRs illustrate the required PTP configurations for ordinary clocks, boundary clocks, and grandmaster clocks. The exact configuration you apply will depend on the node hardware and specific use case. Recommended PTP ordinary clock configuration ( PtpConfigSlave.yaml ) apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: "ordinary" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median 
delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "ordinary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Recommended boundary clock configuration ( PtpConfigBoundary.yaml ) apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary" ptp4lOpts: "-2" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "boundary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Recommended PTP Westport Channel e810 grandmaster clock configuration ( PtpConfigGmWpc.yaml ) # The grandmaster profile is provided for testing only # It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: -r -u 0 -m -O -37 -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # "USDiface_master": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "0 1" ublxCmds: - args: 
#ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - "-P" - "29.20" - "-e" - "GPS" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - "-P" - "29.20" - "-d" - "Galileo" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - "-P" - "29.20" - "-d" - "GLONASS" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - "-P" - "29.20" - "-d" - "BeiDou" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - "-P" - "29.20" - "-d" - "SBAS" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - "-P" - "29.20" - "-t" - "-w" - "5" - "-v" - "1" - "-e" - "SURVEYIN,600,50000" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - "-P" - "29.20" - "-p" - "MON-HW" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,300" reportOutput: true ts2phcOpts: " " ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #GNSS module s /dev/ttyGNSS* -al use _0 #cat /dev/ttyGNSS_1700_0 to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" The following optional PtpOperatorConfig CR configures PTP events reporting for the node. 
Recommended PTP events configuration ( PtpOperatorConfigForEvent.yaml ) apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: "" ptpEventConfig: enableEventPublisher: true transportHost: "http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" 22.7.7.7. Extended Tuned profile Single-node OpenShift clusters that run DU workloads require additional performance tuning configurations necessary for high-performance workloads. The following example Tuned CR extends the Tuned profile: Recommended extended Tuned profile configuration ( TunedPerformancePatch.yaml ) apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "USDmcp" priority: 19 profile: performance-patch Table 22.12. Tuned CR options for single-node OpenShift clusters Tuned CR field Description spec.profile.data The include line that you set in spec.profile.data must match the associated PerformanceProfile CR name. For example, include=openshift-node-performance-USD{PerformanceProfile.metadata.name} . When using the non-realtime kernel, remove the timer_migration override line from the [sysctl] section. 22.7.7.8. SR-IOV Single root I/O virtualization (SR-IOV) is commonly used to enable fronthaul and midhaul networks. The following YAML example configures SR-IOV for a single-node OpenShift cluster. Note The configuration of the SriovNetwork CR will vary depending on your specific network and infrastructure requirements. Recommended SriovOperatorConfig CR configuration ( SriovOperatorConfig.yaml ) apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" # Injector and OperatorWebhook pods can be disabled (set to "false") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the "requests"/"limits" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: "1" # requests: # openshift.io/<resource_name>: "1" enableInjector: true enableOperatorWebhook: true logLevel: 0 Table 22.13. 
SriovOperatorConfig CR options for single-node OpenShift clusters SriovOperatorConfig CR field Description spec.enableInjector Disable Injector pods to reduce the number of management pods. Start with the Injector pods enabled, and only disable them after verifying the user manifests. If the injector is disabled, containers that use SR-IOV resources must explicitly assign them in the requests and limits section of the container spec. For example: containers: - name: my-sriov-workload-container resources: limits: openshift.io/<resource_name>: "1" requests: openshift.io/<resource_name>: "1" spec.enableOperatorWebhook Disable OperatorWebhook pods to reduce the number of management pods. Start with the OperatorWebhook pods enabled, and only disable them after verifying the user manifests. Recommended SriovNetwork configuration ( SriovNetwork.yaml ) apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: "" networkNamespace: openshift-sriov-network-operator # vlan: "" # spoofChk: "" # ipam: "" # linkState: "" # maxTxRate: "" # minTxRate: "" # vlanQoS: "" # trust: "" # capabilities: "" Table 22.14. SriovNetwork CR options for single-node OpenShift clusters SriovNetwork CR field Description spec.vlan Configure vlan with the VLAN for the midhaul network. Recommended SriovNetworkNodePolicy CR configuration ( SriovNetworkNodePolicy.yaml ) apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: "" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName Table 22.15. SriovNetworkPolicy CR options for single-node OpenShift clusters SriovNetworkNodePolicy CR field Description spec.deviceType Configure deviceType as vfio-pci or netdevice . For Mellanox NICs, set deviceType: netdevice , and isRdma: true . For Intel based NICs, set deviceType: vfio-pci and isRdma: false . spec.nicSelector.pfNames Specifies the interface connected to the fronthaul network. spec.numVfs Specifies the number of VFs for the fronthaul network. spec.nicSelector.pfNames The exact name of physical function must match the hardware. Recommended SR-IOV kernel configurations ( 07-sriov-related-kernel-args-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt 22.7.7.9. Console Operator Use the cluster capabilities feature to prevent the Console Operator from being installed. When the node is centrally managed it is not needed. Removing the Operator provides additional space and capacity for application workloads. To disable the Console Operator during the installation of the managed cluster, set the following in the spec.clusters.0.installConfigOverrides field of the SiteConfig custom resource (CR): installConfigOverrides: "{\"capabilities\":{\"baselineCapabilitySet\": \"None\" }}" 22.7.7.10. 
Alertmanager Single-node OpenShift clusters that run DU workloads require reduced CPU resources consumed by the OpenShift Container Platform monitoring components. The following ConfigMap custom resource (CR) disables Alertmanager. Recommended cluster monitoring configuration ( ReduceMonitoringFootprint.yaml ) apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | grafana: enabled: false alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h 22.7.7.11. Operator Lifecycle Manager Single-node OpenShift clusters that run distributed unit workloads require consistent access to CPU resources. Operator Lifecycle Manager (OLM) collects performance data from Operators at regular intervals, resulting in an increase in CPU utilisation. The following ConfigMap custom resource (CR) disables the collection of Operator performance data by OLM. Recommended cluster OLM configuration ( ReduceOLMFootprint.yaml ) apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager data: pprof-config.yaml: | disabled: True 22.7.7.12. LVM Storage You can dynamically provision local storage on single-node OpenShift clusters with Logical Volume Manager (LVM) Storage. Note The recommended storage solution for single-node OpenShift is the Local Storage Operator. Alternatively, you can use LVM Storage but it requires additional CPU resources to be allocated. The following YAML example configures the storage of the node to be available to OpenShift Container Platform applications. Recommended LVMCluster configuration ( StorageLVMCluster.yaml ) apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: odf-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 deviceSelector: paths: - /usr/disk/by-path/pci-0000:11:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 overprovisionRatio: 10 sizePercent: 90 Table 22.16. LVMCluster CR options for single-node OpenShift clusters LVMCluster CR field Description deviceSelector.paths Configure the disks used for LVM storage. If no disks are specified, the LVM Storage uses all the unused disks in the specified thin pool. 22.7.7.13. Network diagnostics Single-node OpenShift clusters that run DU workloads require less inter-pod network connectivity checks to reduce the additional load created by these pods. The following custom resource (CR) disables these checks. Recommended network diagnostics configuration ( DisableSnoNetworkDiag.yaml ) apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true Additional resources Deploying far edge sites using ZTP 22.8. Validating single-node OpenShift cluster tuning for vDU application workloads Before you can deploy virtual distributed unit (vDU) applications, you need to tune and configure the cluster host firmware and various other cluster configuration settings. Use the following information to validate the cluster configuration to support vDU workloads. Additional resources Workload partitioning in single-node OpenShift with GitOps ZTP Reference configuration for deploying vDUs on single-node OpenShift 22.8.1. Recommended firmware configuration for vDU cluster hosts Use the following table as the basis to configure the cluster host firmware for vDU applications running on OpenShift Container Platform 4.14. 
Note The following table is a general recommendation for vDU cluster host firmware configuration. Exact firmware settings will depend on your requirements and specific hardware platform. Automatic setting of firmware is not handled by the zero touch provisioning pipeline. Table 22.17. Recommended cluster host firmware settings Firmware setting Configuration Description HyperTransport (HT) Enabled HyperTransport (HT) bus is a bus technology developed by AMD. HT provides a high-speed link between the components in the host memory and other system peripherals. UEFI Enabled Enable booting from UEFI for the vDU host. CPU Power and Performance Policy Performance Set CPU Power and Performance Policy to optimize the system for performance over energy efficiency. Uncore Frequency Scaling Disabled Disable Uncore Frequency Scaling to prevent the voltage and frequency of non-core parts of the CPU from being set independently. Uncore Frequency Maximum Sets the non-core parts of the CPU such as cache and memory controller to their maximum possible frequency of operation. Performance P-limit Disabled Disable Performance P-limit to prevent the Uncore frequency coordination of processors. Enhanced Intel(R) SpeedStep Tech Enabled Enable Enhanced Intel SpeedStep to allow the system to dynamically adjust processor voltage and core frequency that decreases power consumption and heat production in the host. Intel(R) Turbo Boost Technology Enabled Enable Turbo Boost Technology for Intel-based CPUs to automatically allow processor cores to run faster than the rated operating frequency if they are operating below power, current, and temperature specification limits. Intel Configurable TDP Enabled Enables Thermal Design Power (TDP) for the CPU. Configurable TDP Level Level 2 TDP level sets the CPU power consumption required for a particular performance rating. TDP level 2 sets the CPU to the most stable performance level at the cost of power consumption. Energy Efficient Turbo Disabled Disable Energy Efficient Turbo to prevent the processor from using an energy-efficiency based policy. Hardware P-States Enabled or Disabled Enable OS-controlled P-States to allow power saving configurations. Disable P-states (performance states) to optimize the operating system and CPU for performance over power consumption. Package C-State C0/C1 state Use C0 or C1 states to set the processor to a fully active state (C0) or to stop CPU internal clocks running in software (C1). C1E Disabled CPU Enhanced Halt (C1E) is a power saving feature in Intel chips. Disabling C1E prevents the operating system from sending a halt command to the CPU when inactive. Processor C6 Disabled C6 power-saving is a CPU feature that automatically disables idle CPU cores and cache. Disabling C6 improves system performance. Sub-NUMA Clustering Disabled Sub-NUMA clustering divides the processor cores, cache, and memory into multiple NUMA domains. Disabling this option can increase performance for latency-sensitive workloads. Note Enable global SR-IOV and VT-d settings in the firmware for the host. These settings are relevant to bare-metal environments. Note Enable both C-states and OS-controlled P-States to allow per pod power management. 22.8.2. Recommended cluster configurations to run vDU applications Clusters running virtualized distributed unit (vDU) applications require a highly tuned and optimized configuration. 
The following information describes the various elements that you require to support vDU workloads in OpenShift Container Platform 4.14 clusters. 22.8.2.1. Recommended cluster MachineConfig CRs for single-node OpenShift clusters Check that the MachineConfig custom resources (CRs) that you extract from the ztp-site-generate container are applied in the cluster. The CRs can be found in the extracted out/source-crs/extra-manifest/ folder. The following MachineConfig CRs from the ztp-site-generate container configure the cluster host: Table 22.18. Recommended GitOps ZTP MachineConfig CRs MachineConfig CR Description 01-container-mount-ns-and-kubelet-conf-master.yaml 01-container-mount-ns-and-kubelet-conf-worker.yaml Configures the container mount namespace and kubelet configuration. 03-sctp-machine-config-master.yaml 03-sctp-machine-config-worker.yaml Loads the SCTP kernel module. These MachineConfig CRs are optional and can be omitted if you do not require this kernel module. 05-kdump-config-master.yaml 05-kdump-config-worker.yaml 06-kdump-master.yaml 06-kdump-worker.yaml Configures kdump crash reporting for the cluster. 07-sriov-related-kernel-args-master.yaml Configures SR-IOV kernel arguments in the cluster. 08-set-rcu-normal-master.yaml 08-set-rcu-normal-worker.yaml Disables rcu_expedited mode after the cluster has rebooted. 99-crio-disable-wipe-master.yaml 99-crio-disable-wipe-worker.yaml Disables the automatic CRI-O cache wipe following cluster reboot. 99-sync-time-once-master.yaml 99-sync-time-once-worker.yaml Configures the one-time check and adjustment of the system clock by the Chrony service. enable-crun-master.yaml enable-crun-worker.yaml Enables the crun OCI container runtime. extra-manifest/enable-cgroups-v1.yaml source-crs/extra-manifest/enable-cgroups-v1.yaml Enables cgroups v1 during cluster installation and when generating RHACM cluster policies. Note In OpenShift Container Platform 4.14 and later, you configure workload partitioning with the cpuPartitioningMode field in the SiteConfig CR. Additional resources Workload partitioning in single-node OpenShift with GitOps ZTP Extracting source CRs from the ztp-site-generate container 22.8.2.2. Recommended cluster Operators The following Operators are required for clusters running virtualized distributed unit (vDU) applications and are a part of the baseline reference configuration: Node Tuning Operator (NTO). NTO packages functionality that was previously delivered with the Performance Addon Operator, which is now a part of NTO. PTP Operator SR-IOV Network Operator Red Hat OpenShift Logging Operator Local Storage Operator 22.8.2.3. Recommended cluster kernel configuration Always use the latest supported real-time kernel version in your cluster. 
Ensure that you apply the following configurations in the cluster: Ensure that the following additionalKernelArgs are set in the cluster performance profile: spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "module_blacklist=irdma" Ensure that the performance-patch profile in the Tuned CR configures the correct CPU isolation set that matches the isolated CPU set in the related PerformanceProfile CR, for example: spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name, for example: # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from the [sysctl] section data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable 22.8.2.4. Checking the realtime kernel version Always use the latest version of the realtime kernel in your OpenShift Container Platform clusters. If you are unsure about the kernel version that is in use in the cluster, you can compare the current realtime kernel version to the release version with the following procedure. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. You have installed podman . Procedure Run the following command to get the cluster version: USD OCP_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}') Get the release image SHA number: USD DTK_IMAGE=USD(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:USDOCP_VERSION-x86_64) Run the release image container and extract the kernel version that is packaged with the cluster's current release: USD podman run --rm USDDTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##' Example output 4.18.0-305.49.1.rt7.121.el8_4.x86_64 This is the default realtime kernel version that ships with the release. Note The realtime kernel is denoted by the string .rt in the kernel version. Verification Check that the kernel version listed for the cluster's current release matches the actual realtime kernel that is running in the cluster. Run the following commands to check the running realtime kernel version: Open a remote shell connection to the cluster node: USD oc debug node/<node_name> Check the realtime kernel version: sh-4.4# uname -r Example output 4.18.0-305.49.1.rt7.121.el8_4.x86_64 22.8.3. Checking that the recommended cluster configurations are applied You can check that clusters are running the correct configuration. The following procedure describes how to check the various configurations that you require to deploy a DU application in OpenShift Container Platform 4.14 clusters. Prerequisites You have deployed a cluster and tuned it for vDU workloads. You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Check that the default OperatorHub sources are disabled.
Run the following command: USD oc get operatorhub cluster -o yaml Example output spec: disableAllDefaultSources: true Check that all required CatalogSource resources are annotated for workload partitioning ( PreferredDuringScheduling ) by running the following command: USD oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{" -- "}{.metadata.annotations.target\.workload\.openshift\.io/management}{"\n"}{end}' Example output certified-operators -- {"effect": "PreferredDuringScheduling"} community-operators -- {"effect": "PreferredDuringScheduling"} ran-operators 1 redhat-marketplace -- {"effect": "PreferredDuringScheduling"} redhat-operators -- {"effect": "PreferredDuringScheduling"} 1 CatalogSource resources that are not annotated are also returned. In this example, the ran-operators CatalogSource resource is not annotated and does not have the PreferredDuringScheduling annotation. Note In a properly configured vDU cluster, only a single annotated catalog source is listed. Check that all applicable OpenShift Container Platform Operator namespaces are annotated for workload partitioning. This includes all Operators installed with core OpenShift Container Platform and the set of additional Operators included in the reference DU tuning configuration. Run the following command: USD oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{" -- "}{.metadata.annotations.workload\.openshift\.io/allowed}{"\n"}{end}' Example output default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- management Important Additional Operators must not be annotated for workload partitioning. In the output from the command, additional Operators should be listed without any value on the right side of the -- separator. Check that the ClusterLogging configuration is correct. Run the following commands: Validate that the appropriate input and output logs are configured: USD oc get -n openshift-logging ClusterLogForwarder instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: "2022-07-19T21:51:41Z" generation: 1 name: instance namespace: openshift-logging resourceVersion: "1030342" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open ... Check that the curation schedule is appropriate for your application: USD oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: "2022-07-07T18:22:56Z" generation: 1 name: instance namespace: openshift-logging resourceVersion: "235796" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed ... 
Check that the web console is disabled ( managementState: Removed ) by running the following command: USD oc get consoles.operator.openshift.io cluster -o jsonpath="{ .spec.managementState }" Example output Removed Check that chronyd is disabled on the cluster node by running the following commands: USD oc debug node/<node_name> Check the status of chronyd on the node: sh-4.4# chroot /host sh-4.4# systemctl status chronyd Example output ● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5) Check that the PTP interface is successfully synchronized to the primary clock using a remote shell connection to the linuxptp-daemon container and the PTP Management Client ( pmc ) tool: Set the USDPTP_POD_NAME variable with the name of the linuxptp-daemon pod by running the following command: USD PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name) Run the following command to check the sync status of the PTP device: USD oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET' Example output sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 Run the following pmc command to check the PTP clock status: USD oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP' Example output sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00 1 master_offset should be between -100 and 100 ns. 2 Indicates that the PTP clock is synchronized to a master, and the local clock is not the grandmaster clock. 
Check that the expected master offset value corresponding to the value in /var/run/ptp4l.0.config is found in the linuxptp-daemon-container log: USD oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container Example output phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533 Check that the SR-IOV configuration is correct by running the following commands: Check that the disableDrain value in the SriovOperatorConfig resource is set to true : USD oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath="{.spec.disableDrain}{'\n'}" Example output true Check that the SriovNetworkNodeState sync status is Succeeded by running the following command: USD oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath="{.items[*].status.syncStatus}{'\n'}" Example output Succeeded Verify that the expected number and configuration of virtual functions ( Vfs ) under each interface configured for SR-IOV is present and correct in the .status.interfaces field. For example: USD oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml Example output apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState ... status: interfaces: ... - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: "8086" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: "8086" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: "8086" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: "8086" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: "8086" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: "8086" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: "8086" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: "8086" vfID: 7 Check that the cluster performance profile is correct. The cpu and hugepages sections will vary depending on your hardware configuration. 
Run the following command: USD oc get PerformanceProfile openshift-node-performance-profile -o yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: "2022-07-19T21:51:31Z" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: "33558" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "True" type: Available - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "True" type: Upgradeable - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "False" type: Progressing - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "False" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile Note CPU settings are dependent on the number of cores available on the server and should align with workload partitioning settings. hugepages configuration is server and application dependent. Check that the PerformanceProfile was successfully applied to the cluster by running the following command: USD oc get performanceprofile openshift-node-performance-profile -o jsonpath="{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\n'}{end}" Example output Available -- True Upgradeable -- True Progressing -- False Degraded -- False Check the Tuned performance patch settings by running the following command: USD oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml Example output apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: "2022-07-18T10:33:52Z" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: "34024" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch 1 The cpu list in cmdline=nohz_full= will vary based on your hardware configuration. Check that cluster networking diagnostics are disabled by running the following command: USD oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}' Example output true Check that the Kubelet housekeeping interval is tuned to slower rate. This is set in the containerMountNS machine config. 
Run the following command: USD oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION Example output Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Check that Grafana and alertManagerMain are disabled and that the Prometheus retention period is set to 24h by running the following command: USD oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath="{ .data.config\.yaml }" Example output grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h Use the following commands to verify that Grafana and alertManagerMain routes are not found in the cluster: USD oc get route -n openshift-monitoring alertmanager-main USD oc get route -n openshift-monitoring grafana Both queries should return Error from server (NotFound) messages. Check that there is a minimum of 4 CPUs allocated as reserved for each of the PerformanceProfile , Tuned performance-patch, workload partitioning, and kernel command line arguments by running the following command: USD oc get performanceprofile -o jsonpath="{ .items[0].spec.cpu.reserved }" Example output 0-3 Note Depending on your workload requirements, you might require additional reserved CPUs to be allocated. 22.9. Advanced managed cluster configuration with SiteConfig resources You can use SiteConfig custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time. 22.9.1. Customizing extra installation manifests in the GitOps ZTP pipeline You can define a set of extra manifests for inclusion in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. These manifests are linked to the SiteConfig custom resources (CRs) and are applied to the cluster during installation. Including MachineConfig CRs at install time makes the installation process more efficient. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a set of extra manifest CRs that the GitOps ZTP pipeline uses to customize the cluster installs. In your custom /siteconfig directory, create a subdirectory /custom-manifest for your extra manifests. The following example illustrates a sample /siteconfig with /custom-manifest folder: siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml ├── extra-manifest/ └── custom-manifest └── 01-example-machine-config.yaml Note The subdirectory names /custom-manifest and /extra-manifest used throughout are example names only. There is no requirement to use these names and no restriction on how you name these subdirectories. In this example /extra-manifest refers to the Git subdirectory that stores the contents of /extra-manifest from the ztp-site-generate container. Add your custom extra manifest CRs to the siteconfig/custom-manifest directory. In your SiteConfig CR, enter the directory name in the extraManifests.searchPaths field, for example: clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2 1 Folder for manifests copied from the ztp-site-generate container. 2 Folder for custom manifests. Save the SiteConfig , /extra-manifest , and /custom-manifest CRs, and push them to the site configuration repo. 
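For reference, a custom manifest such as the 01-example-machine-config.yaml file shown in the directory layout above is typically a standard MachineConfig CR. The following is a minimal sketch only; the kernel argument and the master role label are illustrative assumptions for a single-node OpenShift host and are not part of the reference configuration:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master   # single-node OpenShift uses the master role
  name: 01-example-machine-config
spec:
  kernelArguments:
    - audit=0   # illustrative kernel argument only; replace with the change your site requires

Because the manifest is applied during installation, the change is present from first boot and does not require a separate day-2 policy for the same configuration.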
During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the /custom-manifest directory to the default set of extra manifests stored in extra-manifest/ . Note As of version 4.14 extraManifestPath is subject to a deprecation warning. While extraManifestPath is still supported, we recommend that you use extraManifests.searchPaths . If you define extraManifests.searchPaths in the SiteConfig file, the GitOps ZTP pipeline does not fetch manifests from the ztp-site-generate container during site installation. If you define both extraManifestPath and extraManifests.searchPaths in the Siteconfig CR, the setting defined for extraManifests.searchPaths takes precedence. It is strongly recommended that you extract the contents of /extra-manifest from the ztp-site-generate container and push it to the GIT repository. 22.9.2. Filtering custom resources using SiteConfig filters By using filters, you can easily customize SiteConfig custom resources (CRs) to include or exclude other CRs for use in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. You can specify an inclusionDefault value of include or exclude for the SiteConfig CR, along with a list of the specific extraManifest RAN CRs that you want to include or exclude. Setting inclusionDefault to include makes the GitOps ZTP pipeline apply all the files in /source-crs/extra-manifest during installation. Setting inclusionDefault to exclude does the opposite. You can exclude individual CRs from the /source-crs/extra-manifest folder that are otherwise included by default. The following example configures a custom single-node OpenShift SiteConfig CR to exclude the /source-crs/extra-manifest/03-sctp-machine-config-worker.yaml CR at installation time. Some additional optional filtering scenarios are also described. Prerequisites You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure To prevent the GitOps ZTP pipeline from applying the 03-sctp-machine-config-worker.yaml CR file, apply the following YAML in the SiteConfig CR: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "site1-sno-du" namespace: "site1-sno-du" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.14" sshPublicKey: "<ssh_public_key>" clusters: - clusterName: "site1-sno-du" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml The GitOps ZTP pipeline skips the 03-sctp-machine-config-worker.yaml CR during installation. All other CRs in /source-crs/extra-manifest are applied. Save the SiteConfig CR and push the changes to the site configuration repository. The GitOps ZTP pipeline monitors and adjusts what CRs it applies based on the SiteConfig filter instructions. 
Optional: To prevent the GitOps ZTP pipeline from applying all the /source-crs/extra-manifest CRs during cluster installation, apply the following YAML in the SiteConfig CR: - clusterName: "site1-sno-du" extraManifests: filter: inclusionDefault: exclude Optional: To exclude all the /source-crs/extra-manifest RAN CRs and instead include a custom CR file during installation, edit the custom SiteConfig CR to set the custom manifests folder and the include file, for example: clusters: - clusterName: "site1-sno-du" extraManifestPath: "<custom_manifest_folder>" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml 1 Replace <custom_manifest_folder> with the name of the folder that contains the custom installation CRs, for example, user-custom-manifest/ . 2 Set inclusionDefault to exclude to prevent the GitOps ZTP pipeline from applying the files in /source-crs/extra-manifest during installation. The following example illustrates the custom folder structure: siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml 22.9.3. Deleting a node by using the SiteConfig CR By using a SiteConfig custom resource (CR), you can delete and reprovision a node. This method is more efficient than manually deleting the node. Prerequisites You have configured the hub cluster to generate the required installation and policy CRs. You have created a Git repository in which you can manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as the source repository for the Argo CD application. Procedure Update the SiteConfig CR to include the bmac.agent-install.openshift.io/remove-agent-and-node-on-delete=true annotation and push the changes to the Git repository: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "cnfdf20" namespace: "cnfdf20" spec: clusters: nodes: - hostname: node6 role: "worker" crAnnotations: add: BareMetalHost: bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true # ... Verify that the BareMetalHost object is annotated by running the following command: oc get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations["bmac.agent-install.openshift.io/remove-agent-and-node-on-delete"]' Example output true Suppress the generation of the BareMetalHost CR by updating the SiteConfig CR to include the crSuppression.BareMetalHost annotation: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "cnfdf20" namespace: "cnfdf20" spec: clusters: - nodes: - hostName: node6 role: "worker" crSuppression: - BareMetalHost # ... Push the changes to the Git repository and wait for deprovisioning to start. The status of the BareMetalHost CR should change to deprovisioning . Wait for the BareMetalHost to finish deprovisioning, and be fully deleted. Verification Verify that the BareMetalHost and Agent CRs for the worker node have been deleted from the hub cluster by running the following commands: USD oc get bmh -n <cluster-ns> USD oc get agent -n <cluster-ns> Verify that the node record has been deleted from the spoke cluster by running the following command: USD oc get nodes Note If you are working with secrets, deleting a secret too early can cause an issue because ArgoCD needs the secret to complete resynchronization after deletion. Delete the secret only after the node cleanup, when the current ArgoCD synchronization is complete. 
Next steps To reprovision a node, delete the changes previously added to the SiteConfig , push the changes to the Git repository, and wait for the synchronization to complete. This regenerates the BareMetalHost CR of the worker node and triggers the re-install of the node. 22.10. Advanced managed cluster configuration with PolicyGenTemplate resources You can use PolicyGenTemplate CRs to deploy custom functionality in your managed clusters. 22.10.1. Deploying additional changes to clusters If you require cluster configuration changes outside of the base GitOps Zero Touch Provisioning (ZTP) pipeline configuration, there are three options: Apply the additional configuration after the GitOps ZTP pipeline is complete When the GitOps ZTP pipeline deployment is complete, the deployed cluster is ready for application workloads. At this point, you can install additional Operators and apply configurations specific to your requirements. Ensure that additional configurations do not negatively affect the performance of the platform or allocated CPU budget. Add content to the GitOps ZTP library The base source custom resources (CRs) that you deploy with the GitOps ZTP pipeline can be augmented with custom content as required. Create extra manifests for the cluster installation Extra manifests are applied during installation and make the installation process more efficient. Important Providing additional source CRs or modifying existing source CRs can significantly impact the performance or CPU profile of OpenShift Container Platform. Additional resources Customizing extra installation manifests in the GitOps ZTP pipeline 22.10.2. Using PolicyGenTemplate CRs to override source CRs content PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the ztp-site-generate container. You can think of PolicyGenTemplate CRs as a logical merge or patch to the base CR. Use PolicyGenTemplate CRs to update a single field of the base CR, or overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR. The following example procedure describes how to update fields in the generated PerformanceProfile CR for the reference configuration based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml file. Use the procedure as a basis for modifying other parts of the PolicyGenTemplate based on your requirements. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD. Procedure Review the baseline source CR for existing content. You can review the source CRs listed in the reference PolicyGenTemplate CRs by extracting them from the GitOps Zero Touch Provisioning (ZTP) container.
Create an /out folder: USD mkdir -p ./out Extract the source CRs: USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14.1 extract /home/ztp --tar | tar x -C ./out Review the baseline PerformanceProfile CR in ./out/source-crs/PerformanceProfile.yaml : apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: additionalKernelArgs: - "idle=poll" - "rcupdate.rcu_normal_after_boot=0" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" realTimeKernel: enabled: true Note Any fields in the source CR which contain USD... are removed from the generated CR if they are not provided in the PolicyGenTemplate CR. Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file. The following example PolicyGenTemplate CR stanza supplies appropriate CPU specifications, sets the hugepages configuration, and adds a new field that sets globallyDisableIrqLoadBalancing to false. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: "2-19,22-39" reserved: "0-1,20-21" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. Example output The GitOps ZTP application generates an RHACM policy that contains the generated PerformanceProfile CR. The contents of that CR are derived by merging the metadata and spec contents from the PerformanceProfile entry in the PolicyGenTemplate onto the source CR. The resulting CR has the following content: --- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true Note In the /source-crs folder that you extract from the ztp-site-generate container, the USD syntax is not used for template substitution as the syntax might imply. Rather, if the policyGen tool sees the USD prefix for a string and you do not specify a value for that field in the related PolicyGenTemplate CR, the field is omitted from the output CR entirely. An exception to this is the USDmcp variable in /source-crs YAML files that is substituted with the specified value for mcp from the PolicyGenTemplate CR. For example, in example/policygentemplates/group-du-standard-ranGen.yaml , the value for mcp is worker : spec: bindingRules: group-du-standard: "" mcp: "worker" The policyGen tool replaces instances of USDmcp with worker in the output CRs. 22.10.3.
Adding custom content to the GitOps ZTP pipeline Perform the following procedure to add new content to the GitOps ZTP pipeline. Procedure Create a subdirectory named source-crs in the directory that contains the kustomization.yaml file for the PolicyGenTemplate custom resource (CR). Add your user-provided CRs to the source-crs subdirectory, as shown in the following example: example └── policygentemplates ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs 1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml 1 The source-crs subdirectory must be in the same directory as the kustomization.yaml file. Update the required PolicyGenTemplate CRs to include references to the content you added in the source-crs/custom-crs and source-crs/elasticsearch directories. For example: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-dev" namespace: "ztp-clusters" spec: bindingRules: dev: "true" mcp: "master" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: "group-dev-cluster-log-ns" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: "group-dev-cluster-log-operator-group" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: "group-dev-cluster-log-sub" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: "group-dev-lso-ns" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: "group-dev-lso-operator-group" - fileName: StorageSubscription.yaml remediationAction: inform policyName: "group-dev-lso-sub" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: "group-dev-pao-ns" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: "group-dev-pao-cat-source" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: "group-dev-pao-sub" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: "group-dev-elasticsearch-ns" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: "group-dev-elasticsearch-operator-group" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: "group-dev-apiserver-config" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: "group-dev-disable-nic-lldp" 1 2 Set fileName to include the relative path to the file from the /source-crs parent directory. Commit the PolicyGenTemplate change in Git, and then push to the Git repository that is monitored by the GitOps ZTP Argo CD policies application. Update the ClusterGroupUpgrade CR to include the changed PolicyGenTemplate and save it as cgu-test.yaml . The following example shows a generated cgu-test.yaml file. 
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated ClusterGroupUpgrade CR by running the following command: USD oc apply -f cgu-test.yaml Verification Check that the updates have succeeded by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies 22.10.4. Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs Use Red Hat Advanced Cluster Management (RHACM) installed on a hub cluster to monitor and report on whether your managed clusters are compliant with applied policies. RHACM uses policy templates to apply predefined policy controllers and policies. Policy controllers are Kubernetes custom resource definition (CRD) instances. You can override the default policy evaluation intervals with PolicyGenTemplate custom resources (CRs). You configure duration settings that define how long a ConfigurationPolicy CR can be in a state of policy compliance or non-compliance before RHACM re-evaluates the applied cluster policies. The GitOps Zero Touch Provisioning (ZTP) policy generator generates ConfigurationPolicy CR policies with pre-defined policy evaluation intervals. The default value for the noncompliant state is 10 seconds. The default value for the compliant state is 10 minutes. To disable the evaluation interval, set the value to never . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure To configure the evaluation interval for all policies in a PolicyGenTemplate CR, add evaluationInterval to the spec field, and then set the appropriate compliant and noncompliant values. For example: spec: evaluationInterval: compliant: 30m noncompliant: 20s To configure the evaluation interval for the spec.sourceFiles object in a PolicyGenTemplate CR, add evaluationInterval to the sourceFiles field, for example: spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: "sriov-sub-policy" evaluationInterval: compliant: never noncompliant: 10s Commit the PolicyGenTemplate CRs files in the Git repository and push your changes. Verification Check that the managed spoke cluster policies are monitored at the expected intervals. Log in as a user with cluster-admin privileges on the managed cluster. Get the pods that are running in the open-cluster-management-agent-addon namespace. 
Run the following command: USD oc get pods -n open-cluster-management-agent-addon Example output NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d Check the applied policies are being evaluated at the expected interval in the logs for the config-policy-controller pod: USD oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb Example output 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-config-policy-config"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-common-compute-1-catalog-policy-config"} 22.10.5. Signalling GitOps ZTP cluster deployment completion with validator inform policies Create a validator inform policy that signals when the GitOps Zero Touch Provisioning (ZTP) installation and configuration of the deployed cluster is complete. This policy can be used for deployments of single-node OpenShift clusters, three-node clusters, and standard clusters. Procedure Create a standalone PolicyGenTemplate custom resource (CR) that contains the source file validatorCRs/informDuValidator.yaml . You only need one standalone PolicyGenTemplate CR for each cluster type. For example, this CR applies a validator inform policy for single-node OpenShift clusters: Example single-node cluster validator inform policy CR (group-du-sno-validator-ranGen.yaml) apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno-validator" 1 namespace: "ztp-group" 2 spec: bindingRules: group-du-sno: "" 3 bindingExcludedRules: ztp-done: "" 4 mcp: "master" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: "du-policy" 7 1 The name of PolicyGenTemplates object. This name is also used as part of the names for the placementBinding , placementRule , and policy that are created in the requested namespace . 2 This value should match the namespace used in the group PolicyGenTemplates . 3 The group-du-* label defined in bindingRules must exist in the SiteConfig files. 4 The label defined in bindingExcludedRules must be`ztp-done:`. The ztp-done label is used in coordination with the Topology Aware Lifecycle Manager. 5 mcp defines the MachineConfigPool object that is used in the source file validatorCRs/informDuValidator.yaml . It should be master for single node and three-node cluster deployments and worker for standard cluster deployments. 6 Optional. The default value is inform . 7 This value is used as part of the name for the generated RHACM policy. The generated validator policy for the single node example is group-du-sno-validator-du-policy . Commit the PolicyGenTemplate CR file in your Git repository and push the changes. Additional resources Upgrading GitOps ZTP 22.10.6. Configuring power states using PolicyGenTemplates CRs For low latency and high-performance edge deployments, it is necessary to disable or limit C-states and P-states. With this configuration, the CPU runs at a constant frequency, which is typically the maximum turbo frequency. This ensures that the CPU is always running at its maximum speed, which results in high performance and low latency. This leads to the best latency for workloads. 
However, this also leads to the highest power consumption, which might not be necessary for all workloads. Workloads can be classified as critical or non-critical, with critical workloads requiring disabled C-state and P-state settings for high performance and low latency, while non-critical workloads use C-state and P-state settings for power savings at the expense of some latency and performance. You can configure the following three power states using GitOps Zero Touch Provisioning (ZTP): High-performance mode provides ultra low latency at the highest power consumption. Performance mode provides low latency at a relatively high power consumption. Power saving balances reduced power consumption with increased latency. The default configuration is for a low latency, performance mode. PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details onto the base source CRs provided with the GitOps plugin in the ztp-site-generate container. Configure the power states by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . The following common prerequisites apply to configuring all three power states. Prerequisites You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD. You have followed the procedure described in "Preparing the GitOps ZTP site configuration repository". Additional resources Configuring node power consumption and realtime processing with workload hints 22.10.6.1. Configuring performance mode using PolicyGenTemplate CRs Follow this example to set performance mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . Performance mode provides low latency at a relatively high power consumption. Prerequisites You have configured the BIOS with performance related settings by following the guidance in "Configuring host firmware for low latency and high performance". Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to set performance mode. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 22.10.6.2. Configuring high-performance mode using PolicyGenTemplate CRs Follow this example to set high performance mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . High performance mode provides ultra low latency at the highest power consumption. Prerequisites You have configured the BIOS with performance related settings by following the guidance in "Configuring host firmware for low latency and high performance". Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to set high-performance mode. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] 
spec: [...] workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 22.10.6.3. Configuring power saving mode using PolicyGenTemplate CRs Follow this example to set power saving mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . The power saving mode balances reduced power consumption with increased latency. Prerequisites You enabled C-states and OS-controlled P-states in the BIOS. Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to configure power saving mode. It is recommended to configure the CPU governor for the power saving mode through the additional kernel arguments object. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true [...] additionalKernelArgs: - [...] - "cpufreq.default_governor=schedutil" 1 1 The schedutil governor is recommended, however, other governors that can be used include ondemand and powersave . Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. Verification Select a worker node in your deployed cluster from the list of nodes identified by using the following command: USD oc get nodes Log in to the node by using the following command: USD oc debug node/<node-name> Replace <node-name> with the name of the node you want to verify the power state on. Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths as shown in the following example: # chroot /host Run the following command to verify the applied power state: # cat /proc/cmdline Expected output For power saving mode the intel_pstate=passive . Additional resources Configuring power saving for nodes that run colocated high and low priority workloads Configuring host firmware for low latency and high performance Preparing the GitOps ZTP site configuration repository 22.10.6.4. Maximizing power savings Limiting the maximum CPU frequency is recommended to achieve maximum power savings. Enabling C-states on the non-critical workload CPUs without restricting the maximum CPU frequency negates much of the power savings by boosting the frequency of the critical CPUs. Maximize power savings by updating the sysfs plugin fields, setting an appropriate value for max_perf_pct in the TunedPerformancePatch CR for the reference configuration. This example based on the group-du-sno-ranGen.yaml describes the procedure to follow to restrict the maximum CPU frequency. Prerequisites You have configured power savings mode as described in "Using PolicyGenTemplate CRs to configure power savings mode". Procedure Update the PolicyGenTemplate entry for TunedPerformancePatch in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates . 
To maximize power savings, add max_perf_pct as shown in the following example: - fileName: TunedPerformancePatch.yaml policyName: "config-policy" spec: profile: - name: performance-patch data: | [...] [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1 1 The max_perf_pct controls the maximum frequency the cpufreq driver is allowed to set as a percentage of the maximum supported CPU frequency. This value applies to all CPUs. You can check the maximum supported frequency in /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq . As a starting point, you can use a percentage that caps all CPUs at the All Cores Turbo frequency. The All Cores Turbo frequency is the frequency that all cores will run at when the cores are all fully occupied. Note To maximize power savings, set a lower value. Setting a lower value for max_perf_pct limits the maximum CPU frequency, thereby reducing power consumption, but also potentially impacting performance. Experiment with different values and monitor the system's performance and power consumption to find the optimal setting for your use-case. Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 22.10.7. Configuring LVM Storage using PolicyGenTemplate CRs You can configure Logical Volume Manager (LVM) Storage for managed clusters that you deploy with GitOps Zero Touch Provisioning (ZTP). Note You use LVM Storage to persist event subscriptions when you use PTP events or bare-metal hardware events with HTTP transport. Use the Local Storage Operator for persistent storage that uses local volumes in distributed units. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Create a Git repository where you manage your custom site configuration data. Procedure To configure LVM Storage for new managed clusters, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file: - fileName: StorageLVMOSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMOSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMOSubscription.yaml spec: name: lvms-operator channel: stable-4.14 policyName: subscription-policies Note The Storage LVMO subscription is deprecated. In future releases of OpenShift Container Platform, the storage LVMO subscription will not be available. Instead, you must use the Storage LVMS subscription. In OpenShift Container Platform 4.14, you can use the Storage LVMS subscription instead of the LVMO subscription. The LVMS subscription does not require manual overrides in the common-ranGen.yaml file. Add the following YAML to spec.sourceFiles in the common-ranGen.yaml file to use the Storage LVMS subscription: - fileName: StorageLVMSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMSubscription.yaml policyName: subscription-policies Add the LVMCluster CR to spec.sourceFiles in your specific group or individual site configuration file. For example, in the group-du-sno-ranGen.yaml file, add the following: - fileName: StorageLVMCluster.yaml policyName: "lvms-config" 1 spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10 1 This example configuration creates a volume group ( vg1 ) with all the available devices, except the disk where OpenShift Container Platform is installed. 
A thin-pool logical volume is also created. Merge any other required changes and files with your custom site repository. Commit the PolicyGenTemplate changes in Git, and then push the changes to your site configuration repository to deploy LVM Storage to new sites using GitOps ZTP. 22.10.8. Configuring PTP events with PolicyGenTemplate CRs You can use the GitOps ZTP pipeline to configure PTP events that use HTTP or AMQP transport. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 22.10.8.1. Configuring PTP events that use HTTP transport You can configure PTP events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml , group-du-sno-ranGen.yaml , or group-du-standard-ranGen.yaml files according to your requirements: In .sourceFiles , add the PtpOperatorConfig CR file that configures the transport host: - fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043 Note In OpenShift Container Platform 4.13 or later, you do not need to set the transportHost field in the PtpOperatorConfig resource when you use HTTP transport with PTP events. Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles : - fileName: PtpConfigSlave.yaml 1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1" 2 ptp4lOpts: "-2 -s --summary_interval -4" 3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs 1 Can be PtpConfigMaster.yaml or PtpConfigSlave.yaml depending on your requirements. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml , use PtpConfigSlave.yaml . 2 Device specific interface name. 3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. 4 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). 
When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP. Additional resources Using PolicyGenTemplate CRs to override source CRs content 22.10.8.2. Configuring PTP events that use AMQP transport You can configure PTP events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Add the following YAML into .spec.sourceFiles in the common-ranGen.yaml file to configure the AMQP Operator: #AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml , group-du-sno-ranGen.yaml , or group-du-standard-ranGen.yaml files according to your requirements: In .sourceFiles , add the PtpOperatorConfig CR file that configures the AMQ transport host to the config-policy : - fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: "amqp://amq-router.amq-router.svc.cluster.local" Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles : - fileName: PtpConfigSlave.yaml 1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1" 2 ptp4lOpts: "-2 -s --summary_interval -4" 3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs 1 Can be PtpConfigMaster.yaml or PtpConfigSlave.yaml depending on your requirements. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml , use PtpConfigSlave.yaml . 2 Device specific interface name. 3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. 4 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. 
The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Apply the following PolicyGenTemplate changes to your specific site YAML files, for example, example-sno-site.yaml : In .sourceFiles , add the Interconnect CR file that configures the AMQ router to the config-policy : - fileName: AmqInstance.yaml policyName: "config-policy" Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP. Additional resources Installing the AMQ messaging bus For more information about container image registries, see OpenShift image registry overview . 22.10.9. Configuring bare-metal events with PolicyGenTemplate CRs You can use the GitOps ZTP pipeline to configure bare-metal events that use HTTP or AMQP transport. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 22.10.9.1. Configuring bare-metal events that use HTTP transport You can configure bare-metal events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Configure the Bare Metal Event Relay Operator by adding the following YAML to spec.sourceFiles in the common-ranGen.yaml file: # Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy" Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file: - fileName: HardwareEvent.yaml 1 policyName: "config-policy" spec: nodeSelector: {} transportHost: "http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043" logLevel: "info" 1 Each baseboard management controller (BMC) requires a single HardwareEvent CR only. Note In OpenShift Container Platform 4.13 or later, you do not need to set the transportHost field in the HardwareEvent custom resource (CR) when you use HTTP transport with bare-metal events. Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy bare-metal events to new sites with GitOps ZTP. 
Create the Redfish Secret by running the following command: USD oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>" Additional resources Installing the Bare Metal Event Relay using the CLI Creating the bare-metal event and Secret CRs 22.10.9.2. Configuring bare-metal events that use AMQP transport You can configure bare-metal events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure To configure the AMQ Interconnect Operator and the Bare Metal Event Relay Operator, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file: # AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" # Bare Metal Event Rely operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy" Add the Interconnect CR to .spec.sourceFiles in the site configuration file, for example, the example-sno-site.yaml file: - fileName: AmqInstance.yaml policyName: "config-policy" Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file: - fileName: HardwareEvent.yaml policyName: "config-policy" spec: nodeSelector: {} transportHost: "amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local" 1 logLevel: "info" 1 The transportHost URL is composed of the existing AMQ Interconnect CR name and namespace . For example, in transportHost: "amqp://amq-router.amq-router.svc.cluster.local" , the AMQ Interconnect name and namespace are both set to amq-router . Note Each baseboard management controller (BMC) requires a single HardwareEvent resource only. Commit the PolicyGenTemplate change in Git, and then push the changes to your site configuration repository to deploy bare-metal events monitoring to new sites using GitOps ZTP. Create the Redfish Secret by running the following command: USD oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>" 22.10.10. Configuring the Image Registry Operator for local caching of images OpenShift Container Platform manages image caching using a local registry. In edge computing use cases, clusters are often subject to bandwidth restrictions when communicating with centralized image registries, which might result in long image download times. 
Long download times are unavoidable during initial deployment. Over time, there is a risk that CRI-O will erase the /var/lib/containers/storage directory in the case of an unexpected shutdown. To address long image download times, you can create a local image registry on remote managed clusters using GitOps Zero Touch Provisioning (ZTP). This is useful in Edge computing scenarios where clusters are deployed at the far edge of the network. Before you can set up the local image registry with GitOps ZTP, you need to configure disk partitioning in the SiteConfig CR that you use to install the remote managed cluster. After installation, you configure the local image registry using a PolicyGenTemplate CR. Then, the GitOps ZTP pipeline creates Persistent Volume (PV) and Persistent Volume Claim (PVC) CRs and patches the imageregistry configuration. Note The local image registry can only be used for user application images and cannot be used for the OpenShift Container Platform or Operator Lifecycle Manager operator images. Additional resources OpenShift Container Platform registry overview . 22.10.10.1. Configuring disk partitioning with SiteConfig Configure disk partitioning for a managed cluster using a SiteConfig CR and GitOps Zero Touch Provisioning (ZTP). The disk partition details in the SiteConfig CR must match the underlying disk. Important You must complete this procedure at installation time. Prerequisites Install Butane. Procedure Create the storage.bu file by using the following example YAML file: variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation fails. 3 Specify the size of the partition. If the value is too small, the deployments fails. Convert the storage.bu to an Ignition file by running the following command: USD butane storage.bu Example output {"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}} Use a tool such as JSON Pretty Print to convert the output into JSON format. Copy the output into the .spec.clusters.nodes.ignitionConfigOverride field in the SiteConfig CR. Example [...] 
spec: clusters: - nodes: - ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } [...] Note If the .spec.clusters.nodes.ignitionConfigOverride field does not exist, create it. Verification During or after installation, verify on the hub cluster that the BareMetalHost object shows the annotation by running the following command: USD oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"] Example output "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}" After installation, check the single-node OpenShift disk status. Enter into a debug session on the single-node OpenShift node by running the following command. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-sno-node Set /host as the root directory within the debug shell by running the following command. The debug pod mounts the host's root file system in /host within the pod. 
By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host List information about all available block devices by running the following command: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers Display information about the file system disk space usage by running the following command: # df -h Example output Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000 22.10.10.2. Configuring the image registry using PolicyGenTemplate CRs Use PolicyGenTemplate (PGT) CRs to apply the CRs required to configure the image registry and patch the imageregistry configuration. Prerequisites You have configured a disk partition in the managed cluster. You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data for use with GitOps Zero Touch Provisioning (ZTP). Procedure Configure the storage class, persistent volume claim, persistent volume, and image registry configuration in the appropriate PolicyGenTemplate CR. For example, to configure an individual site, add the following YAML to the file example-sno-site.yaml : sourceFiles: # storage class - fileName: StorageClass.yaml policyName: "sc-for-image-registry" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: "100" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: "pvc-for-image-registry" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: "pv-for-image-registry" metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" - fileName: ImageRegistryConfig.yaml policyName: "config-for-image-registry" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: storage: pvc: claim: "image-registry-pvc" 1 Set the appropriate value for ztp-deploy-wave depending on whether you are configuring image registries at the site, common, or group level. ztp-deploy-wave: "100" is suitable for development or testing because it allows you to group the referenced source files together. 2 In ImageRegistryPV.yaml , ensure that the spec.local.path field is set to /var/imageregistry to match the value set for the mount_point field in the SiteConfig CR. Important Do not set complianceType: mustonlyhave for the - fileName: ImageRegistryConfig.yaml configuration. This can cause the registry pod deployment to fail. Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. 
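When the policies become compliant, the Image Registry Operator configuration on the managed cluster is expected to resemble the following sketch. This is an illustration only, not the exact rendered output: the claim value must match the PVC that the StoragePVC.yaml source CR creates, and the managementState value shown here is an assumption about the ImageRegistryConfig.yaml source CR.
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed   # assumed; verify against the ImageRegistryConfig.yaml source CR
  storage:
    pvc:
      claim: image-registry-pvc   # must match the PVC name defined in StoragePVC.yaml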
Verification Use the following steps to troubleshoot errors with the local image registry on the managed clusters: Verify successful login to the registry while logged in to the managed cluster. Run the following commands: Export the managed cluster name: USD cluster=<managed_cluster_name> Get the managed cluster kubeconfig details: USD oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster Download and export the cluster kubeconfig : USD oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster Verify access to the image registry from the managed cluster. See "Accessing the registry". Check that the Config CRD in the imageregistry.operator.openshift.io group instance is not reporting errors. Run the following command while logged in to the managed cluster: USD oc get image.config.openshift.io cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2021-10-08T19:02:39Z" generation: 5 name: cluster resourceVersion: "688678648" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice Check that the PersistentVolumeClaim on the managed cluster is populated with data. Run the following command while logged in to the managed cluster: USD oc get pv image-registry-sc Check that the registry* pod is running and is located under the openshift-image-registry namespace. USD oc get pods -n openshift-image-registry | grep registry* Example output cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d Check that the disk partition on the managed cluster is correct: Open a debug shell to the managed cluster: USD oc debug node/sno-1.example.com Run lsblk to check the host disk partitions: sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom 1 /var/imageregistry indicates that the disk is correctly partitioned. Additional resources Accessing the registry 22.10.11. Using hub templates in PolicyGenTemplate CRs Topology Aware Lifecycle Manager supports partial Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps Zero Touch Provisioning (ZTP). Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similiar configurations but with different values. Important Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means that you must create the objects referenced in the hub template in the same namespace where the policy is created. The following supported hub template functions are available for use in GitOps ZTP with TALM: fromConfigmap returns the value of the provided data key in the named ConfigMap resource. Note There is a 1 MiB size limit for ConfigMap CRs. 
The effective size for ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap : argocd.argoproj.io/sync-options: Replace=true base64enc returns the base64-encoded value of the input string base64dec returns the decoded value of the base64-encoded input string indent returns the input string with added indent spaces autoindent returns the input string with added indent spaces based on the spacing used in the parent template toInt casts and returns the integer value of the input value toBool converts the input string into a boolean value, and returns the boolean Various Open source community functions are also available for use with GitOps ZTP. Additional resources RHACM support for hub cluster templates in configuration policies 22.10.11.1. Example hub templates The following code examples are valid hub templates. Each of these templates returns values from the ConfigMap CR with the name test-config in the default namespace. Returns the value with the key common-key : {{hub fromConfigMap "default" "test-config" "common-key" hub}} Returns a string by using the concatenated value of the .ManagedClusterName field and the string -name : {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) hub}} Casts and returns a boolean value from the concatenated value of the .ManagedClusterName field and the string -name : {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) | toBool hub}} Casts and returns an integer value from the concatenated value of the .ManagedClusterName field and the string -name : {{hub (printf "%s-name" .ManagedClusterName) | fromConfigMap "default" "test-config" | toInt hub}} 22.10.11.2. Specifying host NICs in site PolicyGenTemplate CRs with hub cluster templates You can manage host NICs in a single ConfigMap CR and use hub cluster templates to populate the custom NIC values in the generated policies that get applied to the cluster hosts. Using hub cluster templates in site PolicyGenTemplate (PGT) CRs means that you do not need to create multiple single site PGT CRs for each site. The following example shows you how to use a single ConfigMap CR to manage cluster host NICs and apply them to the cluster as policies by using a single PolicyGenTemplate site CR. Note When you use the fromConfigMap function, the printf variable is only available for the template resource data key fields. You cannot use it with name and namespace fields. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application. Procedure Create a ConfigMap resource that describes the NICs for a group of hosts. For example: apiVersion: v1 kind: ConfigMap metadata: name: sriovdata namespace: ztp-site annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: example-sno-du_fh-numVfs: "8" example-sno-du_fh-pf: ens1f0 example-sno-du_fh-priority: "10" example-sno-du_fh-vlan: "140" example-sno-du_mh-numVfs: "8" example-sno-du_mh-pf: ens3f0 example-sno-du_mh-priority: "10" example-sno-du_mh-vlan: "150" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size.
Note The ConfigMap must be in the same namespace with the policy that has the hub template substitution. Commit the ConfigMap CR in Git, and then push to the Git repository being monitored by the Argo CD application. Create a site PGT CR that uses templates to pull the required data from the ConfigMap object. For example: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "site" namespace: "ztp-site" spec: remediationAction: inform bindingRules: group-du-sno: "" mcp: "master" sourceFiles: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-fh" spec: resourceName: du_fh vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-vlan" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-fh" spec: deviceType: netdevice isRdma: true nicSelector: pfNames: - '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-pf" .ManagedClusterName) | autoindent hub}}' numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-numVfs" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-priority" .ManagedClusterName) | toInt hub}}' resourceName: du_fh - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" spec: resourceName: du_mh vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-vlan" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-mh" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-pf" .ManagedClusterName) hub}}' numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-numVfs" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-priority" .ManagedClusterName) | toInt hub}}' resourceName: du_mh Commit the site PolicyGenTemplate CR in Git and push to the Git repository that is monitored by the ArgoCD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs". 22.10.11.3. Specifying VLAN IDs in group PolicyGenTemplate CRs with hub cluster templates You can manage VLAN IDs for managed clusters in a single ConfigMap CR and use hub cluster templates to populate the VLAN IDs in the generated polices that get applied to the clusters. The following example shows how you how manage VLAN IDs in single ConfigMap CR and apply them in individual cluster polices by using a single PolicyGenTemplate group CR. Note When using the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with name and namespace fields. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a ConfigMap CR that describes the VLAN IDs for a group of cluster hosts. 
For example: apiVersion: v1 kind: ConfigMap metadata: name: site-data namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: site-1-vlan: "101" site-2-vlan: "234" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size. Note The ConfigMap must be in the same namespace with the policy that has the hub template substitution. Commit the ConfigMap CR in Git, and then push to the Git repository being monitored by the Argo CD application. Create a group PGT CR that uses a hub template to pull the required VLAN IDs from the ConfigMap object. For example, add the following YAML snippet to the group PGT CR: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data" (printf "%s-vlan" .ManagedClusterName) | toInt hub}}' Commit the group PolicyGenTemplate CR in Git, and then push to the Git repository being monitored by the Argo CD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs". 22.10.11.4. Syncing new ConfigMap changes to existing PolicyGenTemplate CRs Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a PolicyGenTemplate CR that pulls information from a ConfigMap CR using hub cluster templates. Procedure Update the contents of your ConfigMap CR, and apply the changes in the hub cluster. To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following: Option 1: Delete the existing policy. ArgoCD uses the PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command: USD oc delete policy <policy_name> -n <policy_namespace> Option 2: Apply a special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value every time when you update the ConfigMap . For example: USD oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1" Note You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing . Optional: If it exists, delete the ClusterGroupUpdate CR that contains the policy. For example: USD oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace> Create a new ClusterGroupUpdate CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml : apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated policy: USD oc apply -f cgr-example.yaml 22.11. Updating managed clusters with the Topology Aware Lifecycle Manager You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of multiple clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters. 22.11.1. 
About the Topology Aware Lifecycle Manager configuration The Topology Aware Lifecycle Manager (TALM) manages the deployment of Red Hat Advanced Cluster Management (RHACM) policies for one or more OpenShift Container Platform clusters. Using TALM in a large network of clusters allows the phased rollout of policies to the clusters in limited batches. This helps to minimize possible service disruptions when updating. With TALM, you can control the following actions: The timing of the update The number of RHACM-managed clusters The subset of managed clusters to apply the policies to The update order of the clusters The set of policies remediated to the cluster The order of policies remediated to the cluster The assignment of a canary cluster For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) offers the following features: Create a backup of a deployment before an upgrade Pre-caching images for clusters with limited bandwidth TALM supports the orchestration of the OpenShift Container Platform y-stream and z-stream updates, and day-two operations on y-streams and z-streams. 22.11.2. About managed policies used with Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) uses RHACM policies for cluster updates. TALM can be used to manage the rollout of any policy CR where the remediationAction field is set to inform . Supported use cases include the following: Manual user creation of policy CRs Automatically generated policies from the PolicyGenTemplate custom resource definition (CRD) For policies that update an Operator subscription with manual approval, TALM provides additional functionality that approves the installation of the updated Operator. For more information about managed policies, see Policy Overview in the RHACM documentation. For more information about the PolicyGenTemplate CRD, see the "About the PolicyGenTemplate CRD" section in "Configuring managed clusters with policies and PolicyGenTemplate resources". 22.11.3. Installing the Topology Aware Lifecycle Manager by using the web console You can use the OpenShift Container Platform web console to install the Topology Aware Lifecycle Manager. Prerequisites Install the latest version of the RHACM Operator. Set up a hub cluster with a disconnected registry. Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators → OperatorHub . Search for the Topology Aware Lifecycle Manager from the list of available Operators, and then click Install . Keep the default selection of Installation mode ["All namespaces on the cluster (default)"] and Installed Namespace ("openshift-operators") to ensure that the Operator is installed properly. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators → Installed Operators page. Check that the Operator is installed in the All Namespaces namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads → Pods page and check the logs in any containers in the cluster-group-upgrades-controller-manager pod that are reporting issues. 22.11.4. Installing the Topology Aware Lifecycle Manager by using the CLI You can use the OpenShift CLI ( oc ) to install the Topology Aware Lifecycle Manager (TALM). Prerequisites Install the OpenShift CLI ( oc ). Install the latest version of the RHACM Operator.
Set up a hub cluster with disconnected registry. Log in as a user with cluster-admin privileges. Procedure Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, talm-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: "stable" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f talm-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-operators Example output NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.14.x Topology Aware Lifecycle Manager 4.14.x Succeeded Verify that the TALM is up and running: USD oc get deploy -n openshift-operators Example output NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s 22.11.5. About the ClusterGroupUpgrade CR The Topology Aware Lifecycle Manager (TALM) builds the remediation plan from the ClusterGroupUpgrade CR for a group of clusters. You can define the following specifications in a ClusterGroupUpgrade CR: Clusters in the group Blocking ClusterGroupUpgrade CRs Applicable list of managed policies Number of concurrent updates Applicable canary updates Actions to perform before and after the update Update timing You can control the start time of an update using the enable field in the ClusterGroupUpgrade CR. For example, if you have a scheduled maintenance window of four hours, you can prepare a ClusterGroupUpgrade CR with the enable field set to false . You can set the timeout by configuring the spec.remediationStrategy.timeout setting as follows: spec remediationStrategy: maxConcurrency: 1 timeout: 240 You can use the batchTimeoutAction to determine what happens if an update fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or abort to stop policy remediation for all clusters. Once the timeout elapses, TALM removes all enforce policies to ensure that no further updates are made to clusters. To apply the changes, you set the enabled field to true . For more information see the "Applying update policies to managed clusters" section. As TALM works through remediation of the policies to the specified clusters, the ClusterGroupUpgrade CR can report true or false statuses for a number of conditions. Note After TALM completes a cluster update, the cluster does not update again under the control of the same ClusterGroupUpgrade CR. You must create a new ClusterGroupUpgrade CR in the following cases: When you need to update the cluster again When the cluster changes to non-compliant with the inform policy after being updated 22.11.5.1. Selecting clusters TALM builds a remediation plan and selects clusters based on the following fields: The clusterLabelSelector field specifies the labels of the clusters that you want to update. This consists of a list of the standard label selectors from k8s.io/apimachinery/pkg/apis/meta/v1 . Each selector in the list uses either label value pairs or label expressions. Matches from each selector are added to the final list of clusters along with the matches from the clusterSelector field and the cluster field. The clusters field specifies a list of clusters to update. 
The canaries field specifies the clusters for canary updates. The maxConcurrency field specifies the number of clusters to update in a batch. The actions field specifies beforeEnable actions that TALM takes as it begins the update process, and afterCompletion actions that TALM takes as it completes policy remediation for each cluster. You can use the clusters , clusterLabelSelector , and clusterSelector fields together to create a combined list of clusters. The remediation plan starts with the clusters listed in the canaries field. Each canary cluster forms a single-cluster batch. Sample ClusterGroupUpgrade CR with the enabled field set to false apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: 1 addClusterLabels: upgrade-done: "" deleteClusterLabels: upgrade-running: "" deleteObjects: true beforeEnable: 2 addClusterLabels: upgrade-running: "" backup: false clusters: 3 - spoke1 enable: false 4 managedPolicies: 5 - talm-policy preCaching: false remediationStrategy: 6 canaries: 7 - spoke1 maxConcurrency: 2 8 timeout: 240 clusterLabelSelectors: 9 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 10 status: 11 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 12 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 13 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: 1 Specifies the action that TALM takes when it completes policy remediation for each cluster. 2 Specifies the action that TALM takes as it begins the update process. 3 Defines the list of clusters to update. 4 The enable field is set to false . 5 Lists the user-defined set of policies to remediate. 6 Defines the specifics of the cluster updates. 7 Defines the clusters for canary updates. 8 Defines the maximum number of concurrent updates in a batch. The number of remediation batches is the number of canary clusters, plus the number of clusters, except the canary clusters, divided by the maxConcurrency value. The clusters that are already compliant with all the managed policies are excluded from the remediation plan. 9 Displays the parameters for selecting clusters. 10 Controls what happens if a batch times out. Possible values are abort or continue . If unspecified, the default is continue . 11 Displays information about the status of the updates. 12 The ClustersSelected condition shows that all selected clusters are valid. 13 The Validated condition shows that all selected clusters have been validated. Note Any failures during the update of a canary cluster stops the update process. When the remediation plan is successfully created, you can you set the enable field to true and TALM starts to update the non-compliant clusters with the specified managed policies. 
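For example, after you review the remediation plan, you can start the update by patching the enable field, as in the following sketch. The CR name and namespace are placeholders for your own values:
$ oc patch clustergroupupgrade.ran.openshift.io/<cgu_name> -n <cgu_namespace> \
  --type merge -p '{"spec":{"enable":true}}'
Until you apply this patch, you can still adjust the spec fields, as described in the following note.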
Note You can only make changes to the spec fields if the enable field of the ClusterGroupUpgrade CR is set to false . 22.11.5.2. Validating TALM checks that all specified managed policies are available and correct, and uses the Validated condition to report the status and reasons as follows: true Validation is completed. false Policies are missing or invalid, or an invalid platform image has been specified. 22.11.5.3. Pre-caching Clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed. On single-node OpenShift clusters, you can use pre-caching to avoid this. The container image pre-caching starts when you create a ClusterGroupUpgrade CR with the preCaching field set to true . TALM compares the available disk space with the estimated OpenShift Container Platform image size to ensure that there is enough space. If a cluster has insufficient space, TALM cancels pre-caching for that cluster and does not remediate policies on it. TALM uses the PrecacheSpecValid condition to report status information as follows: true The pre-caching spec is valid and consistent. false The pre-caching spec is incomplete. TALM uses the PrecachingSucceeded condition to report status information as follows: true TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters. false Pre-caching is still in progress for one or more clusters or has failed for all clusters. For more information, see the "Using the container image pre-cache feature" section. 22.11.5.4. Creating a backup For single-node OpenShift, TALM can create a backup of a deployment before an update. If the update fails, you can recover the previous version and restore a cluster to a working state without requiring a reprovision of applications. To use the backup feature, you first create a ClusterGroupUpgrade CR with the backup field set to true . To ensure that the contents of the backup are up to date, the backup is not taken until you set the enable field in the ClusterGroupUpgrade CR to true . TALM uses the BackupSucceeded condition to report the status and reasons as follows: true Backup is completed for all clusters or the backup run has completed but failed for one or more clusters. If backup fails for any cluster, the update fails for that cluster but proceeds for all other clusters. false Backup is still in progress for one or more clusters or has failed for all clusters. For more information, see the "Creating a backup of cluster resources before upgrade" section. 22.11.5.5. Updating clusters TALM enforces the policies following the remediation plan. Enforcing the policies for subsequent batches starts immediately after all the clusters of the current batch are compliant with all the managed policies. If the batch times out, TALM moves on to the next batch. The timeout value of a batch is the spec.timeout field divided by the number of batches in the remediation plan. TALM uses the Progressing condition to report the status and reasons as follows: true TALM is remediating non-compliant policies. false The update is not in progress. Possible reasons for this are: All clusters are compliant with all the managed policies. The update has timed out as policy remediation took too long. Blocking CRs are missing from the system or have not yet completed. The ClusterGroupUpgrade CR is not enabled. Backup is still in progress.
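To follow an update that is in progress, you can inspect the status conditions of the ClusterGroupUpgrade CR on the hub cluster. The following command is a sketch; the CR name and namespace are placeholders, and the jsonpath expression simply tabulates the type, status, and reason of each condition:
$ oc get clustergroupupgrade.ran.openshift.io/<cgu_name> -n <cgu_namespace> \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'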
Note The managed policies apply in the order that they are listed in the managedPolicies field in the ClusterGroupUpgrade CR. One managed policy is applied to the specified clusters at a time. When a cluster complies with the current policy, the managed policy is applied to it. Sample ClusterGroupUpgrade CR in the Progressing state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z' 1 The Progressing fields show that TALM is in the process of remediating policies. 22.11.5.6. Update status TALM uses the Succeeded condition to report the status and reasons as follows: true All clusters are compliant with the specified managed policies. false Policy remediation failed as there were no clusters available for remediation, or because policy remediation took too long for one of the following reasons: The current batch contains canary updates and the cluster in the batch does not comply with all the managed policies within the batch timeout. Clusters did not comply with the managed policies within the timeout value specified in the remediationStrategy field. 
Sample ClusterGroupUpgrade CR in the Succeeded state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: "True" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: "True" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: "False" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: "True" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z' 2 In the Progressing fields, the status is false as the update has completed; clusters are compliant with all the managed policies. 3 The Succeeded fields show that the validations completed successfully. 1 The status field includes a list of clusters and their respective statuses. The status of a cluster can be complete or timedout . Sample ClusterGroupUpgrade CR in the timedout state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z' 1 If a cluster's state is timedout , the currentPolicy field shows the name of the policy and the policy status. 2 The status for succeeded is false and the message indicates that policy remediation took too long. 22.11.5.7. Blocking ClusterGroupUpgrade CRs You can create multiple ClusterGroupUpgrade CRs and control their order of application. 
For example, if you create ClusterGroupUpgrade CR C that blocks the start of ClusterGroupUpgrade CR A, then ClusterGroupUpgrade CR A cannot start until the status of ClusterGroupUpgrade CR C becomes UpgradeComplete . One ClusterGroupUpgrade CR can have multiple blocking CRs. In this case, all the blocking CRs must complete before the upgrade for the current CR can start. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the content of the ClusterGroupUpgrade CRs in the cgu-a.yaml , cgu-b.yaml , and cgu-c.yaml files. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 1 Defines the blocking CRs. The cgu-a update cannot start until cgu-c is complete. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 The cgu-b update cannot start until cgu-a is complete. 
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {} 1 The cgu-c update does not have any blocking CRs. TALM starts the cgu-c update when the enable field is set to true . Create the ClusterGroupUpgrade CRs by running the following command for each relevant CR: USD oc apply -f <name>.yaml Start the update process by running the following command for each relevant CR: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> \ --type merge -p '{"spec":{"enable":true}}' The following examples show ClusterGroupUpgrade CRs where the enable field is set to true : Example for cgu-a with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {} 1 Shows the list of blocking CRs. 
Example for cgu-b with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 Shows the list of blocking CRs. Example for cgu-c with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0 1 The cgu-c update does not have any blocking CRs. 22.11.6. Update policies on managed clusters The Topology Aware Lifecycle Manager (TALM) remediates a set of inform policies for the clusters specified in the ClusterGroupUpgrade CR. TALM remediates inform policies by making enforce copies of the managed RHACM policies. Each copied policy has its own corresponding RHACM placement rule and RHACM placement binding. One by one, TALM adds each cluster from the current batch to the placement rule that corresponds with the applicable managed policy. If a cluster is already compliant with a policy, TALM skips applying that policy on the compliant cluster. TALM then moves on to applying the policy to the non-compliant cluster. 
After TALM completes the updates in a batch, all clusters are removed from the placement rules associated with the copied policies. Then, the update of the next batch starts. If a spoke cluster does not report any compliant state to RHACM, the managed policies on the hub cluster can be missing status information that TALM needs. TALM handles these cases in the following ways: If a policy's status.compliant field is missing, TALM ignores the policy and adds a log entry. Then, TALM continues looking at the policy's status.status field. If a policy's status.status is missing, TALM produces an error. If a cluster's compliance status is missing in the policy's status.status field, TALM considers that cluster to be non-compliant with that policy. The ClusterGroupUpgrade CR's batchTimeoutAction determines what happens if an upgrade fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or specify abort to stop the policy remediation for all clusters. Once the timeout elapses, TALM removes all enforce policies to ensure that no further updates are made to clusters. Example upgrade policy apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.14.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.14 desiredUpdate: version: 4.14.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.14.4 remediationAction: inform severity: low remediationAction: inform For more information about RHACM policies, see Policy overview . Additional resources For more information about the PolicyGenTemplate CRD, see About the PolicyGenTemplate CRD . 22.11.6.1. Configuring Operator subscriptions for managed clusters that you install with TALM Topology Aware Lifecycle Manager (TALM) can only approve the install plan for an Operator if the Subscription custom resource (CR) of the Operator contains the status.state.AtLatestKnown field. Procedure Add the status.state.AtLatestKnown field to the Subscription CR of the Operator: Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: "stable" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1 1 The status.state: AtLatestKnown field is used for the latest Operator version available from the Operator catalog. Note When a new version of the Operator is available in the registry, the associated policy becomes non-compliant. Apply the changed Subscription policy to your managed clusters with a ClusterGroupUpgrade CR. 22.11.6.2. Applying update policies to managed clusters You can update your managed clusters by applying your policies. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the contents of the ClusterGroupUpgrade CR in the cgu-1.yaml file.
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5 1 The name of the policies to apply. 2 The list of clusters to update. 3 The maxConcurrency field signifies the number of clusters updated at the same time. 4 The update timeout in minutes. 5 Controls what happens if a batch times out. Possible values are abort or continue . If unspecified, the default is continue . Create the ClusterGroupUpgrade CR by running the following command: USD oc create -f cgu-1.yaml Check if the ClusterGroupUpgrade CR was created in the hub cluster by running the following command: USD oc get cgu --all-namespaces Example output NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled Check the status of the update by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ { "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Not enabled", 1 "reason": "NotEnabled", "status": "False", "type": "Progressing" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": {} } 1 The spec.enable field in the ClusterGroupUpgrade CR is set to false . 
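The full status object can be long. If you only want to see the top-level conditions while you wait, you can filter the same output with jq . The following is a minimal convenience sketch, not a required step, that assumes the cgu-1 name and the default namespace used above:
# Print one line per condition in the form <type>=<status>: <message>
USD oc get cgu -n default cgu-1 -ojsonpath='{.status.conditions}' | jq -r '.[] | "\(.type)=\(.status): \(.message)"'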
Check the status of the policies by running the following command: USD oc get policies -A Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m 1 The spec.remediationAction field of policies currently applied on the clusters is set to enforce . The managed policies in inform mode from the ClusterGroupUpgrade CR remain in inform mode during the update. Change the value of the spec.enable field to true by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 \ --patch '{"spec":{"enable":true}}' --type=merge Verification Check the status of the update again by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ 1 { "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "All selected clusters are valid", "reason": "ClusterSelectionCompleted", "status": "True", "type": "ClustersSelected", "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "Completed validation", "reason": "ValidationCompleted", "status": "True", "type": "Validated", "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Remediating non-compliant policies", "reason": "InProgress", "status": "True", "type": "Progressing" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": { "currentBatch": 1, "currentBatchStartedAt": 
"2022-02-25T15:54:16Z", "remediationPlanForBatch": { "spoke1": 0, "spoke2": 1 }, "startedAt": "2022-02-25T15:54:16Z" } } 1 Reflects the update progress of the current batch. Run this command again to receive updated information about the progress. If the policies include Operator subscriptions, you can check the installation progress directly on the single-node cluster. Export the KUBECONFIG file of the single-node cluster you want to check the installation progress for by running the following command: USD export KUBECONFIG=<cluster_kubeconfig_absolute_path> Check all the subscriptions present on the single-node cluster and look for the one in the policy you are trying to install through the ClusterGroupUpgrade CR by running the following command: USD oc get subs -A | grep -i <subscription_name> Example output for cluster-logging policy NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable If one of the managed policies includes a ClusterVersion CR, check the status of platform updates in the current batch by running the following command against the spoke cluster: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.14.5 True True 43s Working towards 4.4.14.7: 71 of 735 done (9% complete) Check the Operator subscription by running the following command: USD oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath="{.status}" Check the install plans present on the single-node cluster that is associated with the desired subscription by running the following command: USD oc get installplan -n <subscription_namespace> Example output for cluster-logging Operator NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1 1 The install plans have their Approval field set to Manual and their Approved field changes from false to true after TALM approves the install plan. Note When TALM is remediating a policy containing a subscription, it automatically approves any install plans attached to that subscription. Where multiple install plans are needed to get the operator to the latest known version, TALM might approve multiple install plans, upgrading through one or more intermediate versions to get to the final version. Check if the cluster service version for the Operator of the policy that the ClusterGroupUpgrade is installing reached the Succeeded phase by running the following command: USD oc get csv -n <operator_namespace> Example output for OpenShift Logging Operator NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded 22.11.7. Creating a backup of cluster resources before upgrade For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) can create a backup of a deployment before an upgrade. If the upgrade fails, you can recover the version and restore a cluster to a working state without requiring a reprovision of applications. To use the backup feature you first create a ClusterGroupUpgrade CR with the backup field set to true . To ensure that the contents of the backup are up to date, the backup is not taken until you set the enable field in the ClusterGroupUpgrade CR to true . TALM uses the BackupSucceeded condition to report the status and reasons as follows: true Backup is completed for all clusters or the backup run has completed but failed for one or more clusters. If backup fails for any cluster, the update does not proceed for that cluster. 
false Backup is still in progress for one or more clusters or has failed for all clusters. The backup process running in the spoke clusters can have the following statuses: PreparingToStart The first reconciliation pass is in progress. The TALM deletes any spoke backup namespace and hub view resources that have been created in a failed upgrade attempt. Starting The backup prerequisites and backup job are being created. Active The backup is in progress. Succeeded The backup succeeded. BackupTimeout Artifact backup is partially done. UnrecoverableError The backup has ended with a non-zero exit code. Note If the backup of a cluster fails and enters the BackupTimeout or UnrecoverableError state, the cluster update does not proceed for that cluster. Updates to other clusters are not affected and continue. 22.11.7.1. Creating a ClusterGroupUpgrade CR with backup You can create a backup of a deployment before an upgrade on single-node OpenShift clusters. If the upgrade fails you can use the upgrade-recovery.sh script generated by Topology Aware Lifecycle Manager (TALM) to return the system to its preupgrade state. The backup consists of the following items: Cluster backup A snapshot of etcd and static pod manifests. Content backup Backups of folders, for example, /etc , /usr/local , /var/lib/kubelet . Changed files backup Any files managed by machine-config that have been changed. Deployment A pinned ostree deployment. Images (Optional) Any container images that are in use. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Install Red Hat Advanced Cluster Management (RHACM). Note It is highly recommended that you create a recovery partition. The following is an example SiteConfig custom resource (CR) for a recovery partition of 50 GB: nodes: - hostName: "node-1.example.com" role: "master" rootDeviceHints: hctl: "0:2:0:0" deviceName: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 ... #Disk /dev/disk/by-id/scsi-3600508b400105e210000900000490000: #893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - mount_point: /var/recovery size: 51200 start: 800000 Procedure Save the contents of the ClusterGroupUpgrade CR with the backup and enable fields set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: true managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 To start the update, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check the status of the upgrade in the hub cluster by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "backup": { "clusters": [ "cnfdb2", "cnfdb1" ], "status": { "cnfdb1": "Succeeded", "cnfdb2": "Failed" 1 } }, "computedMaxConcurrency": 1, "conditions": [ { "lastTransitionTime": "2022-04-05T10:37:19Z", "message": "Backup failed for 1 cluster", 2 "reason": "PartiallyDone", 3 "status": "True", 4 "type": "Succeeded" } ], "precaching": { "spec": {} }, "status": {} 1 Backup has failed for one cluster. 2 The message confirms that the backup failed for one cluster. 
3 The backup was partially successful. 4 The backup process has finished. 22.11.7.2. Recovering a cluster after a failed upgrade If an upgrade of a cluster fails, you can manually log in to the cluster and use the backup to return the cluster to its preupgrade state. There are two stages: Rollback If the attempted upgrade included a change to the platform OS deployment, you must roll back to the version before running the recovery script. Important A rollback is only applicable to upgrades from TALM and single-node OpenShift. This process does not apply to rollbacks from any other upgrade type. Recovery The recovery shuts down containers and uses files from the backup partition to relaunch containers and restore clusters. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Install Red Hat Advanced Cluster Management (RHACM). Log in as a user with cluster-admin privileges. Run an upgrade that is configured for backup. Procedure Delete the previously created ClusterGroupUpgrade custom resource (CR) by running the following command: USD oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno Log in to the cluster that you want to recover. Check the status of the platform OS deployment by running the following command: USD ostree admin status Example outputs [root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9 1 The current deployment is pinned. A platform OS deployment rollback is not necessary. [root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca 1 This platform OS deployment is marked for rollback. 2 The deployment is pinned and can be rolled back. To trigger a rollback of the platform OS deployment, run the following command: USD rpm-ostree rollback -r The first phase of the recovery shuts down containers and restores files from the backup partition to the targeted directories. To begin the recovery, run the following command: USD /var/recovery/upgrade-recovery.sh When prompted, reboot the cluster by running the following command: USD systemctl reboot After the reboot, restart the recovery by running the following command: USD /var/recovery/upgrade-recovery.sh --resume Note If the recovery utility fails, you can retry with the --restart option: USD /var/recovery/upgrade-recovery.sh --restart Verification To check the status of the recovery run the following command: USD oc get clusterversion,nodes,clusteroperator Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.4.14.23 True False 86d Cluster version is 4.4.14.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.4.14.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.4.14.23 True False False 86d .............. 
1 The cluster version is available and has the correct version. 2 The node status is Ready . 3 The ClusterOperator object's availability is True . 22.11.8. Using the container image pre-cache feature Single-node OpenShift clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed. Note The time of the update is not set by TALM. You can apply the ClusterGroupUpgrade CR at the beginning of the update by manual application or by external automation. The container image pre-caching starts when the preCaching field is set to true in the ClusterGroupUpgrade CR. TALM uses the PrecacheSpecValid condition to report status information as follows: true The pre-caching spec is valid and consistent. false The pre-caching spec is incomplete. TALM uses the PrecachingSucceeded condition to report status information as follows: true TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters. false Pre-caching is still in progress for one or more clusters or has failed for all clusters. After a successful pre-caching process, you can start remediating policies. The remediation actions start when the enable field is set to true . If there is a pre-caching failure on a cluster, the upgrade fails for that cluster. The upgrade process continues for all other clusters that have a successful pre-cache. The pre-caching process can be in the following statuses: NotStarted This is the initial state all clusters are automatically assigned to on the first reconciliation pass of the ClusterGroupUpgrade CR. In this state, TALM deletes any pre-caching namespace and hub view resources of spoke clusters that remain from incomplete updates. TALM then creates a new ManagedClusterView resource for the spoke pre-caching namespace to verify its deletion in the PrecachePreparing state. PreparingToStart Cleaning up any remaining resources from incomplete updates is in progress. Starting Pre-caching job prerequisites and the job are created. Active The job is in "Active" state. Succeeded The pre-cache job succeeded. PrecacheTimeout The artifact pre-caching is partially done. UnrecoverableError The job ends with a non-zero exit code. 22.11.8.1. Using the container image pre-cache filter The pre-cache feature typically downloads more images than a cluster needs for an update. You can control which pre-cache images are downloaded to a cluster. This decreases download time, and saves bandwidth and storage. You can see a list of all images to be downloaded using the following command: USD oc adm release info <ocp-version> The following ConfigMap example shows how you can exclude images using the excludePrecachePatterns field. apiVersion: v1 kind: ConfigMap metadata: name: cluster-group-upgrade-overrides data: excludePrecachePatterns: | azure 1 aws vsphere alibaba 1 TALM excludes all images with names that include any of the patterns listed here. 22.11.8.2. Creating a ClusterGroupUpgrade CR with pre-caching For single-node OpenShift, the pre-cache feature allows the required container images to be present on the spoke cluster before the update starts. Note For pre-caching, TALM uses the spec.remediationStrategy.timeout value from the ClusterGroupUpgrade CR. You must set a timeout value that allows sufficient time for the pre-caching job to complete. 
When you enable the ClusterGroupUpgrade CR after pre-caching has completed, you can change the timeout value to a duration that is appropriate for the update. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Procedure Save the contents of the ClusterGroupUpgrade CR with the preCaching field set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 1 The preCaching field is set to true , which enables TALM to pull the container images before starting the update. When you want to start pre-caching, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check if the ClusterGroupUpgrade CR exists in the hub cluster by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1 1 The CR is created. Check the status of the pre-caching task by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "conditions": [ { "lastTransitionTime": "2022-01-27T19:07:24Z", "message": "Precaching is required and not done", "reason": "InProgress", "status": "False", "type": "PrecachingSucceeded" }, { "lastTransitionTime": "2022-01-27T19:07:34Z", "message": "Pre-caching spec is valid and consistent", "reason": "PrecacheSpecIsWellFormed", "status": "True", "type": "PrecacheSpecValid" } ], "precaching": { "clusters": [ "cnfdb1" 1 "cnfdb2" ], "spec": { "platformImage": "image.example.io"}, "status": { "cnfdb1": "Active" "cnfdb2": "Succeeded"} } } 1 Displays the list of identified clusters. Check the status of the pre-caching job by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache Example output NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s Check the status of the ClusterGroupUpgrade CR by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output "conditions": [ { "lastTransitionTime": "2022-01-27T19:30:41Z", "message": "The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies", "reason": "UpgradeCompleted", "status": "True", "type": "Ready" }, { "lastTransitionTime": "2022-01-27T19:28:57Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingSucceeded" 1 } 1 The pre-cache tasks are done. 22.11.9. Troubleshooting the Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) is an OpenShift Container Platform Operator that remediates RHACM policies. When issues occur, use the oc adm must-gather command to gather details and logs and to take steps in debugging the issues. For more information about related topics, see the following documentation: Red Hat Advanced Cluster Management for Kubernetes 2.4 Support Matrix Red Hat Advanced Cluster Management Troubleshooting The "Troubleshooting Operator issues" section 22.11.9.1. 
General troubleshooting You can determine the cause of the problem by reviewing the following questions: Is the configuration that you are applying supported? Are the RHACM and the OpenShift Container Platform versions compatible? Are the TALM and RHACM versions compatible? Which of the following components is causing the problem? Section 22.11.9.3, "Managed policies" Section 22.11.9.4, "Clusters" Section 22.11.9.5, "Remediation Strategy" Section 22.11.9.6, "Topology Aware Lifecycle Manager" To ensure that the ClusterGroupUpgrade configuration is functional, you can do the following: Create the ClusterGroupUpgrade CR with the spec.enable field set to false . Wait for the status to be updated and go through the troubleshooting questions. If everything looks as expected, set the spec.enable field to true in the ClusterGroupUpgrade CR. Warning After you set the spec.enable field to true in the ClusterUpgradeGroup CR, the update procedure starts and you cannot edit the CR's spec fields anymore. 22.11.9.2. Cannot modify the ClusterUpgradeGroup CR Issue You cannot edit the ClusterUpgradeGroup CR after enabling the update. Resolution Restart the procedure by performing the following steps: Remove the old ClusterGroupUpgrade CR by running the following command: USD oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name> Check and fix the existing issues with the managed clusters and policies. Ensure that all the clusters are managed clusters and available. Ensure that all the policies exist and have the spec.remediationAction field set to inform . Create a new ClusterGroupUpgrade CR with the correct configurations. USD oc apply -f <ClusterGroupUpgradeCR_YAML> 22.11.9.3. Managed policies Checking managed policies on the system Issue You want to check if you have the correct managed policies on the system. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}' Example output ["group-du-sno-validator-du-validator-policy", "policy2-common-nto-sub-policy", "policy3-common-ptp-sub-policy"] Checking remediationAction mode Issue You want to check if the remediationAction field is set to inform in the spec of the managed policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h Checking policy compliance state Issue You want to check the compliance state of policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h 22.11.9.4. Clusters Checking if managed clusters are present Issue You want to check if the clusters in the ClusterGroupUpgrade CR are managed clusters. 
Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h Alternatively, check the TALM manager logs: Get the name of the TALM manager by running the following command: USD oc get pod -n openshift-operators Example output NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m Check the TALM manager logs by running the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 The error message shows that the cluster is not a managed cluster. Checking if managed clusters are available Issue You want to check if the managed clusters specified in the ClusterGroupUpgrade CR are available. Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2 1 2 The value of the AVAILABLE field is True for the managed clusters. Checking clusterLabelSelector Issue You want to check if the clusterLabelSelector field specified in the ClusterGroupUpgrade CR matches at least one of the managed clusters. Resolution Run the following command: USD oc get managedcluster --selector=upgrade=true 1 1 The label for the clusters you want to update is upgrade:true . Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Checking if canary clusters are present Issue You want to check if the canary clusters are present in the list of clusters. Example ClusterGroupUpgrade CR spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true Resolution Run the following commands: USD oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}' Example output ["spoke1", "spoke3"] Check if the canary clusters are present in the list of clusters that match clusterLabelSelector labels by running the following command: USD oc get managedcluster --selector=upgrade=true Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Note A cluster can be present in spec.clusters and also be matched by the spec.clusterLabelSelector label. Checking the pre-caching status on spoke clusters Check the status of pre-caching by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache 22.11.9.5. 
Remediation Strategy Checking if remediationStrategy is present in the ClusterGroupUpgrade CR Issue You want to check if the remediationStrategy is present in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}' Example output {"maxConcurrency":2, "timeout":240} Checking if maxConcurrency is specified in the ClusterGroupUpgrade CR Issue You want to check if the maxConcurrency is specified in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}' Example output 2 22.11.9.6. Topology Aware Lifecycle Manager Checking condition message and status in the ClusterGroupUpgrade CR Issue You want to check the value of the status.conditions field in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.conditions}' Example output {"lastTransitionTime":"2022-02-17T22:25:28Z", "message":"Missing managed policies:[policyList]", "reason":"NotAllManagedPoliciesExist", "status":"False", "type":"Validated"} Checking corresponding copied policies Issue You want to check if every policy from status.managedPoliciesForUpgrade has a corresponding policy in status.copiedPolicies . Resolution Run the following command: USD oc get cgu lab-upgrade -oyaml Example output status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default Checking if status.remediationPlan was computed Issue You want to check if status.remediationPlan is computed. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}' Example output [["spoke2", "spoke3"]] Errors in the TALM manager container Issue You want to check the logs of the manager container of TALM. Resolution Run the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 Displays the error. Clusters are not compliant to some policies after a ClusterGroupUpgrade CR has completed Issue The policy compliance status that TALM uses to decide if remediation is needed has not yet fully updated for all clusters. This may be because: The CGU was run too soon after a policy was created or updated. The remediation of a policy affects the compliance of subsequent policies in the ClusterGroupUpgrade CR. Resolution Create and apply a new ClusterGroupUpdate CR with the same specification. Auto-created ClusterGroupUpgrade CR in the GitOps ZTP workflow has no managed policies Issue If there are no policies for the managed cluster when the cluster becomes Ready , a ClusterGroupUpgrade CR with no policies is auto-created. Upon completion of the ClusterGroupUpgrade CR, the managed cluster is labeled as ztp-done . If the PolicyGenTemplate CRs were not pushed to the Git repository within the required time after SiteConfig resources were pushed, this might result in no policies being available for the target cluster when the cluster became Ready . 
Resolution Verify that the policies you want to apply are available on the hub cluster, then create a ClusterGroupUpgrade CR with the required policies. You can either manually create the ClusterGroupUpgrade CR or trigger auto-creation again. To trigger auto-creation of the ClusterGroupUpgrade CR, remove the ztp-done label from the cluster and delete the empty ClusterGroupUpgrade CR that was previously created in the ztp-install namespace. Pre-caching has failed Issue Pre-caching might fail for one of the following reasons: There is not enough free space on the node. For a disconnected environment, the pre-cache image has not been properly mirrored. There was an issue when creating the pod. Resolution To check if pre-caching has failed due to insufficient space, check the log of the pre-caching pod in the node. Find the name of the pod using the following command: USD oc get pods -n openshift-talo-pre-cache Check the logs to see if the error is related to insufficient space using the following command: USD oc logs -n openshift-talo-pre-cache <pod name> If there is no log, check the pod status using the following command: USD oc describe pod -n openshift-talo-pre-cache <pod name> If the pod does not exist, check the job status to see why it could not create a pod using the following command: USD oc describe job -n openshift-talo-pre-cache pre-cache Additional resources For information about troubleshooting, see OpenShift Container Platform Troubleshooting Operator Issues . For more information about using Topology Aware Lifecycle Manager in the ZTP workflow, see Updating managed policies with Topology Aware Lifecycle Manager . For more information about the PolicyGenTemplate CRD, see About the PolicyGenTemplate CRD . 22.12. Updating managed clusters in a disconnected environment with the Topology Aware Lifecycle Manager You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of OpenShift Container Platform managed clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters. Additional resources For more information about the Topology Aware Lifecycle Manager, see About the Topology Aware Lifecycle Manager . 22.12.1. Updating clusters in a disconnected environment You can upgrade managed clusters and Operators for managed clusters that you have deployed using GitOps Zero Touch Provisioning (ZTP) and Topology Aware Lifecycle Manager (TALM). 22.12.1.1. Setting up the environment TALM can perform both platform and Operator updates. You must mirror both the platform image and Operator images that you want to update to in your mirror registry before you can use TALM to update your disconnected clusters. Complete the following steps to mirror the images: For platform updates, you must perform the following steps: Mirror the desired OpenShift Container Platform image repository. Ensure that the desired platform image is mirrored by following the "Mirroring the OpenShift Container Platform image repository" procedure linked in the Additional resources.
Save the contents of the imageContentSources section in the imageContentSources.yaml file: Example output imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Save the image signature of the desired platform image that was mirrored. You must add the image signature to the PolicyGenTemplate CR for platform updates. To get the image signature, perform the following steps: Specify the desired OpenShift Container Platform tag by running the following command: USD OCP_RELEASE_NUMBER=<release_version> Specify the architecture of the cluster by running the following command: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Get the release image digest from Quay by running the following command USD DIGEST="USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')" Set the digest algorithm by running the following command: USD DIGEST_ALGO="USD{DIGEST%%:*}" Set the digest signature by running the following command: USD DIGEST_ENCODED="USD{DIGEST#*:}" Get the image signature from the mirror.openshift.com website by running the following command: USD SIGNATURE_BASE64=USD(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1" | base64 -w0 && echo) Save the image signature to the checksum-<OCP_RELEASE_NUMBER>.yaml file by running the following commands: USD cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF Prepare the update graph. You have two options to prepare the update graph: Use the OpenShift Update Service. For more information about how to set up the graph on the hub cluster, see Deploy the operator for OpenShift Update Service and Build the graph data init container . Make a local copy of the upstream graph. Host the update graph on an http or https server in the disconnected environment that has access to the managed cluster. To download the update graph, use the following command: USD curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.14 -o ~/upgrade-graph_stable-4.14 For Operator updates, you must perform the following task: Mirror the Operator catalogs. Ensure that the desired operator images are mirrored by following the procedure in the "Mirroring Operator catalogs for use with disconnected clusters" section. Additional resources For more information about how to update GitOps Zero Touch Provisioning (ZTP), see Upgrading GitOps ZTP . For more information about how to mirror an OpenShift Container Platform image repository, see Mirroring the OpenShift Container Platform image repository . For more information about how to mirror Operator catalogs for disconnected clusters, see Mirroring Operator catalogs for use with disconnected clusters . For more information about how to prepare the disconnected environment and mirroring the desired image repository, see Preparing the disconnected environment . For more information about update channels and releases, see Understanding update channels and releases . 22.12.1.2. Performing a platform update You can perform a platform update with the TALM. 
Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update GitOps Zero Touch Provisioning (ZTP) to the latest version. Provision one or more managed clusters with GitOps ZTP. Mirror the desired image repository. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Create a PolicyGenTemplate CR for the platform update: Save the following contents of the PolicyGenTemplate CR in the du-upgrade.yaml file. Example of PolicyGenTemplate for platform update apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: "platform-upgrade-prep" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: "platform-upgrade-prep" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: "platform-upgrade" metadata: name: version spec: channel: "stable-4.14" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.14 desiredUpdate: version: 4.14.4 status: history: - version: 4.14.4 state: "Completed" 1 The ConfigMap CR contains the signature of the desired release image to update to. 2 Shows the image signature of the desired OpenShift Container Platform release. Get the signature from the checksum-USD{OCP_RELEASE_NUMBER}.yaml file you saved when following the procedures in the "Setting up the environment" section. 3 Shows the mirror repository that contains the desired OpenShift Container Platform image. Get the mirrors from the imageContentSources.yaml file that you saved when following the procedures in the "Setting up the environment" section. 4 Shows the ClusterVersion CR to trigger the update. The channel , upstream , and desiredVersion fields are all required for image pre-caching. The PolicyGenTemplate CR generates two policies: The du-upgrade-platform-upgrade-prep policy does the preparation work for the platform update. It creates the ConfigMap CR for the desired release image signature, creates the image content source of the mirrored release image repository, and updates the cluster version with the desired update channel and the update graph reachable by the managed cluster in the disconnected environment. The du-upgrade-platform-upgrade policy is used to perform platform upgrade. Add the du-upgrade.yaml file contents to the kustomization.yaml file located in the GitOps ZTP Git repository for the PolicyGenTemplate CRs and push the changes to the Git repository. ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster. Check the created policies by running the following command: USD oc get policies -A | grep platform-upgrade Create the ClusterGroupUpdate CR for the platform update with the spec.enable field set to false . 
Save the content of the platform update ClusterGroupUpdate CR with the du-upgrade-platform-upgrade-prep and the du-upgrade-platform-upgrade policies and the target clusters to the cgu-platform-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false Apply the ClusterGroupUpdate CR to the hub cluster by running the following command: USD oc apply -f cgu-platform-upgrade.yml Optional: Pre-cache the images for the platform update. Enable pre-caching in the ClusterGroupUpdate CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster: USD oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}' Start the platform update: Enable the cgu-platform-upgrade policy and disable pre-caching by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Additional resources For more information about mirroring the images in a disconnected environment, see Preparing the disconnected environment . 22.12.1.3. Performing an Operator update You can perform an Operator update with the TALM. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update GitOps Zero Touch Provisioning (ZTP) to the latest version. Provision one or more managed clusters with GitOps ZTP. Mirror the desired index image, bundle images, and all Operator images referenced in the bundle images. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Update the PolicyGenTemplate CR for the Operator update. Update the du-upgrade PolicyGenTemplate CR with the following additional contents in the du-upgrade.yaml file: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.14 1 updateStrategy: 2 registryPoll: interval: 1h status: connectionState: lastObservedState: READY 3 1 The index image URL contains the desired Operator images. If the index images are always pushed to the same image name and tag, this change is not needed. 2 Set how frequently the Operator Lifecycle Manager (OLM) polls the index image for new Operator versions with the registryPoll.interval field. This change is not needed if a new index image tag is always pushed for y-stream and z-stream Operator updates. The registryPoll.interval field can be set to a shorter interval to expedite the update, however shorter intervals increase computational load. 
To counteract this, you can restore registryPoll.interval to the default value once the update is complete. 3 Last observed state of the catalog connection. The READY value ensures that the CatalogSource policy is ready, indicating that the index pod is pulled and is running. This way, TALM upgrades the Operators based on up-to-date policy compliance states. This update generates one policy, du-upgrade-operator-catsrc-policy , to update the redhat-operators-disconnected catalog source with the new index images that contain the desired Operators images. Note If you want to use the image pre-caching for Operators and there are Operators from a different catalog source other than redhat-operators-disconnected , you must perform the following tasks: Prepare a separate catalog source policy with the new index image or registry poll interval update for the different catalog source. Prepare a separate subscription policy for the desired Operators that are from the different catalog source. For example, the desired SRIOV-FEC Operator is available in the certified-operators catalog source. To update the catalog source and the Operator subscription, add the following contents to generate two policies, du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: ... - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "fec-catsrc-policy" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: "subscriptions-fec-policy" spec: channel: "stable" source: certified-operators Remove the specified subscriptions channels in the common PolicyGenTemplate CR, if they exist. The default subscriptions channels from the GitOps ZTP image are used for the update. Note The default channel for the Operators applied through GitOps ZTP 4.14 is stable , except for the performance-addon-operator . As of OpenShift Container Platform 4.11, the performance-addon-operator functionality was moved to the node-tuning-operator . For the 4.10 release, the default channel for PAO is v4.10 . You can also specify the default channels in the common PolicyGenTemplate CR. Push the PolicyGenTemplate CRs updates to the GitOps ZTP Git repository. ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster. Check the created policies by running the following command: USD oc get policies -A | grep -E "catsrc-policy|subscription" Apply the required catalog source updates before starting the Operator update. Save the content of the ClusterGroupUpgrade CR named operator-upgrade-prep with the catalog source policies and the target managed clusters to the cgu-operator-upgrade-prep.yml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1 Apply the policy to the hub cluster by running the following command: USD oc apply -f cgu-operator-upgrade-prep.yml Monitor the update process. 
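For example, you can watch the status conditions of the cgu-operator-upgrade-prep CR from the hub cluster while the catalog source policy is remediated (a minimal sketch; the CR name and namespace match the example above): USD oc get cgu -n default cgu-operator-upgrade-prep -ojsonpath='{.status.conditions}' | jq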
Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies -A | grep -E "catsrc-policy" Create the ClusterGroupUpgrade CR for the Operator update with the spec.enable field set to false . Save the content of the Operator update ClusterGroupUpgrade CR with the du-upgrade-operator-catsrc-policy policy and the subscription policies created from the common PolicyGenTemplate and the target clusters to the cgu-operator-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false 1 The policy is needed by the image pre-caching feature to retrieve the operator images from the catalog source. 2 The policy contains Operator subscriptions. If you have followed the structure and content of the reference PolicyGenTemplates , all Operator subscriptions are grouped into the common-subscriptions-policy policy. Note One ClusterGroupUpgrade CR can only pre-cache the images of the desired Operators defined in the subscription policy from one catalog source included in the ClusterGroupUpgrade CR. If the desired Operators are from different catalog sources, such as in the example of the SRIOV-FEC Operator, another ClusterGroupUpgrade CR must be created with du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy policies for the SRIOV-FEC Operator images pre-caching and update. Apply the ClusterGroupUpgrade CR to the hub cluster by running the following command: USD oc apply -f cgu-operator-upgrade.yml Optional: Pre-cache the images for the Operator update. Before starting image pre-caching, verify the subscription policy is NonCompliant at this point by running the following command: USD oc get policy common-subscriptions-policy -n <policy_namespace> Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d Enable pre-caching in the ClusterGroupUpgrade CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the managed cluster: USD oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}' Check if the pre-caching is completed before starting the update by running the following command: USD oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq Example output [ { "lastTransitionTime": "2022-03-08T20:49:08.000Z", "message": "The ClusterGroupUpgrade CR is not enabled", "reason": "UpgradeNotStarted", "status": "False", "type": "Ready" }, { "lastTransitionTime": "2022-03-08T20:55:30.000Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingDone" } ] Start the Operator update. Enable the cgu-operator-upgrade ClusterGroupUpgrade CR and disable pre-caching to start the Operator update by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. 
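In addition to the policy state, you can confirm on the managed cluster that the updated Operators report a healthy install (a sketch; the grep pattern is illustrative, adjust it to match the Operators in your subscription policy): USD oc get csv -A | grep -i -E "ptp|sriov"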
Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Additional resources For more information about updating GitOps ZTP, see Upgrading GitOps ZTP . Troubleshooting missed Operator updates due to out-of-date policy compliance states . 22.12.1.3.1. Troubleshooting missed Operator updates due to out-of-date policy compliance states In some scenarios, Topology Aware Lifecycle Manager (TALM) might miss Operator updates due to an out-of-date policy compliance state. After a catalog source update, it takes time for the Operator Lifecycle Manager (OLM) to update the subscription status. The status of the subscription policy might continue to show as compliant while TALM decides whether remediation is needed. As a result, the Operator specified in the subscription policy does not get upgraded. To avoid this scenario, add another catalog source configuration to the PolicyGenTemplate and specify this configuration in the subscription for any Operators that require an update. Procedure Add a catalog source configuration in the PolicyGenTemplate resource: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.14 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olm/redhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY 1 Update the name for the new configuration. 2 Update the display name for the new configuration. 3 Update the index image URL. This fileName.spec.image field overrides any configuration in the DefaultCatsrc.yaml file. Update the Subscription resource to point to the new configuration for Operators that require an update: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namespace # ... spec: source: redhat-operators-disconnected-v2 1 # ... 1 Enter the name of the additional catalog source configuration that you defined in the PolicyGenTemplate resource. 22.12.1.4. Performing a platform and an Operator update together You can perform a platform and an Operator update at the same time. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update GitOps Zero Touch Provisioning (ZTP) to the latest version. Provision one or more managed clusters with GitOps ZTP. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Create the PolicyGenTemplate CR for the updates by following the steps described in the "Performing a platform update" and "Performing an Operator update" sections. Apply the prep work for the platform and the Operator update.
Save the content of the ClusterGroupUpgrade CR with the policies for platform update preparation work, catalog source updates, and target clusters to the cgu-platform-operator-upgrade-prep.yml file, for example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true Apply the cgu-platform-operator-upgrade-prep.yml file to the hub cluster by running the following command: USD oc apply -f cgu-platform-operator-upgrade-prep.yml Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Create the ClusterGroupUpdate CR for the platform and the Operator update with the spec.enable field set to false . Save the contents of the platform and Operator update ClusterGroupUpdate CR with the policies and the target clusters to the cgu-platform-operator-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false 1 This is the platform update policy. 2 This is the policy containing the catalog source information for the Operators to be updated. It is needed for the pre-caching feature to determine which Operator images to download to the managed cluster. 3 This is the policy to update the Operators. Apply the cgu-platform-operator-upgrade.yml file to the hub cluster by running the following command: USD oc apply -f cgu-platform-operator-upgrade.yml Optional: Pre-cache the images for the platform and the Operator update. Enable pre-caching in the ClusterGroupUpgrade CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the managed cluster: USD oc get jobs,pods -n openshift-talm-pre-cache Check if the pre-caching is completed before starting the update by running the following command: USD oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}' Start the platform and Operator update. Enable the cgu-du-upgrade ClusterGroupUpgrade CR to start the platform and the Operator update by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Note The CRs for the platform and Operator updates can be created from the beginning by configuring the setting to spec.enable: true . In this case, the update starts immediately after pre-caching completes and there is no need to manually enable the CR. Both pre-caching and the update create extra resources, such as policies, placement bindings, placement rules, managed cluster actions, and managed cluster view, to help complete the procedures. 
Setting the afterCompletion.deleteObjects field to true deletes all these resources after the updates complete. 22.12.1.5. Removing Performance Addon Operator subscriptions from deployed clusters In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11 or later, these functions are part of the Node Tuning Operator. Do not install the Performance Addon Operator on clusters running OpenShift Container Platform 4.11 or later. If you upgrade to OpenShift Container Platform 4.11 or later, the Node Tuning Operator automatically removes the Performance Addon Operator. Note You need to remove any policies that create Performance Addon Operator subscriptions to prevent a re-installation of the Operator. The reference DU profile includes the Performance Addon Operator in the PolicyGenTemplate CR common-ranGen.yaml . To remove the subscription from deployed managed clusters, you must update common-ranGen.yaml . Note If you install Performance Addon Operator 4.10.3-5 or later on OpenShift Container Platform 4.11 or later, the Performance Addon Operator detects the cluster version and automatically hibernates to avoid interfering with the Node Tuning Operator functions. However, to ensure best performance, remove the Performance Addon Operator from your OpenShift Container Platform 4.11 clusters. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for ArgoCD. Update to OpenShift Container Platform 4.11 or later. Log in as a user with cluster-admin privileges. Procedure Change the complianceType to mustnothave for the Performance Addon Operator namespace, Operator group, and subscription in the common-ranGen.yaml file. - fileName: PaoSubscriptionNS.yaml policyName: "subscriptions-policy" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: "subscriptions-policy" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: "subscriptions-policy" complianceType: mustnothave Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The status of the common-subscriptions-policy policy changes to Non-Compliant . Apply the change to your target clusters by using the Topology Aware Lifecycle Manager. For more information about rolling out configuration changes, see the "Additional resources" section. Monitor the process. When the status of the common-subscriptions-policy policy for a target cluster is Compliant , the Performance Addon Operator has been removed from the cluster. Get the status of the common-subscriptions-policy by running the following command: USD oc get policy -n ztp-common common-subscriptions-policy Delete the Performance Addon Operator namespace, Operator group and subscription CRs from .spec.sourceFiles in the common-ranGen.yaml file. Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The policy remains compliant. 22.12.1.6. Pre-caching user-specified images with TALM on single-node OpenShift clusters You can pre-cache application-specific workload images on single-node OpenShift clusters before upgrading your applications. 
You can specify the configuration options for the pre-caching jobs using the following custom resources (CR): PreCachingConfig CR ClusterGroupUpgrade CR Note All fields in the PreCachingConfig CR are optional. Example PreCachingConfig CR apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: exampleconfig-ns spec: overrides: 1 platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable spaceRequired: 30 Gi 2 excludePrecachePatterns: 3 - aws - vsphere additionalImages: 4 - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09 1 By default, TALM automatically populates the platformImage , operatorsIndexes , and the operatorsPackagesAndChannels fields from the policies of the managed clusters. You can specify values to override the default TALM-derived values for these fields. 2 Specifies the minimum required disk space on the cluster. If unspecified, TALM defines a default value for OpenShift Container Platform images. The disk space field must include an integer value and the storage unit. For example: 40 GiB , 200 MB , 1 TiB . 3 Specifies the images to exclude from pre-caching based on image name matching. 4 Specifies the list of additional images to pre-cache. Example ClusterGroupUpgrade CR with PreCachingConfig CR reference apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu spec: preCaching: true 1 preCachingConfigRef: name: exampleconfig 2 namespace: exampleconfig-ns 3 1 The preCaching field set to true enables the pre-caching job. 2 The preCachingConfigRef.name field specifies the PreCachingConfig CR that you want to use. 3 The preCachingConfigRef.namespace specifies the namespace of the PreCachingConfig CR that you want to use. 22.12.1.6.1. Creating the custom resources for pre-caching You must create the PreCachingConfig CR before or concurrently with the ClusterGroupUpgrade CR. Create the PreCachingConfig CR with the list of additional images you want to pre-cache. apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: default 1 spec: [...] spaceRequired: 30Gi 2 additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09 1 The namespace must be accessible to the hub cluster. 2 It is recommended to set the minimum disk space required field to ensure that there is sufficient storage space for the pre-cached images. 
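Apply the PreCachingConfig CR to the hub cluster (a minimal sketch; the file name is an example and the namespace must match the preCachingConfigRef namespace in the ClusterGroupUpgrade CR that you create in the next step): USD oc apply -f precachingconfig.yaml Assuming the resource name follows the kind, you can confirm that the CR exists by running the following command: USD oc get precachingconfig exampleconfig -n default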
Create a ClusterGroupUpgrade CR with the preCaching field set to true and specify the PreCachingConfig CR created in the step: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu namespace: default spec: clusters: - sno1 - sno2 preCaching: true preCachingConfigRef: - name: exampleconfig namespace: default managedPolicies: - du-upgrade-platform-upgrade - du-upgrade-operator-catsrc-policy - common-subscriptions-policy remediationStrategy: timeout: 240 Warning Once you install the images on the cluster, you cannot change or delete them. When you want to start pre-caching the images, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f cgu.yaml TALM verifies the ClusterGroupUpgrade CR. From this point, you can continue with the TALM pre-caching workflow. Note All sites are pre-cached concurrently. Verification Check the pre-caching status on the hub cluster where the ClusterUpgradeGroup CR is applied by running the following command: USD oc get cgu <cgu_name> -n <cgu_namespace> -oyaml Example output precaching: spec: platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: - aws - vsphere additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09 spaceRequired: "30" status: sno1: Starting sno2: Starting The pre-caching configurations are validated by checking if the managed policies exist. 
Valid configurations of the ClusterGroupUpgrade and the PreCachingConfig CRs result in the following statuses: Example output of valid CRs - lastTransitionTime: "2023-01-01T00:00:01Z" message: All selected clusters are valid reason: ClusterSelectionCompleted status: "True" type: ClusterSelected - lastTransitionTime: "2023-01-01T00:00:02Z" message: Completed validation reason: ValidationCompleted status: "True" type: Validated - lastTransitionTime: "2023-01-01T00:00:03Z" message: Precaching spec is valid and consistent reason: PrecacheSpecIsWellFormed status: "True" type: PrecacheSpecValid - lastTransitionTime: "2023-01-01T00:00:04Z" message: Precaching in progress for 1 clusters reason: InProgress status: "False" type: PrecachingSucceeded Example of an invalid PreCachingConfig CR Type: "PrecacheSpecValid" Status: False, Reason: "PrecacheSpecIncomplete" Message: "Precaching spec is incomplete: failed to get PreCachingConfig resource due to PreCachingConfig.ran.openshift.io "<pre-caching_cr_name>" not found" You can find the pre-caching job by running the following command on the managed cluster: USD oc get jobs -n openshift-talo-pre-cache Example of pre-caching job in progress NAME COMPLETIONS DURATION AGE pre-cache 0/1 1s 1s You can check the status of the pod created for the pre-caching job by running the following command: USD oc describe pod pre-cache -n openshift-talo-pre-cache Example of pre-caching job in progress Type Reason Age From Message Normal SuccesfulCreate 19s job-controller Created pod: pre-cache-abcd1 You can get live updates on the status of the job by running the following command: USD oc logs -f pre-cache-abcd1 -n openshift-talo-pre-cache To verify the pre-cache job is successfully completed, run the following command: USD oc describe pod pre-cache -n openshift-talo-pre-cache Example of completed pre-cache job Type Reason Age From Message Normal SuccesfulCreate 5m19s job-controller Created pod: pre-cache-abcd1 Normal Completed 19s job-controller Job completed To verify that the images are successfully pre-cached on the single-node OpenShift, do the following: Enter into the node in debug mode: USD oc debug node/cnfdf00.example.lab Change root to host : USD chroot /host/ Search for the desired images: USD sudo podman images | grep <operator_name> Additional resources For more information about the TALM pre-caching workflow, see Using the container image pre-cache feature . 22.12.2. About the auto-created ClusterGroupUpgrade CR for GitOps ZTP TALM has a controller called ManagedClusterForCGU that monitors the Ready state of the ManagedCluster CRs on the hub cluster and creates the ClusterGroupUpgrade CRs for GitOps Zero Touch Provisioning (ZTP). For any managed cluster in the Ready state without a ztp-done label applied, the ManagedClusterForCGU controller automatically creates a ClusterGroupUpgrade CR in the ztp-install namespace with its associated RHACM policies that are created during the GitOps ZTP process. TALM then remediates the set of configuration policies that are listed in the auto-created ClusterGroupUpgrade CR to push the configuration CRs to the managed cluster. If there are no policies for the managed cluster at the time when the cluster becomes Ready , a ClusterGroupUpgrade CR with no policies is created. Upon completion of the ClusterGroupUpgrade the managed cluster is labeled as ztp-done . If there are policies that you want to apply for that managed cluster, manually create a ClusterGroupUpgrade as a day-2 operation. 
Example of an auto-created ClusterGroupUpgrade CR for GitOps ZTP apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: "46666836" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: "" 1 deleteClusterLabels: ztp-running: "" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: "" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240 1 Applied to the managed cluster when TALM completes the cluster configuration. 2 Applied to the managed cluster when TALM starts deploying the configuration policies. 22.13. Expanding single-node OpenShift clusters with GitOps ZTP You can expand single-node OpenShift clusters with GitOps Zero Touch Provisioning (ZTP). When you add worker nodes to single-node OpenShift clusters, the original single-node OpenShift cluster retains the control plane node role. Adding worker nodes does not require any downtime for the existing single-node OpenShift cluster. Note Although there is no specified limit on the number of worker nodes that you can add to a single-node OpenShift cluster, you must reevaluate the reserved CPU allocation on the control plane node for the additional worker nodes. If you require workload partitioning on the worker node, you must deploy and remediate the managed cluster policies on the hub cluster before installing the node. This way, the workload partitioning MachineConfig objects are rendered and associated with the worker machine config pool before the GitOps ZTP workflow applies the MachineConfig ignition file to the worker node. It is recommended that you first remediate the policies, and then install the worker node. If you create the workload partitioning manifests after installing the worker node, you must drain the node manually and delete all the pods managed by daemon sets. When the managing daemon sets create the new pods, the new pods undergo the workload partitioning process. Important Adding worker nodes to single-node OpenShift clusters with GitOps ZTP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources For more information about single-node OpenShift clusters tuned for vDU application deployments, see Reference configuration for deploying vDUs on single-node OpenShift . For more information about worker nodes, see Adding worker nodes to single-node OpenShift clusters . For information about removing a worker node from an expanded single-node OpenShift cluster, see Removing managed cluster nodes by using the command line interface . 22.13.1.
Applying profiles to the worker node You can configure the additional worker node with a DU profile. You can apply a RAN distributed unit (DU) profile to the worker node cluster using the GitOps Zero Touch Provisioning (ZTP) common, group, and site-specific PolicyGenTemplate resources. The GitOps ZTP pipeline that is linked to the ArgoCD policies application includes the following CRs that you can find in the out/argocd/example/policygentemplates folder when you extract the ztp-site-generate container: common-ranGen.yaml group-du-sno-ranGen.yaml example-sno-site.yaml ns.yaml kustomization.yaml Configuring the DU profile on the worker node is considered an upgrade. To initiate the upgrade flow, you must update the existing policies or create additional ones. Then, you must create a ClusterGroupUpgrade CR to reconcile the policies in the group of clusters. 22.13.2. (Optional) Ensuring PTP and SR-IOV daemon selector compatibility If the DU profile was deployed using the GitOps Zero Touch Provisioning (ZTP) plugin version 4.11 or earlier, the PTP and SR-IOV Operators might be configured to place the daemons only on nodes labelled as master . This configuration prevents the PTP and SR-IOV daemons from operating on the worker node. If the PTP and SR-IOV daemon node selectors are incorrectly configured on your system, you must change the daemons before proceeding with the worker DU profile configuration. Procedure Check the daemon node selector settings of the PTP Operator on one of the spoke clusters: USD oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq Example output for PTP Operator {"daemonNodeSelector":{"node-role.kubernetes.io/master":""}} 1 1 If the node selector is set to master , the spoke was deployed with the version of the GitOps ZTP plugin that requires changes. Check the daemon node selector settings of the SR-IOV Operator on one of the spoke clusters: USD oc get sriovoperatorconfig/default -n \ openshift-sriov-network-operator -ojsonpath='{.spec}' | jq Example output for SR-IOV Operator {"configDaemonNodeSelector":{"node-role.kubernetes.io/worker":""},"disableDrain":false,"enableInjector":true,"enableOperatorWebhook":true} 1 1 If the node selector is set to master , the spoke was deployed with the version of the GitOps ZTP plugin that requires changes. In the group policy, add the following complianceType and spec entries: spec: - fileName: PtpOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: "" - fileName: SriovOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: "" Important Changing the daemonNodeSelector field causes temporary PTP synchronization loss and SR-IOV connectivity loss. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. 22.13.3. PTP and SR-IOV node selector compatibility The PTP configuration resources and SR-IOV network node policies use node-role.kubernetes.io/master: "" as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. However, the node selector must be changed to select both node types, for example with the "node-role.kubernetes.io/worker" label. 22.13.4. 
Using PolicyGenTemplate CRs to apply worker node policies to worker nodes You can create policies for worker nodes. Procedure Create the following policy template: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-sno-workers" namespace: "example-sno" spec: bindingRules: sites: "example-sno" 1 mcp: "worker" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: "config-policy" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: "4-47" reserved: "0-3" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: "config-policy" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker 1 The policies are applied to all clusters with this label. 2 The MCP field must be set to worker . 3 This generic MachineConfig CR is used to configure workload partitioning on the worker node. 4 The cpu.isolated and cpu.reserved fields must be configured for each particular hardware platform. 5 The cmdline_crash CPU set must match the cpu.isolated set in the PerformanceProfile section. A generic MachineConfig CR is used to configure workload partitioning on the worker node. You can generate the content of crio and kubelet configuration files. Add the created policy template to the Git repository monitored by the ArgoCD policies application. Add the policy in the kustomization.yaml file. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. To remediate the new policies to your spoke cluster, create a TALM custom resource: USD cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF 22.13.5. Adding worker nodes to single-node OpenShift clusters with GitOps ZTP You can add one or more worker nodes to existing single-node OpenShift clusters to increase available CPU resources in the cluster. 
Prerequisites Install and configure RHACM 2.6 or later in an OpenShift Container Platform 4.11 or later bare-metal hub cluster Install Topology Aware Lifecycle Manager in the hub cluster Install Red Hat OpenShift GitOps in the hub cluster Use the GitOps ZTP ztp-site-generate container image version 4.12 or later Deploy a managed single-node OpenShift cluster with GitOps ZTP Configure the Central Infrastructure Management as described in the RHACM documentation Configure the DNS serving the cluster to resolve the internal API endpoint api-int.<cluster_name>.<base_domain> Procedure If you deployed your cluster by using the example-sno.yaml SiteConfig manifest, add your new worker node to the spec.clusters['example-sno'].nodes list: nodes: - hostName: "example-node2.example.com" role: "worker" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node2-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" bootMode: "UEFI" nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up macAddress: "AA:BB:CC:DD:EE:11" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Create a BMC authentication secret for the new host, as referenced by the bmcCredentialsName field in the spec.nodes section of your SiteConfig file: apiVersion: v1 data: password: "password" username: "username" kind: Secret metadata: name: "example-node2-bmh-secret" namespace: example-sno type: Opaque Commit the changes in Git, and then push to the Git repository that is being monitored by the GitOps ZTP ArgoCD application. When the ArgoCD cluster application synchronizes, two new manifests appear on the hub cluster generated by the GitOps ZTP plugin: BareMetalHost NMStateConfig Important The cpuset field should not be configured for the worker node. Workload partitioning for worker nodes is added through management policies after the node installation is complete. Verification You can monitor the installation process in several ways. Check if the preprovisioning images are created by running the following command: USD oc get ppimg -n example-sno Example output NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated Check the state of the bare-metal hosts: USD oc get bmh -n example-sno Example output NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1 1 The provisioning state indicates that node booting from the installation media is in progress. Continuously monitor the installation process: Watch the agent install process by running the following command: USD oc get agent -n example-sno --watch Example output NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...]
14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done When the worker node installation is finished, the worker node certificates are approved automatically. At this point, the worker appears in the ManagedClusterInfo status. Run the following command to see the status: USD oc get managedclusterinfo/example-sno -n example-sno -o \ jsonpath='{range .status.nodeList[*]}{.name}{"\t"}{.conditions}{"\t"}{.labels}{"\n"}{end}' Example output example-sno [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/master":"","node-role.kubernetes.io/worker":""} example-node2 [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/worker":""} 22.14. Pre-caching images for single-node OpenShift deployments In environments with limited bandwidth where you use the GitOps Zero Touch Provisioning (ZTP) solution to deploy a large number of clusters, you want to avoid downloading all the images that are required for bootstrapping and installing OpenShift Container Platform. The limited bandwidth at remote single-node OpenShift sites can cause long deployment times. The factory-precaching-cli tool allows you to pre-stage servers before shipping them to the remote site for ZTP provisioning. The factory-precaching-cli tool does the following: Downloads the RHCOS rootfs image that is required by the minimal ISO to boot. Creates a partition from the installation disk labelled as data . Formats the disk in xfs. Creates a GUID Partition Table (GPT) data partition at the end of the disk, where the size of the partition is configurable by the tool. Copies the container images required to install OpenShift Container Platform. Copies the container images required by ZTP to install OpenShift Container Platform. Optional: Copies Day-2 Operators to the partition. Important The factory-precaching-cli tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 22.14.1. Getting the factory-precaching-cli tool The factory-precaching-cli tool Go binary is publicly available in the Telco RAN distributed unit (DU) tools container image . The factory-precaching-cli tool Go binary in the container image is executed on the server running an RHCOS live image using podman . If you are working in a disconnected environment or have a private registry, you need to copy the image there so you can download the image to the server. Procedure Pull the factory-precaching-cli tool image by running the following command: # podman pull quay.io/openshift-kni/telco-ran-tools:latest Verification To check that the tool is available, query the current version of the factory-precaching-cli tool Go binary: # podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v Example output factory-precaching-cli version 20221018.120852+main.feecf17 22.14.2. Booting from a live operating system image You can use the factory-precaching-cli tool to boot servers where only one disk is available and an external disk drive cannot be attached to the server.
Warning RHCOS requires the disk to not be in use when the disk is about to be written with an RHCOS image. Depending on the server hardware, you can mount the RHCOS live ISO on the blank server using one of the following methods: Using the Dell RACADM tool on a Dell server. Using the HPONCFG tool on a HP server. Using the Redfish BMC API. Note It is recommended to automate the mounting procedure. To automate the procedure, you need to pull the required images and host them on a local HTTP server. Prerequisites You powered up the host. You have network connectivity to the host. Procedure This example procedure uses the Redfish BMC API to mount the RHCOS live ISO. Mount the RHCOS live ISO: Check virtual media status: USD curl --globoff -H "Content-Type: application/json" -H \ "Accept: application/json" -k -X GET --user USD{username_password} \ https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool Mount the ISO file as a virtual media: USD curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku USD{username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Image": "http://[USDHTTPd_IP]/RHCOS-live.iso"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia Set the boot order to boot from the virtual media once: USD curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku USD{username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Boot":{ "BootSourceOverrideEnabled": "Once", "BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self Reboot and ensure that the server is booting from virtual media. Additional resources For more information about the butane utility, see About Butane . For more information about creating a custom live RHCOS ISO, see Creating a custom live RHCOS ISO for remote server access . For more information about using the Dell RACADM tool, see Integrated Dell Remote Access Controller 9 RACADM CLI Guide . For more information about using the HP HPONCFG tool, see Using HPONCFG . For more information about using the Redfish BMC API, see Booting from an HTTP-hosted ISO image using the Redfish API . 22.14.3. Partitioning the disk To run the full pre-caching process, you have to boot from a live ISO and use the factory-precaching-cli tool from a container image to partition and pre-cache all the artifacts required. A live ISO or RHCOS live ISO is required because the disk must not be in use when the operating system (RHCOS) is written to the device during the provisioning. Single-disk servers can also be enabled with this procedure. Prerequisites You have a disk that is not partitioned. You have access to the quay.io/openshift-kni/telco-ran-tools:latest image. You have enough storage to install OpenShift Container Platform and pre-cache the required images. 
Procedure Verify that the disk is cleared: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk Erase any file system, RAID or partition table signatures from the device: # wipefs -a /dev/nvme0n1 Example output /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa Important The tool fails if the disk is not empty because it uses partition number 1 of the device for pre-caching the artifacts. 22.14.3.1. Creating the partition Once the device is ready, you create a single partition and a GPT partition table. The partition is automatically labelled as data and created at the end of the device. Otherwise, the partition will be overridden by the coreos-installer . Important The coreos-installer requires the partition to be created at the end of the device and to be labelled as data . Both requirements are necessary to save the partition when writing the RHCOS image to the disk. Prerequisites The container must run as privileged due to formatting host devices. You have to mount the /dev folder so that the process can be executed inside the container. Procedure In the following example, the size of the partition is 250 GiB due to allow pre-caching the DU profile for Day 2 Operators. Run the container as privileged and partition the disk: # podman run -v /dev:/dev --privileged \ --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli partition \ 1 -d /dev/nvme0n1 \ 2 -s 250 3 1 Specifies the partitioning function of the factory-precaching-cli tool. 2 Defines the root directory on the disk. 3 Defines the size of the disk in GB. Check the storage information: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part Verification You must verify that the following requirements are met: The device has a GPT partition table The partition uses the latest sectors of the device. The partition is correctly labeled as data . Query the disk status to verify that the disk is partitioned as expected: # gdisk -l /dev/nvme0n1 Example output GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data 22.14.3.2. Mounting the partition After verifying that the disk is partitioned correctly, you can mount the device into /mnt . Important It is recommended to mount the device into /mnt because that mounting point is used during GitOps ZTP preparation. 
Verify that the partition is formatted as xfs : # lsblk -f /dev/nvme0n1 Example output NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071 Mount the partition: # mount /dev/nvme0n1p1 /mnt/ Verification Check that the partition is mounted: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1 1 The mount point is /var/mnt because the /mnt folder in RHCOS is a link to /var/mnt . 22.14.4. Downloading the images The factory-precaching-cli tool allows you to download the following images to your partitioned server: OpenShift Container Platform images Operator images that are included in the distributed unit (DU) profile for 5G RAN sites Operator images from disconnected registries Note The list of available Operator images can vary in different OpenShift Container Platform releases. 22.14.4.1. Downloading with parallel workers The factory-precaching-cli tool uses parallel workers to download multiple images simultaneously. You can configure the number of workers with the --parallel or -p option. The default number is set to 80% of the available CPUs to the server. Note Your login shell may be restricted to a subset of CPUs, which reduces the CPUs available to the container. To remove this restriction, you can precede your commands with taskset 0xffffffff , for example: # taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help 22.14.4.2. Preparing to download the OpenShift Container Platform images To download OpenShift Container Platform container images, you need to know the multicluster engine version. When you use the --du-profile flag, you also need to specify the Red Hat Advanced Cluster Management (RHACM) version running in the hub cluster that is going to provision the single-node OpenShift. Prerequisites You have RHACM and the multicluster engine Operator installed. You partitioned the storage device. You have enough space for the images on the partitioned device. You connected the bare-metal server to the Internet. You have a valid pull secret. 
Procedure Check the RHACM version and the multicluster engine version by running the following commands in the hub cluster: USD oc get csv -A | grep -i advanced-cluster-management Example output open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded USD oc get csv -A | grep -i multicluster-engine Example output multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded To access the container registry, copy a valid pull secret on the server to be installed: Create the .docker folder: USD mkdir /root/.docker Copy the valid pull in the config.json file to the previously created .docker/ folder: USD cp config.json /root/.docker/config.json 1 1 /root/.docker/config.json is the default path where podman checks for the login credentials for the registry. Note If you use a different registry to pull the required artifacts, you need to copy the proper pull secret. If the local registry uses TLS, you need to include the certificates from the registry as well. 22.14.4.3. Downloading the OpenShift Container Platform images The factory-precaching-cli tool allows you to pre-cache all the container images required to provision a specific OpenShift Container Platform release. Procedure Pre-cache the release by running the following command: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- \ factory-precaching-cli download \ 1 -r 4.14.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. Example output Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf ... Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f ... 
Summary: Release: 4.14.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83 Verification Check that all the images are compressed in the target folder of server: USD ls -l /mnt 1 1 It is recommended that you pre-cache the images in the /mnt folder. Example output -rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz 22.14.4.4. Downloading the Operator images You can also pre-cache Day-2 Operators used in the 5G Radio Access Network (RAN) Distributed Unit (DU) cluster configuration. The Day-2 Operators depend on the installed OpenShift Container Platform version. Important You need to include the RHACM hub and multicluster engine Operator versions by using the --acm-version and --mce-version flags so the factory-precaching-cli tool can pre-cache the appropriate containers images for RHACM and the multicluster engine Operator. 
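If you need to confirm which versions to pass to these flags, you can reuse the hub cluster queries from "Preparing to download the OpenShift Container Platform images": USD oc get csv -A | grep -i advanced-cluster-management USD oc get csv -A | grep -i multicluster-engine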
Procedure Pre-cache the Operator images: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ 1 -r 4.14.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s 7 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. Example output Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 ... Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 ... Summary: Release: 4.14.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83 22.14.4.5. Pre-caching custom images in disconnected environments The --generate-imageset argument stops the factory-precaching-cli tool after the ImageSetConfiguration custom resource (CR) is generated. This allows you to customize the ImageSetConfiguration CR before downloading any images. After you customized the CR, you can use the --skip-imageset argument to download the images that you specified in the ImageSetConfiguration CR. You can customize the ImageSetConfiguration CR in the following ways: Add Operators and additional images Remove Operators and additional images Change Operator and catalog sources to local or disconnected registries Procedure Pre-cache the images: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ 1 -r 4.14.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s \ 7 --generate-imageset 8 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. 8 The --generate-imageset argument generates the ImageSetConfiguration CR only, which allows you to customize the CR. 
Example output Generated /mnt/imageset.yaml Example ImageSetConfiguration CR apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.14 minVersion: 4.14.0 1 maxVersion: 4.14.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.14' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.14 packages: - name: sriov-fec 11 channels: - name: 'stable' 1 The platform versions match the versions passed to the tool. 2 3 The versions of RHACM and the multicluster engine Operator match the versions passed to the tool. 4 5 6 7 8 9 10 11 The CR contains all the specified DU Operators. Customize the catalog resource in the CR: apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.14 packages: - name: sriov-fec channels: - name: 'stable' When you download images by using a local or disconnected registry, you have to first add certificates for the registries that you want to pull the content from. To avoid any errors, copy the registry certificate into your server: # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/. Then, update the certificates trust store: # update-ca-trust Mount the host /etc/pki folder into the factory-cli image: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download \ 1 -r 4.14.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s \ 7 --skip-imageset 8 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. 8 The --skip-imageset argument allows you to download the images that you specified in your customized ImageSetConfiguration CR. Download the images without generating a new imageSetConfiguration CR: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.14.0 \ --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt \ --img quay.io/custom/repository \ --du-profile -s \ --skip-imageset Additional resources To access the online Red Hat registries, see OpenShift installation customization tools . 
For more information about using the multicluster engine, see About cluster lifecycle with the multicluster engine operator . 22.14.5. Pre-caching images in GitOps ZTP The SiteConfig manifest defines how an OpenShift cluster is to be installed and configured. In the GitOps Zero Touch Provisioning (ZTP) provisioning workflow, the factory-precaching-cli tool requires the following additional fields in the SiteConfig manifest: clusters.ignitionConfigOverride nodes.installerArgs nodes.ignitionConfigOverride Example SiteConfig with additional fields apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-5g-lab" namespace: "example-5g-lab" spec: baseDomain: "example.domain.redhat.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "img4.9.10-x86-64-appsub" 1 sshPublicKey: "ssh-rsa ..." clusters: - clusterName: "sno-worker-0" clusterImageSetNameRef: "eko4-img4.11.5-x86-64-appsub" 2 clusterLabels: group-du-sno: "" common-411: true sites : "example-5g-lab" vendor: "OpenShift" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: "OVNKubernetes" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ "ignition": { "version": "3.1.0" }, "systemd": { "units": [ { "name": "var-mnt.mount", "enabled": true, "contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-images.service\nBindsTo=precache-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-images.service" }, { "name": "precache-images.service", "enabled": true, "contents": "[Unit]\nDescription=Extracts the precached images in discovery stage\nAfter=var-mnt.mount\nBefore=agent.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ai.sh\n#TimeoutStopSec=30\n\n[Install]\nWantedBy=multi-user.target default.target\nWantedBy=agent.service" } ] }, "storage": { "files": [ { "overwrite": true, "path": "/usr/local/bin/extract-ai.sh", "mode": 755, "user": { "name": "root" }, "contents": { "source": 
"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200" } }, { "overwrite": true, "path": "/usr/local/bin/agent-fix-bz1964591", "mode": 755, "user": { "name": "root" }, "contents": { "source": "data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true" } } ] } }' nodes: - hostName: "snonode.sno-worker-0.example.domain.redhat.com" role: "master" bmcAddress: "idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "worker0-bmh-secret" bootMACAddress: "e4:43:4b:bd:90:46" bootMode: "UEFI" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '["--save-partlabel", "data"]' ignitionConfigOverride: | { "ignition": { "version": "3.1.0" }, "systemd": { 
"units": [ { "name": "var-mnt.mount", "enabled": true, "contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-ocp-images.service\nBindsTo=precache-ocp-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-ocp-images.service" }, { "name": "precache-ocp-images.service", "enabled": true, "contents": "[Unit]\nDescription=Extracts the precached OCP images into containers storage\nAfter=var-mnt.mount\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ocp.sh\nTimeoutStopSec=60\n\n[Install]\nWantedBy=multi-user.target" } ] }, "storage": { "files": [ { "overwrite": true, "path": "/usr/local/bin/extract-ocp.sh", "mode": 755, "user": { "name": "root" }, "contents": { "source": "data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: "AA:BB:CC:11:22:33" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: "ens1f0" macAddress: "AA:BB:CC:11:22:33" 1 Specifies the cluster image set used for deployment, unless you specify a different image set in the spec.clusters.clusterImageSetNameRef field. 2 Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the spec.clusterImageSetNameRef at the site level. 22.14.5.1. Understanding the clusters.ignitionConfigOverride field The clusters.ignitionConfigOverride field adds a configuration in Ignition format during the GitOps ZTP discovery stage. The configuration includes systemd services in the ISO mounted in virtual media. This way, the scripts are part of the discovery RHCOS live ISO and they can be used to load the Assisted Installer (AI) images. 
systemd services The systemd services are var-mnt.mount and precache-images.service . The precache-images.service depends on the disk partition to be mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ai.sh . extract-ai.sh The extract-ai.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally. agent-fix-bz1964591 The agent-fix-bz1964591 script is a workaround for an AI issue. To prevent AI from removing the images, which can force the agent.service to pull the images again from the registry, the agent-fix-bz1964591 script checks if the requested container images exist. 22.14.5.2. Understanding the nodes.installerArgs field The nodes.installerArgs field allows you to configure how the coreos-installer utility writes the RHCOS live ISO to disk. You must indicate that the disk partition labeled data is saved, because the artifacts stored in that partition are needed during the OpenShift Container Platform installation stage. The extra parameters are passed directly to the coreos-installer utility that writes the live RHCOS to disk. On the next reboot, the operating system starts from the disk. You can pass several options to the coreos-installer utility: OPTIONS: ... -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL ... --save-partlabel <lx>... Save partitions with this label glob --save-partindex <id>... Save partitions with this number or range ... --insecure-ignition Allow Ignition URL without HTTPS or hash 22.14.5.3. Understanding the nodes.ignitionConfigOverride field Similarly to clusters.ignitionConfigOverride , the nodes.ignitionConfigOverride field allows the addition of configurations in Ignition format to the coreos-installer utility, but at the OpenShift Container Platform installation stage. When the RHCOS is written to disk, the extra configuration included in the GitOps ZTP discovery ISO is no longer available. During the discovery stage, the extra configuration is stored in the memory of the live OS. Note At this stage, the number of container images extracted and loaded is larger than in the discovery stage. Depending on the OpenShift Container Platform release and whether you install the Day-2 Operators, the installation time can vary. At the installation stage, the var-mnt.mount and precache-ocp.service systemd services are used. precache-ocp.service The precache-ocp.service depends on the disk partition to be mounted in /var/mnt by the var-mnt.mount unit. The precache-ocp.service calls a script called extract-ocp.sh . Important To extract all the images before the OpenShift Container Platform installation, you must execute precache-ocp.service before executing the machine-config-daemon-pull.service and nodeip-configuration.service services. extract-ocp.sh The extract-ocp.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally. When you upload the SiteConfig and the optional PolicyGenTemplates custom resources (CRs) to the Git repo, which Argo CD is monitoring, you can start the GitOps ZTP workflow by syncing the CRs with the hub cluster.
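After the GitOps ZTP workflow installs the cluster, you can check that the pre-cached images were loaded into the container storage of the node. The following is a minimal sketch that you can run from a host with access to the deployed cluster, where <node_name> is a placeholder for the installed node:
USD oc debug node/<node_name> -- chroot /host crictl images
If pre-caching worked as expected, the release and Operator images are already listed on the node.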
22.14.6. Troubleshooting 22.14.6.1. Rendered catalog is invalid When you download images by using a local or disconnected registry, you might see the following error: The rendered catalog is invalid . This means that you are missing the certificates of the new registry that you want to pull content from. Note The factory-precaching-cli tool image is built on a UBI RHEL image. Certificate paths and locations are the same on RHCOS. Example error Generating list of pre-cached artifacts... error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror --ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying host error=failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run "oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME" for more information. error: error rendering new refs: render reference "eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11": error resolving name : failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority Procedure Copy the registry certificate into your server: # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/. Update the certificates trust store: # update-ca-trust Mount the host /etc/pki folder into the factory-cli image: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download -r 4.14.0 --acm-version 2.5.4 \ --mce-version 2.0.4 -f /mnt \ --img quay.io/custom/repository --du-profile -s --skip-imageset | [
"export ISO_IMAGE_NAME=<iso_image_name> 1",
"export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1",
"export OCP_VERSION=<ocp_version> 1",
"sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}",
"sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}",
"wget http://USD(hostname)/USD{ISO_IMAGE_NAME}",
"Saving to: rhcos-4.14.1-x86_64-live.x86_64.iso rhcos-4.14.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s",
"oc edit AgentServiceConfig",
"- cpuArchitecture: x86_64 openshiftVersion: \"4.14\" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso",
"apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: | 2 -----BEGIN CERTIFICATE----- <certificate_contents> -----END CERTIFICATE----- registries.conf: | 3 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/example-repository\" 4 mirror-by-digest-only = true [[registry.mirror]] location = \"mirror1.registry.corp.com:5000/example-repository\" 5",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> 3 url: <iso_url> 4",
"oc edit AgentServiceConfig agent",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com",
"oc debug node/<node_name>",
"sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry>",
"Login Succeeded!",
"{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }",
"oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json",
"oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json",
"oc apply -k out/argocd/deployment",
"oc -n openshift-gitops get applications.argoproj.io clusters -o jsonpath='{.spec.syncPolicy.syncOptions}' |jq",
"[ \"CreateNamespace=true\", \"PrunePropagationPolicy=background\", \"RespectIgnoreDifferences=true\" ]",
"kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background",
"podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14",
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./out",
"example/ ├── policygentemplates │ ├── kustomization.yaml │ └── source-crs/ └── siteconfig ├── extra-manifests └── kustomization.yaml",
"example/ ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ ├── source-crs/ │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── extra-manifests/ 1 ├── custom-manifests/ 2 ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml",
"├── policygentemplates │ ├── kustomization.yaml 1 │ ├── version_4.13 2 │ │ ├── common-ranGen.yaml │ │ ├── group-du-sno-ranGen.yaml │ │ ├── group-du-sno-validator-ranGen.yaml │ │ ├── helix56-v413.yaml │ │ ├── kustomization.yaml 3 │ │ ├── ns.yaml │ │ └── source-crs/ 4 │ │ └── reference-crs/ 5 │ │ └── custom-crs/ 6 │ └── version_4.14 7 │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── helix56-v414.yaml │ ├── kustomization.yaml 8 │ ├── ns.yaml │ └── source-crs/ 9 │ └── reference-crs/ 10 │ └── custom-crs/ 11 └── siteconfig ├── kustomization.yaml ├── version_4.13 │ ├── helix56-v413.yaml │ ├── kustomization.yaml │ ├── extra-manifest/ 12 │ └── custom-manifest/ 13 └── version_4.14 ├── helix57-v414.yaml ├── kustomization.yaml ├── extra-manifest/ 14 └── custom-manifest/ 15",
"extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2",
"resources: - version_4.13 1 #- version_4.14 2",
"mkdir -p ./update",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./update",
"oc get managedcluster -l 'local-cluster!=true'",
"oc label managedcluster -l 'local-cluster!=true' ztp-done=",
"oc delete -f update/argocd/deployment/clusters-app.yaml",
"oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge",
"oc delete -f update/argocd/deployment/policies-app.yaml",
"├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml",
"{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }",
"oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json",
"oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json",
"oc apply -k out/argocd/deployment",
"grep -r \"ztp-deploy-wave\" out/source-crs",
"apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson",
"apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: \"1\" name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" spec: clusterRef: name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: \"{{ .Site.SshPublicKey }}\" proxy: \"{{ .Cluster.ProxySettings }}\" pullSecretRef: name: \"{{ .Site.PullSecretRef.Name }}\" ignitionConfigOverride: \"{{ .Cluster.IgnitionConfigOverride }}\" nmStateConfigLabelSelector: matchLabels: nmstate-label: \"{{ .Cluster.ClusterName }}\" additionalNTPSources: \"{{ .Cluster.AdditionalNTPSources }}\"",
"~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml",
"clusters: crTemplates: InfraEnv: \"InfraEnv-example.yaml\"",
"ssh -i /path/to/privatekey core@<host_name>",
"cat /proc/cmdline",
"export CLUSTERNS=example-sno",
"oc create namespace USDCLUSTERNS",
"example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254",
"oc describe node example-node.example.com",
"Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos",
"export CLUSTER=<clusterName>",
"oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq",
"curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'",
"oc get AgentClusterInstall -n <cluster_name>",
"oc get managedcluster",
"oc get applications.argoproj.io -n openshift-gitops clusters -o yaml",
"syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. Make sure the \"SiteConfig\" CRD is installed on the destination cluster",
"siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory",
"Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown",
"oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"disableVirtualMediaTLS\": true}}'",
"kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background",
"oc delete policy -n <namespace> <policy_name>",
"oc delete -k out/argocd/deployment",
"--- apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"common\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" 1 sourceFiles: 2 - fileName: SriovSubscription.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: SriovOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: PtpOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogNS.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogSubscription.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: StorageNS.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: StorageSubscription.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ReduceMonitoringFootprint.yaml policyName: \"config-policy\" - fileName: OperatorHub.yaml 3 policyName: \"config-policy\" - fileName: DefaultCatsrc.yaml 4 policyName: \"config-policy\" 5 metadata: name: redhat-operators spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: \"config-policy\" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno\" namespace: \"ztp-group\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -n 24\"",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..",
"export CLUSTER=<clusterName>",
"oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq",
"{ \"lastTransitionTime\": \"2022-11-09T07:28:09Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" }",
"oc get policies -n USDCLUSTER",
"NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m",
"export NS=<namespace>",
"oc get policy -n USDNS",
"oc describe -n openshift-gitops application policies",
"Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError",
"Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error",
"oc get policy -n USDCLUSTER",
"NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d Ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d",
"oc get placementrule -n USDNS",
"oc get placementrule -n USDNS <placementRuleName> -o yaml",
"oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq",
"oc get policy -n USDCLUSTER",
"export CLUSTER=<clusterName>",
"oc get clustergroupupgrades -n ztp-install USDCLUSTER",
"oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Ready\")]}'",
"oc delete clustergroupupgrades -n ztp-install USDCLUSTER",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true",
"- fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction:",
"oc create -f cgu-remove.yaml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove --patch '{\"spec\":{\"enable\":true}}' --type=merge",
"oc get <kind> <changed_cr_name>",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h",
"oc get <kind> <changed_cr_name>",
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./out",
"out └── argocd └── example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml",
"mkdir -p ./site-install",
"example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254",
"podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator install site-1-sno.yaml /output",
"site-install └── site-1-sno ├── site-1_agentclusterinstall_example-sno.yaml ├── site-1-sno_baremetalhost_example-node1.example.com.yaml ├── site-1-sno_clusterdeployment_example-sno.yaml ├── site-1-sno_configmap_example-sno.yaml ├── site-1-sno_infraenv_example-sno.yaml ├── site-1-sno_klusterletaddonconfig_example-sno.yaml ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml ├── site-1-sno_managedcluster_example-sno.yaml ├── site-1-sno_namespace_example-sno.yaml └── site-1-sno_nmstateconfig_example-node1.example.com.yaml",
"mkdir -p ./site-machineconfig",
"podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator install -E site-1-sno.yaml /output",
"site-machineconfig └── site-1-sno ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml",
"mkdir -p ./ref",
"podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 generator config -N . /output",
"ref └── customResource ├── common ├── example-multinode-site ├── example-sno ├── group-du-3node ├── group-du-3node-validator │ └── Multiple-validatorCRs ├── group-du-sno ├── group-du-sno-validator ├── group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs",
"oc describe node example-node.example.com",
"Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos",
"apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson",
"apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 clusterRef: name: <cluster_name> namespace: <cluster_name> pullSecretRef: name: pull-secret",
"ssh -i /path/to/privatekey core@<host_name>",
"cat /proc/cmdline",
"apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.14.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64 2",
"oc apply -f clusterImageSet-4.14.yaml",
"apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2",
"oc apply -f cluster-namespace.yaml",
"oc apply -R ./site-install/site-sno-1",
"oc get managedcluster",
"oc get agent -n <cluster_name>",
"oc describe agent -n <cluster_name>",
"oc get agentclusterinstall -n <cluster_name>",
"oc describe agentclusterinstall -n <cluster_name>",
"oc get managedclusteraddon -n <cluster_name>",
"oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig",
"oc get managedcluster",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h",
"oc get clusterdeployment -n <cluster_name>",
"NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h",
"oc describe agentclusterinstall -n <cluster_name> <cluster_name>",
"oc delete managedcluster <cluster_name>",
"oc delete namespace <cluster_name>",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"<site_name>\" namespace: \"<site_name>\" spec: baseDomain: \"example.com\" cpuPartitioningMode: AllNodes 1",
"oc debug node/example-sno-1",
"sh-4.4# pgrep ovn | while read i; do taskset -cp USDi; done",
"pid 8481's current affinity list: 0-1,52-53 pid 8726's current affinity list: 0-1,52-53 pid 9088's current affinity list: 0-1,52-53 pid 9945's current affinity list: 0-1,52-53 pid 10387's current affinity list: 0-1,52-53 pid 12123's current affinity list: 0-1,52-53 pid 13313's current affinity list: 0-1,52-53",
"sh-4.4# pgrep systemd | while read i; do taskset -cp USDi; done",
"pid 1's current affinity list: 0-1,52-53 pid 938's current affinity list: 0-1,52-53 pid 962's current affinity list: 0-1,52-53 pid 1197's current affinity list: 0-1,52-53",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 
30-kubelet-interval-tuning.conf name: kubelet.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgK
ChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: \"stable\" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: \"stable\" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: \"Managed\" collection: logs: type: \"vector\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -O -37 -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #GNSS module s /dev/ttyGNSS* -al use _0 #cat /dev/ttyGNSS_1700_0 to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi 
sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"USDmcp\" priority: 19 profile: performance-patch",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: true enableOperatorWebhook: true logLevel: 0",
"containers: - name: my-sriov-workload-container resources: limits: openshift.io/<resource_name>: \"1\" requests: openshift.io/<resource_name>: \"1\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: \"\" networkNamespace: openshift-sriov-network-operator vlan: \"\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt",
"installConfigOverrides: \"{\\\"capabilities\\\":{\\\"baselineCapabilitySet\\\": \\\"None\\\" }}\"",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | grafana: enabled: false alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h",
"apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager data: pprof-config.yaml: | disabled: True",
"apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: odf-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 deviceSelector: paths: - /usr/disk/by-path/pci-0000:11:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 overprovisionRatio: 10 sizePercent: 90",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true",
"spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"module_blacklist=irdma\"",
"spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name, for example: # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from the [sysctl] section data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable",
"OCP_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}')",
"DTK_IMAGE=USD(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:USDOCP_VERSION-x86_64)",
"podman run --rm USDDTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##'",
"4.18.0-305.49.1.rt7.121.el8_4.x86_64",
"oc debug node/<node_name>",
"sh-4.4# uname -r",
"4.18.0-305.49.1.rt7.121.el8_4.x86_64",
"oc get operatorhub cluster -o yaml",
"spec: disableAllDefaultSources: true",
"oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.target\\.workload\\.openshift\\.io/management}{\"\\n\"}{end}'",
"certified-operators -- {\"effect\": \"PreferredDuringScheduling\"} community-operators -- {\"effect\": \"PreferredDuringScheduling\"} ran-operators 1 redhat-marketplace -- {\"effect\": \"PreferredDuringScheduling\"} redhat-operators -- {\"effect\": \"PreferredDuringScheduling\"}",
"oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.workload\\.openshift\\.io/allowed}{\"\\n\"}{end}'",
"default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- management",
"oc get -n openshift-logging ClusterLogForwarder instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: \"2022-07-19T21:51:41Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"1030342\" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open",
"oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: \"2022-07-07T18:22:56Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"235796\" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed",
"oc get consoles.operator.openshift.io cluster -o jsonpath=\"{ .spec.managementState }\"",
"Removed",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# systemctl status chronyd",
"● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5)",
"PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name)",
"oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'",
"sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2",
"oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP'",
"sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00",
"oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container",
"phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533",
"oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath=\"{.spec.disableDrain}{'\\n'}\"",
"true",
"oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath=\"{.items[*].status.syncStatus}{'\\n'}\"",
"Succeeded",
"oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml",
"apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState status: interfaces: - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: \"8086\" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: \"8086\" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: \"8086\" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: \"8086\" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: \"8086\" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: \"8086\" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: \"8086\" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: \"8086\" vfID: 7",
"oc get PerformanceProfile openshift-node-performance-profile -o yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: \"2022-07-19T21:51:31Z\" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: \"33558\" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Available - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Upgradeable - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Progressing - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile",
"oc get performanceprofile openshift-node-performance-profile -o jsonpath=\"{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\\n'}{end}\"",
"Available -- True Upgradeable -- True Progressing -- False Degraded -- False",
"oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: \"2022-07-18T10:33:52Z\" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: \"34024\" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch",
"oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}'",
"true",
"oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION",
"Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\"",
"oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath=\"{ .data.config\\.yaml }\"",
"grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h",
"oc get route -n openshift-monitoring alertmanager-main",
"oc get route -n openshift-monitoring grafana",
"oc get performanceprofile -o jsonpath=\"{ .items[0].spec.cpu.reserved }\"",
"0-3",
"siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml ├── extra-manifest/ └── custom-manifest └── 01-example-machine-config.yaml",
"clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site1-sno-du\" namespace: \"site1-sno-du\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.14\" sshPublicKey: \"<ssh_public_key>\" clusters: - clusterName: \"site1-sno-du\" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml",
"- clusterName: \"site1-sno-du\" extraManifests: filter: inclusionDefault: exclude",
"clusters: - clusterName: \"site1-sno-du\" extraManifestPath: \"<custom_manifest_folder>\" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml",
"siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"cnfdf20\" namespace: \"cnfdf20\" spec: clusters: nodes: - hostname: node6 role: \"worker\" crAnnotations: add: BareMetalHost: bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true",
"get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations[\"bmac.agent-install.openshift.io/remove-agent-and-node-on-delete\"]'",
"true",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"cnfdf20\" namespace: \"cnfdf20\" spec: clusters: - nodes: - hostName: node6 role: \"worker\" crSuppression: - BareMetalHost",
"oc get bmh -n <cluster-ns>",
"oc get agent -n <cluster-ns>",
"oc get nodes",
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14.1 extract /home/ztp --tar | tar x -C ./out",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false",
"--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true",
"spec: bindingRules: group-du-standard: \"\" mcp: \"worker\"",
"example └── policygentemplates ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs 1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-dev\" namespace: \"ztp-clusters\" spec: bindingRules: dev: \"true\" mcp: \"master\" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: \"group-dev-cluster-log-ns\" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: \"group-dev-cluster-log-operator-group\" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: \"group-dev-cluster-log-sub\" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: \"group-dev-lso-ns\" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: \"group-dev-lso-operator-group\" - fileName: StorageSubscription.yaml remediationAction: inform policyName: \"group-dev-lso-sub\" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: \"group-dev-pao-ns\" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: \"group-dev-pao-cat-source\" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: \"group-dev-pao-sub\" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: \"group-dev-elasticsearch-ns\" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: \"group-dev-elasticsearch-operator-group\" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: \"group-dev-apiserver-config\" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: \"group-dev-disable-nic-lldp\"",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f cgu-test.yaml",
"oc get cgu -A",
"NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies",
"spec: evaluationInterval: compliant: 30m noncompliant: 20s",
"spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: \"sriov-sub-policy\" evaluationInterval: compliant: never noncompliant: 10s",
"oc get pods -n open-cluster-management-agent-addon",
"NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d",
"oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb",
"2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-config-policy-config\"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-common-compute-1-catalog-policy-config\"}",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno-validator\" 1 namespace: \"ztp-group\" 2 spec: bindingRules: group-du-sno: \"\" 3 bindingExcludedRules: ztp-done: \"\" 4 mcp: \"master\" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: \"du-policy\" 7",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true [...] additionalKernelArgs: - [...] - \"cpufreq.default_governor=schedutil\" 1",
"oc get nodes",
"oc debug node/<node-name>",
"chroot /host",
"cat /proc/cmdline",
"- fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" spec: profile: - name: performance-patch data: | [...] [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1",
"- fileName: StorageLVMOSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMOSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMOSubscription.yaml spec: name: lvms-operator channel: stable-4.14 policyName: subscription-policies",
"- fileName: StorageLVMSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMSubscription.yaml policyName: subscription-policies",
"- fileName: StorageLVMCluster.yaml policyName: \"lvms-config\" 1 spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10",
"- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043",
"- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs",
"#AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\"",
"- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: \"amqp://amq-router.amq-router.svc.cluster.local\"",
"- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs",
"- fileName: AmqInstance.yaml policyName: \"config-policy\"",
"Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"",
"- fileName: HardwareEvent.yaml 1 policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043\" logLevel: \"info\"",
"oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"",
"AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\" Bare Metal Event Rely operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"",
"- fileName: AmqInstance.yaml policyName: \"config-policy\"",
"- fileName: HardwareEvent.yaml policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local\" 1 logLevel: \"info\"",
"oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"",
"variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota",
"butane storage.bu",
"{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}",
"[...] spec: clusters: - nodes: - ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } [...]",
"oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]",
"\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"",
"oc debug node/my-sno-node",
"chroot /host",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers",
"df -h",
"Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000",
"sourceFiles: # storage class - fileName: StorageClass.yaml policyName: \"sc-for-image-registry\" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: \"100\" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: \"pvc-for-image-registry\" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: \"pv-for-image-registry\" metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" - fileName: ImageRegistryConfig.yaml policyName: \"config-for-image-registry\" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: storage: pvc: claim: \"image-registry-pvc\"",
"cluster=<managed_cluster_name>",
"oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster",
"oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster",
"oc get image.config.openshift.io cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2021-10-08T19:02:39Z\" generation: 5 name: cluster resourceVersion: \"688678648\" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice",
"oc get pv image-registry-sc",
"oc get pods -n openshift-image-registry | grep registry*",
"cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d",
"oc debug node/sno-1.example.com",
"sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom",
"argocd.argoproj.io/sync-options: Replace=true",
"{{hub fromConfigMap \"default\" \"test-config\" \"common-key\" hub}}",
"{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) hub}}",
"{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) | toBool hub}}",
"{{hub (printf \"%s-name\" .ManagedClusterName) | fromConfigMap \"default\" \"test-config\" | toInt hub}}",
"apiVersion: v1 kind: ConfigMap metadata: name: sriovdata namespace: ztp-site annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: example-sno-du_fh-numVfs: \"8\" example-sno-du_fh-pf: ens1f0 example-sno-du_fh-priority: \"10\" example-sno-du_fh-vlan: \"140\" example-sno-du_mh-numVfs: \"8\" example-sno-du_mh-pf: ens3f0 example-sno-du_mh-priority: \"10\" example-sno-du_mh-vlan: \"150\"",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"site\" namespace: \"ztp-site\" spec: remediationAction: inform bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-fh\" spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-vlan\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-fh\" spec: deviceType: netdevice isRdma: true nicSelector: pfNames: - '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-pf\" .ManagedClusterName) | autoindent hub}}' numVfs: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-numVfs\" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-priority\" .ManagedClusterName) | toInt hub}}' resourceName: du_fh - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-vlan\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-mh\" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-pf\" .ManagedClusterName) hub}}' numVfs: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-numVfs\" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-priority\" .ManagedClusterName) | toInt hub}}' resourceName: du_mh",
"apiVersion: v1 kind: ConfigMap metadata: name: site-data namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: site-1-vlan: \"101\" site-2-vlan: \"234\"",
"- fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data\" (printf \"%s-vlan\" .ManagedClusterName) | toInt hub}}'",
"oc delete policy <policy_name> -n <policy_namespace>",
"oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update=\"1\"",
"oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f cgr-example.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: \"stable\" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f talm-subscription.yaml",
"oc get csv -n openshift-operators",
"NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.14.x Topology Aware Lifecycle Manager 4.14.x Succeeded",
"oc get deploy -n openshift-operators",
"NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s",
"spec remediationStrategy: maxConcurrency: 1 timeout: 240",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: 1 addClusterLabels: upgrade-done: \"\" deleteClusterLabels: upgrade-running: \"\" deleteObjects: true beforeEnable: 2 addClusterLabels: upgrade-running: \"\" backup: false clusters: 3 - spoke1 enable: false 4 managedPolicies: 5 - talm-policy preCaching: false remediationStrategy: 6 canaries: 7 - spoke1 maxConcurrency: 2 8 timeout: 240 clusterLabelSelectors: 9 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 10 status: 11 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 12 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 13 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status:",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: \"False\" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: \"True\" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}",
"oc apply -f <name>.yaml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> --type merge -p '{\"spec\":{\"enable\":true}}'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.4.14.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.14 desiredUpdate: version: 4.4.14.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.4.14.4 remediationAction: inform severity: low remediationAction: inform",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5",
"oc create -f cgu-1.yaml",
"oc get cgu --all-namespaces",
"NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled",
"oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq",
"{ \"computedMaxConcurrency\": 2, \"conditions\": [ { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Not enabled\", 1 \"reason\": \"NotEnabled\", \"status\": \"False\", \"type\": \"Progressing\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": {} }",
"oc get policies -A",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 --patch '{\"spec\":{\"enable\":true}}' --type=merge",
"oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq",
"{ \"computedMaxConcurrency\": 2, \"conditions\": [ 1 { \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"All selected clusters are valid\", \"reason\": \"ClusterSelectionCompleted\", \"status\": \"True\", \"type\": \"ClustersSelected\", \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"Completed validation\", \"reason\": \"ValidationCompleted\", \"status\": \"True\", \"type\": \"Validated\", \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": { \"currentBatch\": 1, \"currentBatchStartedAt\": \"2022-02-25T15:54:16Z\", \"remediationPlanForBatch\": { \"spoke1\": 0, \"spoke2\": 1 }, \"startedAt\": \"2022-02-25T15:54:16Z\" } }",
"export KUBECONFIG=<cluster_kubeconfig_absolute_path>",
"oc get subs -A | grep -i <subscription_name>",
"NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.14.5 True True 43s Working towards 4.4.14.7: 71 of 735 done (9% complete)",
"oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath=\"{.status}\"",
"oc get installplan -n <subscription_namespace>",
"NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1",
"oc get csv -n <operator_namespace>",
"NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded",
"nodes: - hostName: \"node-1.example.com\" role: \"master\" rootDeviceHints: hctl: \"0:2:0:0\" deviceName: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 #Disk /dev/disk/by-id/scsi-3600508b400105e210000900000490000: #893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - mount_point: /var/recovery size: 51200 start: 800000",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: true managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f clustergroupupgrades-group-du.yaml",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"{ \"backup\": { \"clusters\": [ \"cnfdb2\", \"cnfdb1\" ], \"status\": { \"cnfdb1\": \"Succeeded\", \"cnfdb2\": \"Failed\" 1 } }, \"computedMaxConcurrency\": 1, \"conditions\": [ { \"lastTransitionTime\": \"2022-04-05T10:37:19Z\", \"message\": \"Backup failed for 1 cluster\", 2 \"reason\": \"PartiallyDone\", 3 \"status\": \"True\", 4 \"type\": \"Succeeded\" } ], \"precaching\": { \"spec\": {} }, \"status\": {}",
"oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno",
"ostree admin status",
"ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9",
"ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca",
"rpm-ostree rollback -r",
"/var/recovery/upgrade-recovery.sh",
"systemctl reboot",
"/var/recovery/upgrade-recovery.sh --resume",
"/var/recovery/upgrade-recovery.sh --restart",
"oc get clusterversion,nodes,clusteroperator",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.4.14.23 True False 86d Cluster version is 4.4.14.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.4.14.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.4.14.23 True False False 86d ...........",
"oc adm release info <ocp-version>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-group-upgrade-overrides data: excludePrecachePatterns: | azure 1 aws vsphere alibaba",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f clustergroupupgrades-group-du.yaml",
"oc get cgu -A",
"NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"{ \"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is required and not done\", \"reason\": \"InProgress\", \"status\": \"False\", \"type\": \"PrecachingSucceeded\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:34Z\", \"message\": \"Pre-caching spec is valid and consistent\", \"reason\": \"PrecacheSpecIsWellFormed\", \"status\": \"True\", \"type\": \"PrecacheSpecValid\" } ], \"precaching\": { \"clusters\": [ \"cnfdb1\" 1 \"cnfdb2\" ], \"spec\": { \"platformImage\": \"image.example.io\"}, \"status\": { \"cnfdb1\": \"Active\" \"cnfdb2\": \"Succeeded\"} } }",
"oc get jobs,pods -n openshift-talo-pre-cache",
"NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"\"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:30:41Z\", \"message\": \"The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies\", \"reason\": \"UpgradeCompleted\", \"status\": \"True\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:28:57Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingSucceeded\" 1 }",
"oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>",
"oc apply -f <ClusterGroupUpgradeCR_YAML>",
"oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'",
"[\"group-du-sno-validator-du-validator-policy\", \"policy2-common-nto-sub-policy\", \"policy3-common-ptp-sub-policy\"]",
"oc get policies --all-namespaces",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h",
"oc get policies --all-namespaces",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h",
"oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h",
"oc get pod -n openshift-operators",
"NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m",
"oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager",
"ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem",
"oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2",
"oc get managedcluster --selector=upgrade=true 1",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h",
"spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true",
"oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'",
"[\"spoke1\", \"spoke3\"]",
"oc get managedcluster --selector=upgrade=true",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h",
"oc get jobs,pods -n openshift-talo-pre-cache",
"oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'",
"{\"maxConcurrency\":2, \"timeout\":240}",
"oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'",
"2",
"oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'",
"{\"lastTransitionTime\":\"2022-02-17T22:25:28Z\", \"message\":\"Missing managed policies:[policyList]\", \"reason\":\"NotAllManagedPoliciesExist\", \"status\":\"False\", \"type\":\"Validated\"}",
"oc get cgu lab-upgrade -oyaml",
"status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default",
"oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'",
"[[\"spoke2\", \"spoke3\"]]",
"oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager",
"ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem",
"oc get pods -n openshift-talo-pre-cache",
"oc logs -n openshift-talo-pre-cache <pod name>",
"oc describe pod -n openshift-talo-pre-cache <pod name>",
"oc describe job -n openshift-talo-pre-cache pre-cache",
"imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"OCP_RELEASE_NUMBER=<release_version>",
"ARCHITECTURE=<cluster_architecture> 1",
"DIGEST=\"USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')\"",
"DIGEST_ALGO=\"USD{DIGEST%%:*}\"",
"DIGEST_ENCODED=\"USD{DIGEST#*:}\"",
"SIGNATURE_BASE64=USD(curl -s \"https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1\" | base64 -w0 && echo)",
"cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF",
"curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.14 -o ~/upgrade-graph_stable-4.14",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: \"platform-upgrade-prep\" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: \"platform-upgrade-prep\" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: \"platform-upgrade\" metadata: name: version spec: channel: \"stable-4.14\" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.14 desiredUpdate: version: 4.14.4 status: history: - version: 4.14.4 state: \"Completed\"",
"oc get policies -A | grep platform-upgrade",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false",
"oc apply -f cgu-platform-upgrade.yml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge",
"oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge",
"oc get policies --all-namespaces",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.14 1 updateStrategy: 2 registryPoll: interval: 1h status: connectionState: lastObservedState: READY 3",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: ... - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"fec-catsrc-policy\" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: \"subscriptions-fec-policy\" spec: channel: \"stable\" source: certified-operators",
"oc get policies -A | grep -E \"catsrc-policy|subscription\"",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1",
"oc apply -f cgu-operator-upgrade-prep.yml",
"oc get policies -A | grep -E \"catsrc-policy\"",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false",
"oc apply -f cgu-operator-upgrade.yml",
"oc get policy common-subscriptions-policy -n <policy_namespace>",
"NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge",
"oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'",
"oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq",
"[ { \"lastTransitionTime\": \"2022-03-08T20:49:08.000Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-03-08T20:55:30.000Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" } ]",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge",
"oc get policies --all-namespaces",
"- fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v{product-version} updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olm/redhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namspace spec: source: redhat-operators-disconnected-v2 1",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true",
"oc apply -f cgu-platform-operator-upgrade-prep.yml",
"oc get policies --all-namespaces",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false",
"oc apply -f cgu-platform-operator-upgrade.yml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge",
"oc get jobs,pods -n openshift-talm-pre-cache",
"oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge",
"oc get policies --all-namespaces",
"- fileName: PaoSubscriptionNS.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave",
"oc get policy -n ztp-common common-subscriptions-policy",
"apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: exampleconfig-ns spec: overrides: 1 platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable spaceRequired: 30 Gi 2 excludePrecachePatterns: 3 - aws - vsphere additionalImages: 4 - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu spec: preCaching: true 1 preCachingConfigRef: name: exampleconfig 2 namespace: exampleconfig-ns 3",
"apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: default 1 spec: [...] spaceRequired: 30Gi 2 additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu namespace: default spec: clusters: - sno1 - sno2 preCaching: true preCachingConfigRef: - name: exampleconfig namespace: default managedPolicies: - du-upgrade-platform-upgrade - du-upgrade-operator-catsrc-policy - common-subscriptions-policy remediationStrategy: timeout: 240",
"oc apply -f cgu.yaml",
"oc get cgu <cgu_name> -n <cgu_namespace> -oyaml",
"precaching: spec: platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: - aws - vsphere additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09 spaceRequired: \"30\" status: sno1: Starting sno2: Starting",
"- lastTransitionTime: \"2023-01-01T00:00:01Z\" message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClusterSelected - lastTransitionTime: \"2023-01-01T00:00:02Z\" message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - lastTransitionTime: \"2023-01-01T00:00:03Z\" message: Precaching spec is valid and consistent reason: PrecacheSpecIsWellFormed status: \"True\" type: PrecacheSpecValid - lastTransitionTime: \"2023-01-01T00:00:04Z\" message: Precaching in progress for 1 clusters reason: InProgress status: \"False\" type: PrecachingSucceeded",
"Type: \"PrecacheSpecValid\" Status: False, Reason: \"PrecacheSpecIncomplete\" Message: \"Precaching spec is incomplete: failed to get PreCachingConfig resource due to PreCachingConfig.ran.openshift.io \"<pre-caching_cr_name>\" not found\"",
"oc get jobs -n openshift-talo-pre-cache",
"NAME COMPLETIONS DURATION AGE pre-cache 0/1 1s 1s",
"oc describe pod pre-cache -n openshift-talo-pre-cache",
"Type Reason Age From Message Normal SuccesfulCreate 19s job-controller Created pod: pre-cache-abcd1",
"oc logs -f pre-cache-abcd1 -n openshift-talo-pre-cache",
"oc describe pod pre-cache -n openshift-talo-pre-cache",
"Type Reason Age From Message Normal SuccesfulCreate 5m19s job-controller Created pod: pre-cache-abcd1 Normal Completed 19s job-controller Job completed",
"oc debug node/cnfdf00.example.lab",
"chroot /host/",
"sudo podman images | grep <operator_name>",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: \"46666836\" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: \"\" 1 deleteClusterLabels: ztp-running: \"\" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: \"\" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240",
"oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq",
"{\"daemonNodeSelector\":{\"node-role.kubernetes.io/master\":\"\"}} 1",
"oc get sriovoperatorconfig/default -n openshift-sriov-network-operator -ojsonpath='{.spec}' | jq",
"{\"configDaemonNodeSelector\":{\"node-role.kubernetes.io/worker\":\"\"},\"disableDrain\":false,\"enableInjector\":true,\"enableOperatorWebhook\":true} 1",
"spec: - fileName: PtpOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" - fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-sno-workers\" namespace: \"example-sno\" spec: bindingRules: sites: \"example-sno\" 1 mcp: \"worker\" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: \"config-policy\" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: \"4-47\" reserved: \"0-3\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker",
"cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF",
"nodes: - hostName: \"example-node2.example.com\" role: \"worker\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node2-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up macAddress: \"AA:BB:CC:DD:EE:11\" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254",
"apiVersion: v1 data: password: \"password\" username: \"username\" kind: Secret metadata: name: \"example-node2-bmh-secret\" namespace: example-sno type: Opaque",
"oc get ppimg -n example-sno",
"NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated",
"oc get bmh -n example-sno",
"NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1",
"oc get agent -n example-sno --watch",
"NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done",
"oc get managedclusterinfo/example-sno -n example-sno -o jsonpath='{range .status.nodeList[*]}{.name}{\"\\t\"}{.conditions}{\"\\t\"}{.labels}{\"\\n\"}{end}'",
"example-sno [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/master\":\"\",\"node-role.kubernetes.io/worker\":\"\"} example-node2 [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/worker\":\"\"}",
"podman pull quay.io/openshift-kni/telco-ran-tools:latest",
"podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v",
"factory-precaching-cli version 20221018.120852+main.feecf17",
"curl --globoff -H \"Content-Type: application/json\" -H \"Accept: application/json\" -k -X GET --user USD{username_password} https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool",
"curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Image\": \"http://[USDHTTPd_IP]/RHCOS-live.iso\"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia",
"curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Boot\":{ \"BootSourceOverrideEnabled\": \"Once\", \"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk",
"wipefs -a /dev/nvme0n1",
"/dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa",
"podman run -v /dev:/dev --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli partition \\ 1 -d /dev/nvme0n1 \\ 2 -s 250 3",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part",
"gdisk -l /dev/nvme0n1",
"GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data",
"lsblk -f /dev/nvme0n1",
"NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071",
"mount /dev/nvme0n1p1 /mnt/",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1",
"taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help",
"oc get csv -A | grep -i advanced-cluster-management",
"open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded",
"oc get csv -A | grep -i multicluster-engine",
"multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded",
"mkdir /root/.docker",
"cp config.json /root/.docker/config.json 1",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- factory-precaching-cli download \\ 1 -r 4.14.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6",
"Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f Summary: Release: 4.14.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83",
"ls -l /mnt 1",
"-rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.14.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s 7",
"Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 Summary: Release: 4.14.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.14.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --generate-imageset 8",
"Generated /mnt/imageset.yaml",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.14 minVersion: 4.14.0 1 maxVersion: 4.14.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.14' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.14 packages: - name: sriov-fec 11 channels: - name: 'stable'",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.14 packages: - name: sriov-fec channels: - name: 'stable'",
"cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.",
"update-ca-trust",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.14.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --skip-imageset 8",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.14.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt --img quay.io/custom/repository --du-profile -s --skip-imageset",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-5g-lab\" namespace: \"example-5g-lab\" spec: baseDomain: \"example.domain.redhat.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"img4.9.10-x86-64-appsub\" 1 sshPublicKey: \"ssh-rsa ...\" clusters: - clusterName: \"sno-worker-0\" clusterImageSetNameRef: \"eko4-img4.11.5-x86-64-appsub\" 2 clusterLabels: group-du-sno: \"\" common-411: true sites : \"example-5g-lab\" vendor: \"OpenShift\" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: \"OVNKubernetes\" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ \"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-images.service\\nBindsTo=precache-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-images.service\" }, { \"name\": \"precache-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached images in discovery stage\\nAfter=var-mnt.mount\\nBefore=agent.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ai.sh\\n#TimeoutStopSec=30\\n\\n[Install]\\nWantedBy=multi-user.target default.target\\nWantedBy=agent.service\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ai.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rh
cos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200\" } }, { \"overwrite\": true, \"path\": \"/usr/local/bin/agent-fix-bz1964591\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true\" } } ] } }' nodes: - hostName: \"snonode.sno-worker-0.example.domain.redhat.com\" role: \"master\" bmcAddress: \"idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"worker0-bmh-secret\" bootMACAddress: \"e4:43:4b:bd:90:46\" bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '[\"--save-partlabel\", \"data\"]' ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-ocp-images.service\\nBindsTo=precache-ocp-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-ocp-images.service\" }, { \"name\": \"precache-ocp-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached OCP images into containers storage\\nAfter=var-mnt.mount\\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ocp.sh\\nTimeoutStopSec=60\\n\\n[Install]\\nWantedBy=multi-user.target\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ocp.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": 
\"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200\" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: \"AA:BB:CC:11:22:33\" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"ens1f0\" macAddress: \"AA:BB:CC:11:22:33\"",
"OPTIONS: -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL --save-partlabel <lx> Save partitions with this label glob --save-partindex <id> Save partitions with this number or range --insecure-ignition Allow Ignition URL without HTTPS or hash",
"Generating list of pre-cached artifacts error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror --ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying next host error=failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run \"oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME\" for more information. error: error rendering new refs: render reference \"eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11\": error resolving name : failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority",
"cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.",
"update-ca-trust",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.14.0 --acm-version 2.5.4 --mce-version 2.0.4 -f /mnt \\--img quay.io/custom/repository --du-profile -s --skip-imageset"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/clusters-at-the-network-far-edge |
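A quick way to confirm that the pre-cached artifacts are intact before starting the cluster installation is to mount the partition and inspect it. This is a minimal sketch, assuming the partition created earlier is labeled data and was populated by the factory-precaching-cli download step; the file patterns are illustrative:

# Mount the pre-cache partition by its label and inspect its contents
mount /dev/disk/by-partlabel/data /var/mnt
ls /var/mnt/*.tgz | wc -l     # number of pre-cached image archives
ls /var/mnt/*.txt             # image list files read by the extraction scripts
umount /var/mnt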
9.7. Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later) | 9.7. Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later) It is possible for a cluster to include resources with dependencies that are not themselves managed by the cluster. In this case, you must ensure that those dependencies are started before Pacemaker is started and stopped after Pacemaker is stopped. As of Red Hat Enterprise Linux 7.4, you can configure your startup order to account for this situation by means of the systemd resource-agents-deps target. You can create a systemd drop-in unit for this target and Pacemaker will order itself appropriately relative to this target. For example, if a cluster includes a resource that depends on the external service foo that is not managed by the cluster, you can create the drop-in unit /etc/systemd/system/resource-agents-deps.target.d/foo.conf that contains the following: After creating a drop-in unit, run the systemctl daemon-reload command. A cluster dependency specified in this way can be something other than a service. For example, you may have a dependency on mounting a file system at /srv , in which case you would create a systemd file srv.mount for it according to the systemd documentation, then create a drop-in unit as described here with srv.mount in the .conf file instead of foo.service to make sure that Pacemaker starts after the disk is mounted. | [
"[Unit] Requires=foo.service After=foo.service"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-nonpacemakerstartup-HAAR |
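The file-system example mentioned above follows the same pattern. A minimal sketch, assuming the mount unit is named srv.mount:

mkdir -p /etc/systemd/system/resource-agents-deps.target.d
cat > /etc/systemd/system/resource-agents-deps.target.d/srv.conf <<'EOF'
[Unit]
Requires=srv.mount
After=srv.mount
EOF
systemctl daemon-reload     # pick up the new drop-in unit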
Chapter 13. Finding more information | Chapter 13. Finding more information The following table includes additional Red Hat documentation for reference: The Red Hat OpenStack Platform documentation suite can be found here: Red Hat OpenStack Platform Documentation Suite Table 13.1. List of Available Documentation Component Reference Red Hat Enterprise Linux Red Hat OpenStack Platform is supported on Red Hat Enterprise Linux 8.0. For information on installing Red Hat Enterprise Linux, see the corresponding installation guide at: Red Hat Enterprise Linux Documentation Suite . Red Hat OpenStack Platform To install OpenStack components and their dependencies, use the Red Hat OpenStack Platform director. The director uses a basic OpenStack installation as the undercloud to install, configure, and manage the OpenStack nodes in the final overcloud. You need one extra host machine for the installation of the undercloud, in addition to the environment necessary for the deployed overcloud. For detailed instructions, see Red Hat OpenStack Platform Director Installation and Usage . For information on configuring advanced features for a Red Hat OpenStack Platform enterprise environment using the Red Hat OpenStack Platform director such as network isolation, storage configuration, SSL communication, and general configuration method, see Advanced Overcloud Customization . NFV Documentation For a high level overview of the NFV concepts, see the Network Functions Virtualization Product Guide . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/network_functions_virtualization_planning_and_configuration_guide/ref-more-information |
Chapter 2. Creating functions | Chapter 2. Creating functions Before you can build and deploy a function, you must create it. You can create functions using the Knative ( kn ) CLI. 2.1. Creating a function by using the Knative CLI You can specify the path, runtime, template, and image registry for a function as flags on the command line, or use the -c flag to start the interactive experience in the terminal. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Create a function project: USD kn func create -r <repository> -l <runtime> -t <template> <path> Accepted runtime values include quarkus , node , typescript , go , python , springboot , and rust . Accepted template values include http and cloudevents . Example command USD kn func create -l typescript -t cloudevents examplefunc Example output Created typescript function in /home/user/demo/examplefunc Alternatively, you can specify a repository that contains a custom template. Example command USD kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc Example output Created node function in /home/user/demo/examplefunc 2.2. Creating a function in the web console You can create a function from a Git repository by using the Developer perspective of the OpenShift Container Platform web console. Prerequisites Before you can create a function by using the web console, a cluster administrator must complete the following steps: Install the OpenShift Serverless Operator and Knative Serving on the cluster. Install the OpenShift Pipelines Operator on the cluster. Create the following pipeline tasks so that they are available for all namespaces on the cluster: func-s2i and func-deploy tasks USD kn func tkn-tasks | oc apply -f - Node.js function USD oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.34/pkg/pipelines/resources/tekton/pipeline/dev-console/0.1/nodejs-pipeline.yaml You must log into the OpenShift Container Platform web console. You must create a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You must create or have access to a Git repository that contains the code for your function. The repository must contain a func.yaml file and use the s2i build strategy. Procedure In the Developer perspective, navigate to +Add Create Serverless function . The Create Serverless function page is displayed. Enter a Git Repo URL that points to the Git repository that contains the code for your function. In the Pipelines section: Select the Build, deploy and configure a Pipeline Repository radio button to create a new pipeline for your function. Select the Use Pipeline from this cluster radio button to connect your function to an existing pipeline in the cluster. Click Create . Verification After you have created a function, you can view it in the Topology view of the Developer perspective. | [
"kn func create -r <repository> -l <runtime> -t <template> <path>",
"kn func create -l typescript -t cloudevents examplefunc",
"Created typescript function in /home/user/demo/examplefunc",
"kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc",
"Created node function in /home/user/demo/examplefunc",
"kn func tkn-tasks | oc apply -f -",
"oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.34/pkg/pipelines/resources/tekton/pipeline/dev-console/0.1/nodejs-pipeline.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/functions/serverless-functions-creating |
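After a function project is created, it is typically built and deployed with the same CLI. The following is only a sketch, assuming you have a container registry you can push images to; the registry path is a placeholder:

kn func create -l node -t http examplefunc
cd examplefunc
kn func deploy --registry quay.io/<your_namespace>    # builds the image and deploys the function as a Knative service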
Release Notes for Node.js 22 | Release Notes for Node.js 22 Red Hat build of Node.js 22 For use with Node.js 22 LTS Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/release_notes_for_node.js_22/index |
7.4. I/O Mode | 7.4. I/O Mode I/O mode options can be configured on a virtual machine during installation with virt-manager or the virt-install command, or on an existing guest by editing the guest XML configuration. Table 7.2. IO mode options I/O Mode Option Description IO=native The default for Red Hat Enterprise Virtualization environments. This mode uses kernel asynchronous I/O with direct I/O options. IO=threads Sets the I/O mode to host user-mode based threads. IO=default Sets the I/O mode to the kernel default. In Red Hat Enterprise Linux 6, the default is IO=threads. In virt-manager , the I/O mode can be specified under Virtual Disk . For information on using virt-manager to change the I/O mode, see Section 3.4, "Virtual Disk Performance Options" . To configure the I/O mode in the guest XML, use virsh edit to edit the io setting inside the driver tag, specifying native , threads , or default . For example, to set the I/O mode to threads : <disk type='file' device='disk'> <driver name='qemu' type='raw' io='threads'/> To configure the I/O mode when installing a guest using virt-install , add the io option to the --disk path parameter. For example, to configure io=threads during guest installation:
"<disk type='file' device='disk'> <driver name='qemu' type='raw' io='threads'/>",
"virt-install --disk path=/storage/images/USDNAME.img,io=threads,opt2=val2 ."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-blockio-io_mode |
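To confirm that the setting was applied, you can dump the guest definition and check the driver element. A small sketch; the guest name guest1 is illustrative:

virsh edit guest1                                  # set io='threads' on the <driver> element of the disk
virsh dumpxml guest1 | grep -A1 "<driver"          # verify the io attribute on the disk driver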
2.2. Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System | 2.2. Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System This section describes the steps for installing the KVM hypervisor on an existing Red Hat Enterprise Linux 7 system. To install the packages, your machine must be registered and subscribed to the Red Hat Customer Portal. To register using Red Hat Subscription Manager, run the subscription-manager register command and follow the prompts. Alternatively, run the Red Hat Subscription Manager application from Applications System Tools on the desktop to register. If you do not have a valid Red Hat subscription, visit the Red Hat online store to obtain one. For more information on registering and subscribing a system to the Red Hat Customer Portal, see https://access.redhat.com/solutions/253273 . 2.2.1. Installing Virtualization Packages Manually To use virtualization on Red Hat Enterprise Linux, at minimum, you need to install the following packages: qemu-kvm : This package provides the user-level KVM emulator and facilitates communication between hosts and guest virtual machines. qemu-img : This package provides disk management for guest virtual machines. Note The qemu-img package is installed as a dependency of the qemu-kvm package. libvirt : This package provides the server and host-side libraries for interacting with hypervisors and host systems, and the libvirtd daemon that handles the library calls, manages virtual machines, and controls the hypervisor. To install these packages, enter the following command: Several additional virtualization management packages are also available and are recommended when using virtualization: virt-install : This package provides the virt-install command for creating virtual machines from the command line. libvirt-python : This package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API. virt-manager : This package provides the virt-manager tool, also known as Virtual Machine Manager . This is a graphical tool for administering virtual machines. It uses the libvirt-client library as the management API. libvirt-client : This package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command-line tool to manage and control virtual machines and hypervisors from the command line or a special virtualization shell. You can install all of these recommended virtualization packages with the following command: 2.2.2. Installing Virtualization Package Groups The virtualization packages can also be installed from package groups. You can view the list of available groups by running the yum grouplist hidden command. Out of the complete list of available package groups, the following table describes the virtualization package groups and what they provide. Table 2.1.
Virtualization Package Groups
Virtualization Hypervisor: smallest possible virtualization host installation. Mandatory packages: libvirt, qemu-kvm, qemu-img. Optional packages: qemu-kvm-tools.
Virtualization Client: clients for installing and managing virtualization instances. Mandatory packages: gnome-boxes, virt-install, virt-manager, virt-viewer, qemu-img. Optional packages: virt-top, libguestfs-tools, libguestfs-tools-c.
Virtualization Platform: provides an interface for accessing and controlling virtual machines and containers. Mandatory packages: libvirt, libvirt-client, virt-who, qemu-img. Optional packages: fence-virtd-libvirt, fence-virtd-multicast, fence-virtd-serial, libvirt-cim, libvirt-java, libvirt-snmp, perl-Sys-Virt.
Virtualization Tools: tools for offline virtual image management. Mandatory packages: libguestfs, qemu-img. Optional packages: libguestfs-java, libguestfs-tools, libguestfs-tools-c.
To install a package group, run the yum group install package_group command. For example, to install the Virtualization Tools package group with all the package types, run: For more information on installing package groups, see How to install a group of packages with yum on Red Hat Enterprise Linux? Knowledgebase article.
"yum install qemu-kvm libvirt",
"yum install virt-install libvirt-python virt-manager virt-install libvirt-client",
"yum group install \"Virtualization Tools\" --setopt=group_package_types=mandatory,default,optional"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Installing_the_virtualization_packages-Installing_virtualization_packages_on_an_existing_Red_Hat_Enterprise_Linux_system |
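A few optional sanity checks after the packages are installed; a sketch for a Red Hat Enterprise Linux 7 host:

lsmod | grep kvm              # kvm plus kvm_intel or kvm_amd should be listed
systemctl start libvirtd      # make sure the libvirt daemon is running
systemctl enable libvirtd     # start it automatically on boot
virsh version                 # confirms the client can reach the hypervisor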
Chapter 10. Verify your deployment | Chapter 10. Verify your deployment After deployment is complete, verify that your deployment has completed successfully. Browse to the Administration Portal, for example, http://engine.example.com/ovirt-engine . Administration Console Login Log in using the administrative credentials added during hosted engine deployment. When login is successful, the Dashboard appears. Administration Console Dashboard Verify that your cluster is available. Administration Console Dashboard - Clusters Verify that at least one host is available. If you provided additional host details during Hosted Engine deployment, 3 hosts are visible here, as shown. Administration Console Dashboard - Hosts Click Compute Hosts . Verify that all hosts are listed with a Status of Up . Administration Console - Hosts Verify that all storage domains are available. Click Storage Domains . Verify that the Active icon is shown in the first column. Administration Console - Storage Domains | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/automating_rhhi_for_virtualization_deployment/verify-rhhi-deployment |
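If you prefer to double-check from the command line of one of the hyperconverged hosts, the following sketch shows two common checks; the exact output varies per deployment:

hosted-engine --vm-status     # the hosted engine VM should report a good health status
gluster volume status         # bricks backing the storage domains should be online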
Chapter 60. Manipulating Interceptor Chains on the Fly | Chapter 60. Manipulating Interceptor Chains on the Fly Abstract Interceptors can reconfigure an endpoint's interceptor chain as part of its message processing logic. It can add new interceptors, remove interceptors, reorder interceptors, and even suspend the interceptor chain. Any on-the-fly manipulation is invocation-specific, so the original chain is used each time an endpoint is involved in a message exchange. Overview Interceptor chains only live as long as the message exchange that sparked their creation. Each message contains a reference to the interceptor chain responsible for processing it. Developers can use this reference to alter the message's interceptor chain. Because the chain is per-exchange, any changes made to a message's interceptor chain will not effect other message exchanges. Chain life-cycle Interceptor chains and the interceptors in the chain are instantiated on a per-invocation basis. When an endpoint is invoked to participate in a message exchange, the required interceptor chains are instantiated along with instances of its interceptors. When the message exchange that caused the creation of the interceptor chain is completed, the chain and its interceptor instances are destroyed. This means that any changes you make to the interceptor chain or to the fields of an interceptor do not persist across message exchanges. So, if an interceptor places another interceptor in the active chain only the active chain is effected. Any future message exchanges will be created from a pristine state as determined by the endpoint's configuration. It also means that a developer cannot set flags in an interceptor that will alter future message processing. If an interceptor needs to pass information along to future instances, it can set a property in the message context. The context does persist across message exchanges. Getting the interceptor chain The first step in changing a message's interceptor chain is getting the interceptor chain. This is done using the Message.getInterceptorChain() method shown in Example 60.1, "Method for getting an interceptor chain" . The interceptor chain is returned as a org.apache.cxf.interceptor.InterceptorChain object. Example 60.1. Method for getting an interceptor chain InterceptorChain getInterceptorChain Adding interceptors The InterceptorChain object has two methods, shown in Example 60.2, "Methods for adding interceptors to an interceptor chain" , for adding interceptors to an interceptor chain. One allows you to add a single interceptor and the other allows you to add multiple interceptors. Example 60.2. Methods for adding interceptors to an interceptor chain add Interceptor<? extends Message> i add Collection<Interceptor<? extends Message>> i Example 60.3, "Adding an interceptor to an interceptor chain on-the-fly" shows code for adding a single interceptor to a message's interceptor chain. Example 60.3. Adding an interceptor to an interceptor chain on-the-fly The code in Example 60.3, "Adding an interceptor to an interceptor chain on-the-fly" does the following: Instantiates a copy of the interceptor to be added to the chain. Important The interceptor being added to the chain should be in either the same phase as the current interceptor or a latter phase than the current interceptor. Gets the interceptor chain for the current message. Adds the new interceptor to the chain. 
Removing interceptors The InterceptorChain object has one method, shown in Example 60.4, "Methods for removing interceptors from an interceptor chain" , for removing an interceptor from an interceptor chain. Example 60.4. Methods for removing interceptors from an interceptor chain remove Interceptor<? extends Message> i Example 60.5, "Removing an interceptor from an interceptor chain on-the-fly" shows code for removing an interceptor from a message's interceptor chain. Example 60.5. Removing an interceptor from an interceptor chain on-the-fly Where InterceptorClassName is the class name of the interceptor you want to remove from the chain. | [
"void handleMessage(Message message) { AddledIntereptor addled = new AddledIntereptor(); InterceptorChain chain = message.getInterceptorChain(); chain.add(addled); }",
"void handleMessage(Message message) { Iterator<Interceptor<? extends Message>> iterator = message.getInterceptorChain().iterator(); Interceptor<?> removeInterceptor = null; for (; iterator.hasNext(); ) { Interceptor<?> interceptor = iterator.next(); if (interceptor.getClass().getName().equals(\" InterceptorClassName \")) { removeInterceptor = interceptor; break; } } if (removeInterceptor != null) { log.debug(\"Removing interceptor {}\",removeInterceptor.getClass().getName()); message.getInterceptorChain().remove(removeInterceptor); } }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxfinterceptorchainmanipulation |
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_data_grid_with_spring/rhdg-downloads_datagrid |
Chapter 29. Upgrading Your Current System | Chapter 29. Upgrading Your Current System The procedure for performing an in-place upgrade on your current system is handled by the following utilities: The Preupgrade Assistant , which is a diagnostics utility that assesses your current system and identifies potential problems you might encounter during or after the upgrade. The Red Hat Upgrade Tool utility, which is used to upgrade a system from Red Hat Enterprise Linux version 6 to version 7. Note In-place upgrades are currently only supported on AMD64 and Intel 64 ( x86_64 ) systems and on IBM Z ( s390x ). Additionally, only the Server variant can be upgraded with Red Hat Upgrade Tool . Full documentation covering the process of upgrading from an earlier release of Red Hat Enterprise Linux to Red Hat Enterprise Linux 7 is available in the Red Hat Enterprise Linux 7 Migration Planning Guide . You can also use the Red Hat Enterprise Linux Upgrade Helper to guide you through migration from Red Hat Enterprise Linux 6 to 7. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-upgrading-your-current-system |
Chapter 2. Accessing the web console | Chapter 2. Accessing the web console The OpenShift Container Platform web console is a user interface accessible from a web browser. Developers can use the web console to visualize, browse, and manage the contents of projects. 2.1. Prerequisites JavaScript must be enabled to use the web console. For the best experience, use a web browser that supports WebSockets . Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. 2.2. Understanding and accessing the web console The web console runs as a pod on the master. The static assets required to run the web console are served by the pod. After OpenShift Container Platform is successfully installed using openshift-install create cluster , find the URL for the web console and login credentials for your installed cluster in the CLI output of the installation program. For example: Example output INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> Use those details to log in and access the web console. For existing clusters that you did not install, you can use oc whoami --show-console to see the web console URL. Important The dir parameter specifies the assets directory, which stores the manifest files, the ISO image, and the auth directory. The auth directory stores the kubeadmin-password and kubeconfig files. As a kubeadmin user, you can use the kubeconfig file to access the cluster with the following setting: export KUBECONFIG=<install_directory>/auth/kubeconfig . The kubeconfig is specific to the generated ISO image, so if the kubeconfig is set and the oc command fails, it is possible that the system did not boot with the generated ISO image. To perform debugging, during the bootstrap process, you can log in to the console as the core user by using the contents of the kubeadmin-password file. Additional resources Enabling feature sets using the web console | [
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/web_console/web-console |
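The same kubeadmin credentials can be used from the CLI; a short sketch using the files that the installer writes into the assets directory, with placeholders for your paths and cluster domain:

export KUBECONFIG=<install_directory>/auth/kubeconfig
oc whoami --show-console                                            # prints the web console URL
oc login -u kubeadmin -p "$(cat <install_directory>/auth/kubeadmin-password)" https://api.<cluster_domain>:6443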
Red Hat build of Apache Camel for Quarkus Reference | Red Hat build of Apache Camel for Quarkus Reference Red Hat build of Apache Camel 4.8 Red Hat build of Apache Camel for Quarkus provided by Red Hat | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_quarkus_reference/index |
8.28. device-mapper-multipath | 8.28. device-mapper-multipath 8.28.1. RHBA-2013:1574 - device-mapper-multipath bug fix and enhancement update Updated device-mapper-multipath packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools for managing multipath devices using the device-mapper multipath kernel module. Bug Fixes BZ#975676 Device Mapper Multipath (DM-Multipath) did not test pointers for NULL values before dereferencing them in the sysfs functions. Consequently, the multipathd daemon could terminate unexpectedly with a segmentation fault if a multipath device was resized while a path from the multipath device was being removed. With this update, DM-Multipath performs NULL pointer checks in sysfs functions and no longer crashes in the described scenario. BZ#889429 Prior to this update, the multipathd daemon did not start listening to udev events (uevents) until all the multipath paths that were discovered on system startup had been configured. As a consequence, multipathd was unable to handle paths that were discovered in the meantime. This bug has been fixed and multipathd now handles all paths as expected in the described scenario. BZ#889441 Due to incorrectly ordered udev rules for multipathd, link priority was not set for multipath paths when creating the multipath device using initramfs udev rules. Consequently, the /dev/disk/by-uuid/<uuid> symbolic links pointed to multipath paths instead of the multipath device. This could lead to boot problems under certain circumstances. With this update, the multipathd udev rules have been ordered correctly so that the aforementioned symbolic links point to the multipath device as expected. BZ#902585, BZ# 994277 Previously, DM-Multipath did not allocate enough space for the sysfs "state" attribute. Consequently, when a path was switched to the "transport-offline" state, a buffer overflow was triggered, resulting in an error message being logged into the system log. Also, DM-Multipath did not handle correctly paths in the "quiesce" state, which resulted in unnecessary failure of these paths. With this update, DM-Multipath allocates enough space to store all valid values of the sysfs "state" attribute. Paths in the "quiesce" state are now moved to the "pending" state, which prevents the paths from failing. BZ#928831 Previously, DM-Multipath did not verify whether the kernel supported the "retain_attached_hw_handler" mpath target feature before setting it. Consequently, the multipath devices which had set "retain_attached_hw_handler" did not work on machines with an older kernel without this feature support. With this update, DM-Multipath checks that the kernel supports the "retain_attached_hw_handler" feature before setting it. The multipath devices now work as expected on systems with older kernels utilizing newer versions of DM-Multipath. BZ#995251 In certain setups, the Redundant Disk Array Controller (RDAC) did not mark a path as down if the target controller reported an asymmetric access state of the target port to be "unavailable". As a consequence, the multipathd daemon repeatedly attempted to send I/O to an unusable path. This bug has been fixed, and multipathd no longer sends I/O to unusable paths in this case. BZ#1011341 Previously, the kpartx utility did not take into account the actual sector size of the device when creating partitions for the MS-DOS partition table, assuming a fixed size of 512 bytes per sector. 
Therefore, kpartx created partitions that were 1/8 of the proper size if the device with a sector size of 4 KB used the MS-DOS partition table. With this update, kpartx verifies the device's sector size and calculates the proper partition size if the device uses the MS-DOS partition table. BZ#892292 When displaying multipath topology for the specified multipath device, DM-Multipath unnecessarily obtained WWIDs for all the multipath paths for all the configured multipath devices. Consequently, the "multipath -l" command took a significantly longer time to complete than expected, especially on systems containing a large number of multipath devices. This behavior has been changed and when displaying topology of the specified multipath devices, the multipath command now acquires WWIDs only for paths belonging to these devices. BZ#974129 DM-Multipath previously set the fast_io_fail_tmo configuration option before setting the dev_loss_tmo option. However, a new value of fast_io_fail_tmo is not allowed to be greater than or equal to the current value of dev_loss_tmo. Therefore, when increasing values of both options and sysfs failed to set fast_io_fail_tmo due to the aforementioned limitation, even dev_loss_tmo could not have been set to a new value. With this update, if a new value of fast_io_fail_tmo would be too high, DM-Multipath sets it to the highest valid value, that is, the current value of dev_loss_tmo minus one. When setting both the fast_io_fail_tmo and dev_loss_tmo options, dev_loss_tmo is now increased first. BZ#889987 When the detect_prio option was set, DM-Multipath did not verify whether a storage device supports asymmetric logical unit access (AULA) before setting up the AULA prioritizer on the device. Consequently, if the device did not support AULA, multipathd failed to detect AULA priority of the paths and emitted an error message to the system log. This bug has been fixed so that DM-Multipath now verifies whether a path can be set with AULA priority before setting up the AULA prioritizer on the storage device. BZ# 875199 Due to a NULL pointer dereference bug, multipathd could terminate with a segmentation fault when removing a failed path to a multipath device. This update adds a NULL pointer test to the code, preventing multipathd from failing in this scenario. BZ#904836 When creating partitions for the GUID Partition Table (GPT), the kpartx utility did not account for the actual sector size of devices with a sector size other than 512 bytes. As a result, kpartx created partitions that did not match the actual device partitions. With this update, kpartx correctly calculates the size of the created partitions to match the actual block size of the storage device. BZ#918825 The kpartx utility did not properly release file descriptors allocated for loopback devices, causing file descriptor leaks. This update corrects the kpartx code, and kpartx no longer leaves file descriptors open after releasing loopback devices. BZ#958091 When the multipath command failed to load a multipath device map with read/write permissions, the multipath device could have been incorrectly set with read-only access. This happened because the multipath command always retried reloading the map table with read-only permissions even though the failure was not caused by an EROFS error. With this update, multipath correctly reloads a multipath device with read-only permissions only if the first load attempt has failed with an EROFS error.
BZ# 986767 Previously, DM-Multipath did not prevent creating a multipath device on top of a tapdev device, which cannot be subject to multipath I/O due to an unexpected path format. Consequently, if a multipath device was created on top of a tapdev device, multipathd terminated with a segmentation fault on the tapdev device's removal from the system. With this update, tapdev devices are blacklisted by default and this problem can no longer occur. Enhancements BZ#947798 This update adds a new default keyword, "reload_readwrite", to the /etc/multipath.conf file. If set to "yes", multipathd listens to path change events, and if the path has read-write access to the target storage, multipathd reloads it. This allows a multipath device to automatically grant read-write permissions as soon as all its paths have read-write access to the storage, instead of requiring manual intervention. BZ#916667 The multipathd daemon now includes major and minor numbers of the target SCSI storage device along with the path's name in the messages that are logged upon the path's addition and removal. This allows for better association of the path with the particular multipath device. BZ#920448 In order to keep naming consistency of multipath devices, DM-Multipath now sets the smallest available user-friendly name even when the /etc/multipath/bindings file has been edited manually. If the smallest user-friendly name cannot be determined, DM-Multipath retains the previous behavior and sets the multipath device symbolic name to the largest available name. BZ# 924924 A new default parameter, "replace_wwid_whitespace", has been added to the /etc/multipath.conf file. If set to "yes", the scsi_id command in the default configuration section returns WWID with white space characters replaced by underscores for all applicable SCSI devices. Users of device-mapper-multipath are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
29.3. TCPGOSSIP Configuration Options | 29.3. TCPGOSSIP Configuration Options The following TCPGOSSIP specific properties may be configured: initial_hosts - Comma delimited list of hosts to be contacted for initial membership. reconnect_interval - Interval (in milliseconds) by which a disconnected node attempts to reconnect to the Gossip Router. sock_conn_timeout - Max time (in milliseconds) allowed for socket creation. Defaults to 1000 . sock_read_timeout - Max time (in milliseconds) to block on a read. A value of 0 will block forever. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/tcpgossip_configuration_options |
B.15. No Guest Virtual Machines are Present when libvirtd is Started | B.15. No Guest Virtual Machines are Present when libvirtd is Started Symptom The libvirt daemon is successfully started, but no guest virtual machines appear to be present. Investigation There are various possible causes of this problem. Performing these tests will help to determine the cause of this situation: Verify KVM kernel modules Verify that KVM kernel modules are inserted in the kernel: If you are using an AMD machine, verify the kvm_amd kernel modules are inserted in the kernel instead, using the similar command lsmod | grep kvm_amd in the root shell. If the modules are not present, insert them using the modprobe <modulename> command. Note Although it is uncommon, KVM virtualization support may be compiled into the kernel. In this case, modules are not needed. Verify virtualization extensions Verify that virtualization extensions are supported and enabled on the host: Enable virtualization extensions in your hardware's firmware configuration within the BIOS setup. Refer to your hardware documentation for further details on this. Verify client URI configuration Verify that the URI of the client is configured as desired: For example, this message shows the URI is connected to the VirtualBox hypervisor, not QEMU , and reveals a configuration error for a URI that is otherwise set to connect to a QEMU hypervisor. If the URI was correctly connecting to QEMU , the same message would appear instead as: This situation occurs when there are other hypervisors present, which libvirt may speak to by default. Solution After performing these tests, use the following command to view a list of guest virtual machines: | [
"virsh list --all Id Name State ---------------------------------------------------- #",
"lsmod | grep kvm kvm_intel 121346 0 kvm 328927 1 kvm_intel",
"egrep \"(vmx|svm)\" /proc/cpuinfo flags : fpu vme de pse tsc ... svm ... skinit wdt npt lbrv svm_lock nrip_save flags : fpu vme de pse tsc ... svm ... skinit wdt npt lbrv svm_lock nrip_save",
"virsh uri vbox:///system",
"virsh uri qemu:///system",
"virsh list --all"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/app_no_guest_machines |
Chapter 2. Deploying OpenShift Data Foundation on Google Cloud | Chapter 2. Deploying OpenShift Data Foundation on Google Cloud You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Google Cloud installer-provisioned infrastructure. This enables you to create internal cluster resources and it results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. 
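The same check can be made from the command line; a quick sketch, assuming the operator was installed into the recommended openshift-storage namespace, is to list the ClusterServiceVersions and confirm that the OpenShift Data Foundation entry eventually reports the Succeeded phase:

# the odf-operator CSV should reach PHASE=Succeeded
oc get csv -n openshift-storage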
After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up so that the console changes take effect. In the Web Console, navigate to Operators and verify that OpenShift Data Foundation is available. Important If the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 2.2. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Be aware that the default storage class of the Google Cloud platform uses hard disk drive (HDD). To use solid state drive (SSD) based disks for better performance, you need to create a storage class, using pd-ssd as shown in the following ssd-storageclass.yaml example: Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to standard . However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class. Expand Advanced and select Full Deployment for the Deployment type option. Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (which corresponds to three times as much raw storage). In the Select Nodes section, select at least three available nodes. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Choose either one or both of the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. 
Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.3. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.3.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation cluster" . Click the Running and Completed tabs to verify that the following pods are in Running and Completed state: Table 2.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.3.3. 
Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . 2.3.4. Verifying that the OpenShift Data Foundation specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: faster provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd volumeBindingMode: WaitForFirstConsumer reclaimPolicy: Delete"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/deploying_openshift_data_foundation_on_google_cloud |
Chapter 2. Upgrading Red Hat Satellite | Chapter 2. Upgrading Red Hat Satellite Use the following procedures to upgrade your existing Red Hat Satellite to Red Hat Satellite 6.16. 2.1. Satellite Server upgrade considerations This section describes how to upgrade Satellite Server from 6.15 to 6.16. You can upgrade from any minor version of Satellite Server 6.15. Before you begin Review Section 1.2, "Prerequisites" . Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Review and update your firewall configuration. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. If you have edited any of the default templates, back up the files either by cloning or exporting them. Cloning is the recommended method because that prevents them being overwritten in future updates or upgrades. To confirm if a template has been edited, you can view its History before you upgrade or view the changes in the audit log after an upgrade. In the Satellite web UI, navigate to Monitor > Audits and search for the template to see a record of changes made. If you use the export method, restore your changes by comparing the exported template and the default template, manually applying your changes. Optional: Clone your Satellite Server to test the upgrade. After you successfully test the upgrade on the clone, you can repeat the upgrade on your primary Satellite Server and discard the clone, or you can promote the clone to your primary Satellite Server and discard the primary Satellite Server. For more information, see Cloning Satellite Server in Administering Red Hat Satellite . Capsule considerations If you use content views to control updates to a Capsule Server's base operating system, or for Capsule Server repository, you must publish updated versions of those content views. Note that Satellite Server upgraded from 6.15 to 6.16 can use Capsule Servers still at 6.15. Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrade scenarios You cannot upgrade a self-registered Satellite. You must migrate a self-registered Satellite to the Red Hat Content Delivery Network (CDN) and then perform the upgrade. FIPS mode You cannot upgrade Satellite Server from a RHEL base system that is not operating in FIPS mode to a RHEL base system that is operating in FIPS mode. To run Satellite Server on a Red Hat Enterprise Linux base system operating in FIPS mode, you must install Satellite on a freshly provisioned RHEL base system operating in FIPS mode. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . 2.2. 
Upgrading a connected Satellite Server Use this procedure for a Satellite Server with access to the public internet Warning If you customize configuration files, manually or using a tool such as Hiera, these changes are overwritten when the maintenance script runs during upgrading or updating. You can use the --noop option with the satellite-installer to test for changes. For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade. Upgrade Satellite Server Stop all Satellite services: Take a snapshot or create a backup: On a virtual machine, take a snapshot. On a physical machine, create a backup. Start all Satellite services: Optional: If you made manual edits to DNS or DHCP configuration in the /etc/zones.conf or /etc/dhcp/dhcpd.conf files, back up the configuration files because the installer only supports one domain or subnet, and therefore restoring changes from these backups might be required. Optional: If you made manual edits to DNS or DHCP configuration files and do not want to overwrite the changes, enter the following command: In the Satellite web UI, navigate to Hosts > Discovered hosts . On the Discovered Hosts page, power off and then delete the discovered hosts. From the Select an Organization menu, select each organization in turn and repeat the process to power off and delete the discovered hosts. Make a note to reboot these hosts when the upgrade is complete. Upgrade satellite-maintain to its version: If you are using an external database, upgrade your database to PostgreSQL 13. Use the health check option to determine if the system is ready for upgrade. When prompted, enter the hammer admin user credentials to configure satellite-maintain with hammer credentials. These changes are applied to the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Optional: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process completed successfully. Perform the upgrade: If the command told you to reboot, then reboot the system: steps Optional: Upgrade the operating system to Red Hat Enterprise Linux 9 on the upgraded Satellite Server. For more information, see Chapter 3, Upgrading Red Hat Enterprise Linux on Satellite or Capsule . 2.3. Synchronizing the new repositories You must enable and synchronize the new 6.16 repositories before you can upgrade Capsule Servers and Satellite clients. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . Toggle the Recommended Repositories switch to the On position. From the list of results, expand the following repositories and click the Enable icon to enable the repositories: To upgrade Satellite clients, enable the Red Hat Satellite Client 6 repositories for all Red Hat Enterprise Linux versions that clients use. 
If you have Capsule Servers, to upgrade them, enable the following repositories too: Red Hat Satellite Capsule 6.16 (for RHEL 8 x86_64) (RPMs) Red Hat Satellite Maintenance 6.16 (for RHEL 8 x86_64) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - BaseOS) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - AppStream) (RPMs) Note If the 6.16 repositories are not available, refresh the Red Hat Subscription Manifest. In the Satellite web UI, navigate to Content > Subscriptions , click Manage Manifest , then click Refresh . In the Satellite web UI, navigate to Content > Sync Status . Click the arrow to the product to view the available repositories. Select the repositories for 6.16. Note that Red Hat Satellite Client 6 does not have a 6.16 version. Choose Red Hat Satellite Client 6 instead. Click Synchronize Now . Important If an error occurs when you try to synchronize a repository, refresh the manifest. If the problem persists, raise a support request. Do not delete the manifest from the Customer Portal or in the Satellite web UI; this removes all the entitlements of your content hosts. If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . 2.4. Performing post-upgrade tasks Optional: If the default provisioning templates have been changed during the upgrade, recreate any templates cloned from the default templates. If the custom code is executed before and/or after the provisioning process, use custom provisioning snippets to avoid recreating cloned templates. For more information about configuring custom provisioning snippets, see Creating Custom Provisioning Snippets in Provisioning hosts . Pulp is introducing more data about container manifests to the API. This information allows Katello to display manifest labels, annotations, and information about the manifest type, such as if it is bootable or represents flatpak content. As a result, migrations must be performed to pull this content from manifests into the database. This migration takes time, so a pre-migration runs automatically after the upgrade to 6.16 to reduce future upgrade downtime. While the pre-migration is running, Satellite Server is fully functional but uses more hardware resources. 2.5. Upgrading Capsule Servers This section describes how to upgrade Capsule Servers from 6.15 to 6.16. Before you begin Review Section 1.2, "Prerequisites" . You must upgrade Satellite Server before you can upgrade any Capsule Servers. Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Ensure the Red Hat Satellite Capsule 6.16 repository is enabled in Satellite Server and synchronized. Ensure that you synchronize the required repositories on Satellite Server. For more information, see Section 2.3, "Synchronizing the new repositories" . If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . Ensure the Capsule's base system is registered to the newly upgraded Satellite Server. Ensure the Capsule has the correct organization and location settings in the newly upgraded Satellite Server. 
Review and update your firewall configuration prior to upgrading your Capsule Server. For more information, see Preparing Your Environment for Capsule Installation in Installing Capsule Server . Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrading Capsule Servers Create a backup. On a virtual machine, take a snapshot. On a physical machine, create a backup. For information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . Clean yum cache: Synchronize the satellite-capsule-6.16-for-rhel-8-x86_64-rpms repository in the Satellite Server. Publish and promote a new version of the content view with which the Capsule is registered. Optional: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check if the process completed successfully. The rubygem-foreman_maintain is installed from the Satellite Maintenance repository or upgraded from the Satellite Maintenance repository if currently installed. Ensure Capsule has access to satellite-maintenance-6.16-for-rhel-8-x86_64-rpms and execute: On Capsule Server, verify that the foreman_url setting points to the Satellite FQDN: Use the health check option to determine if the system is ready for upgrade: Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: If the command told you to reboot, then reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups made earlier. Optional: If you use custom repositories, ensure that you enable these custom repositories after the upgrade completes. Upgrading Capsule Servers using remote execution Create a backup or take a snapshot. For more information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . From the Job category list, select Maintenance Operations . From the Job template list, select Capsule Upgrade Playbook . In the Search Query field, enter the host name of the Capsule. Ensure that Apply to 1 host is displayed in the Resolves to field. In the target_version field, enter the target version of the Capsule. In the whitelist_options field, enter the options. Select the schedule for the job execution in Schedule . In the Type of query section, click Static Query . steps Optional: Upgrade the operating system to Red Hat Enterprise Linux 9 on the upgraded Satellite Server. For more information, see Chapter 3, Upgrading Red Hat Enterprise Linux on Satellite or Capsule . 2.6. Upgrading the external database You can upgrade an external database from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 while upgrading Satellite from 6.15 to 6.16. 
Prerequisites Create a new Red Hat Enterprise Linux 9 based host for PostgreSQL server that follows the external database on Red Hat Enterprise Linux 9 documentation. For more information, see Using External Databases with Satellite . Install PostgreSQL version 13 on the new Red Hat Enterprise Linux host. Procedure Create a backup. Restore the backup on the new server. Correct the permissions on the evr extension: If Satellite reaches the new database server via the old name, no further changes are required. Otherwise reconfigure Satellite to use the new name: | [
"satellite-maintain service stop",
"satellite-maintain service start",
"satellite-installer --foreman-proxy-dhcp-managed=false --foreman-proxy-dns-managed=false",
"satellite-maintain self-upgrade",
"satellite-maintain upgrade check",
"satellite-maintain upgrade run",
"reboot",
"yum clean metadata",
"satellite-maintain self-upgrade",
"grep foreman_url /etc/foreman-proxy/settings.yml",
"satellite-maintain upgrade check",
"satellite-maintain upgrade run",
"reboot",
"runuser -l postgres -c \"psql -d foreman -c \\\"UPDATE pg_extension SET extowner = (SELECT oid FROM pg_authid WHERE rolname='foreman') WHERE extname='evr';\\\"\"",
"satellite-installer --foreman-db-host newpostgres.example.com --katello-candlepin-db-host newpostgres.example.com --foreman-proxy-content-pulpcore-postgresql-host newpostgres.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_connected_red_hat_satellite_to_6.16/upgrading_satellite_upgrading-connected |
Chapter 3. Setting up Clair on standalone Red Hat Quay deployments | Chapter 3. Setting up Clair on standalone Red Hat Quay deployments For standalone Red Hat Quay deployments, you can set up Clair manually. Procedure In your Red Hat Quay installation directory, create a new directory for the Clair database data: USD mkdir /home/<user-name>/quay-poc/postgres-clairv4 Set the appropriate permissions for the postgres-clairv4 directory by entering the following command: USD setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4 Deploy a Clair PostgreSQL database by entering the following command: USD sudo podman run -d --name postgresql-clairv4 \ -e POSTGRESQL_USER=clairuser \ -e POSTGRESQL_PASSWORD=clairpass \ -e POSTGRESQL_DATABASE=clair \ -e POSTGRESQL_ADMIN_PASSWORD=adminpass \ -p 5433:5432 \ -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z \ registry.redhat.io/rhel8/postgresql-13:1-109 Install the PostgreSQL uuid-ossp module for your Clair deployment: USD sudo podman exec -it postgresql-clairv4 /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"" | psql -d clair -U postgres' Example output CREATE EXTENSION Note Clair requires the uuid-ossp extension to be added to its PostgreSQL database. For users with the proper privileges, Clair adds the extension automatically. If users do not have the proper privileges, the extension must be added before starting Clair. If the extension is not present, the following error will be displayed when Clair attempts to start: ERROR: Please load the "uuid-ossp" extension. (SQLSTATE 42501) . Stop the Quay container if it is running and restart it in configuration mode, loading the existing configuration as a volume: Log in to the configuration tool and click Enable Security Scanning in the Security Scanner section of the UI. Set the HTTP endpoint for Clair using a port that is not already in use on the quay-server system, for example, 8081 . Create a pre-shared key (PSK) using the Generate PSK button. Security Scanner UI Validate and download the config.yaml file for Red Hat Quay, and then stop the Quay container that is running the configuration editor. Extract the new configuration bundle into your Red Hat Quay installation directory, for example: USD tar xvf quay-config.tar.gz -C /home/<user-name>/quay-poc/ Create a folder for your Clair configuration file, for example: USD mkdir /etc/opt/clairv4/config/ Change into the Clair configuration folder: USD cd /etc/opt/clairv4/config/ Create a Clair configuration file, for example: http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: "MTU5YzA4Y2ZkNzJoMQ==" iss: ["quay"] # tracing and metrics trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" metrics: name: "prometheus" For more information about Clair's configuration format, see Clair configuration reference . 
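Before starting Clair, it is worth double-checking that the endpoint and PSK recorded in the Quay config tool line up with this file. As a rough sketch, the scanner-related stanza of Quay's own config.yaml (field names taken from the Quay configuration schema; the endpoint and key below are placeholders that match the example values used here) looks like:

# Quay config.yaml -- security scanner section
FEATURE_SECURITY_SCANNER: true
SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081
SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ==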
Start Clair by using the container image, mounting in the configuration from the file you created: Note Running multiple Clair containers is also possible, but for deployment scenarios beyond a single container the use of a container orchestrator like Kubernetes or OpenShift Container Platform is strongly recommended. 3.1. Using Clair with an upstream image for Red Hat Quay For most users, independent upgrades of Clair from the current version (4.7.4) are unnecessary. In some cases, however, customers might want to pull an image of Clair from the upstream repository for various reasons, such as for specific bug fixes or to try new features that have not yet been released downstream. You can use the following procedure to run an upstream version of Clair with Red Hat Quay. Important Upstream versions of Clair have not been fully tested for compatibility with Red Hat Quay. As a result, this combination might cause issues with your deployment. Procedure Enter the following command to stop Clair if it is running: USD podman stop <clairv4_container_name> Navigate to the upstream repository , find the version of Clair that you want to use, and pull it to your local machine. For example: USD podman pull quay.io/projectquay/clair:nightly-2024-02-03 Start Clair by using the container image, mounting in the configuration from the file you created: USD podman run -d --name clairv4 \ -p 8081:8081 -p 8088:8088 \ -e CLAIR_CONF=/clair/config.yaml \ -e CLAIR_MODE=combo \ -v /etc/opt/clairv4/config:/clair:Z \ quay.io/projectquay/clair:nightly-2024-02-03 | [
"mkdir /home/<user-name>/quay-poc/postgres-clairv4",
"setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4",
"sudo podman run -d --name postgresql-clairv4 -e POSTGRESQL_USER=clairuser -e POSTGRESQL_PASSWORD=clairpass -e POSTGRESQL_DATABASE=clair -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5433:5432 -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-13:1-109",
"sudo podman exec -it postgresql-clairv4 /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS \\\"uuid-ossp\\\"\" | psql -d clair -U postgres'",
"CREATE EXTENSION",
"sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.12.8 config secret",
"tar xvf quay-config.tar.gz -d /home/<user-name>/quay-poc/",
"mkdir /etc/opt/clairv4/config/",
"cd /etc/opt/clairv4/config/",
"http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: \"MTU5YzA4Y2ZkNzJoMQ==\" iss: [\"quay\"] tracing and metrics trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\" metrics: name: \"prometheus\"",
"sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.12.8",
"podman stop <clairv4_container_name>",
"podman pull quay.io/projectquay/clair:nightly-2024-02-03",
"podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z quay.io/projectquay/clair:nightly-2024-02-03"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-standalone-configure |
probe::netfilter.ip.local_in | probe::netfilter.ip.local_in Name probe::netfilter.ip.local_in - Called on an incoming IP packet addressed to the local computer Synopsis netfilter.ip.local_in Values nf_stolen Constant used to signify a 'stolen' verdict length The length of the packet buffer contents, in bytes urg TCP URG flag (if protocol is TCP; ipv4 only) psh TCP PSH flag (if protocol is TCP; ipv4 only) nf_repeat Constant used to signify a 'repeat' verdict family IP address family outdev Address of net_device representing output device, 0 if unknown ipproto_tcp Constant used to signify that the packet protocol is TCP indev_name Name of network device packet was received on (if known) nf_accept Constant used to signify an 'accept' verdict outdev_name Name of network device packet will be routed to (if known) protocol Packet protocol from driver (ipv4 only) rst TCP RST flag (if protocol is TCP; ipv4 only) nf_stop Constant used to signify a 'stop' verdict nf_queue Constant used to signify a 'queue' verdict dport TCP or UDP destination port (ipv4 only) iphdr Address of IP header fin TCP FIN flag (if protocol is TCP; ipv4 only) syn TCP SYN flag (if protocol is TCP; ipv4 only) ack TCP ACK flag (if protocol is TCP; ipv4 only) ipproto_udp Constant used to signify that the packet protocol is UDP saddr A string representing the source IP address sport TCP or UDP source port (ipv4 only) pf Protocol family -- either " ipv4 " or " ipv6 " daddr A string representing the destination IP address nf_drop Constant used to signify a 'drop' verdict indev Address of net_device representing input device, 0 if unknown | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netfilter-ip-local-in |
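A minimal SystemTap one-liner that exercises this probe and prints a few of the variables documented above (run as root, with the kernel debuginfo and the netfilter tapset installed):

stap -e 'probe netfilter.ip.local_in {
  # pf is "ipv4" or "ipv6"; length is the packet buffer length in bytes
  printf("%s %s -> %s len=%d\n", pf, saddr, daddr, length)
}'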
Chapter 3. Installation and update | Chapter 3. Installation and update 3.1. OpenShift Container Platform installation overview The OpenShift Container Platform installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains or deploy a cluster on infrastructure that you prepare and maintain. These two basic types of OpenShift Container Platform clusters are frequently called installer-provisioned infrastructure clusters and user-provisioned infrastructure clusters. Both types of clusters have the following characteristics: Highly available infrastructure with no single points of failure is available by default Administrators maintain control over what updates are applied and when You use the same installation program to deploy both types of clusters. The main assets generated by the installation program are the Ignition config files for the bootstrap, master, and worker machines. With these three configurations and correctly configured infrastructure, you can start an OpenShift Container Platform cluster. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installation. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel. The ultimate target is a running cluster. By meeting dependencies instead of running commands, the installation program is able to recognize and use existing components instead of running the commands to create them again. The following diagram shows a subset of the installation targets and dependencies: Figure 3.1. OpenShift Container Platform installation targets and dependencies After installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. It includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.10 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. Operating system updates are delivered as an Atomic OSTree repository that is embedded in a container image that is rolled out across the cluster by an Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree. Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, via in-place upgrades that keep the entire platform up-to-date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. 
The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 3.1.1. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.10, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Amazon Web Services (AWS) Google Cloud Platform (GCP) Microsoft Azure Microsoft Azure Stack Hub Red Hat OpenStack Platform (RHOSP) versions 16.1 and 16.2 The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . IBM Cloud VPC Red Hat Virtualization (RHV) VMware vSphere VMware Cloud (VMC) on AWS Alibaba Cloud Bare metal For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms Mixing cloud provider components, such as using a persistent storage framework from a differing platform than what the cluster is installed on In OpenShift Container Platform 4.10, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub GCP RHOSP versions 16.1 and 16.2 RHV VMware vSphere VMware Cloud on AWS Bare metal IBM Z or LinuxONE IBM Power Depending on the supported cases for the platform, installations on user-provisioned infrastructure allow you to run machines with full internet access, place your cluster behind a proxy, or perform a restricted network installation . In a restricted network installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a restricted network installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. 3.1.2. Installation process When you install an OpenShift Container Platform cluster, you download the installation program from the appropriate Infrastructure Provider page on the OpenShift Cluster Manager site. This site manages: REST API for accounts Registry tokens, which are the pull secrets that you use to obtain the required components Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics In OpenShift Container Platform 4.10, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster. 
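In practice, the installer-provisioned flow usually reduces to generating an install-config.yaml and then letting the installer build everything else; a minimal sketch follows, where the directory name is a placeholder and the interactive wizard prompts for platform, credentials, and the pull secret:

# generate install-config.yaml interactively
openshift-install create install-config --dir <installation_directory>
# provision the infrastructure and deploy the cluster
openshift-install create cluster --dir <installation_directory> --log-level=info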
If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. You use three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important It is possible to modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. 
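For the user-provisioned flow, asset generation is staged rather than done in one pass; a rough sketch of the sequence is shown below (same placeholder directory), and the resulting Ignition files are what you serve to the machines you provision yourself:

# render Kubernetes manifests from install-config.yaml
openshift-install create manifests --dir <installation_directory>
# wrap the manifests into bootstrap, master, and worker Ignition configs
openshift-install create ignition-configs --dir <installation_directory>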
If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself, including: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details Because each machine in the cluster requires information about the cluster when it is provisioned, OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 3.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. (Requires manual intervention if you provision the infrastructure) The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure) The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure) The control plane sets up the compute nodes. 
The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operation, including the creation of compute machines in supported environments. Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 3.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue with the update or the target release. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO then applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. 
However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 3.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 3.4. steps Selecting a cluster installation method and preparing it for users | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/architecture/architecture-installation |
Chapter 18. Configuring Logs | Chapter 18. Configuring Logs

The Certificate System subsystem log files record events related to operations within that specific subsystem instance. For each subsystem, different logs are kept for areas such as installation, access, and web servers. All subsystems have similar log configuration, options, and administrative paths. For details about log administration after installation, see the Configuring Subsystem Logs section in the Red Hat Certificate System Administration Guide. For an overview of logs, see Section 2.3.14, "Logs".

18.1. Certificate System Log Settings

The way that logs are configured can affect Certificate System performance. For example, log file rotation keeps logs from becoming too large; oversized log files slow down subsystem performance. This section explains the different kinds of logs recorded by Certificate System subsystems and covers important concepts such as log file rotation, buffered logging, and the available log levels.

18.1.1. Services That Are Logged

All major components and protocols of Certificate System log messages to log files. Table 18.1, "Services Logged" lists the services that are logged by default. To view messages logged by a specific service, customize the log settings accordingly.

Table 18.1. Services Logged
- ACLs: Events related to access control lists.
- Administration: Events related to administration activities, such as HTTPS communication between the Console and the instance.
- All: Events related to all of the services.
- Authentication: Events related to activity with the authentication module.
- Certificate Authority: Events related to the Certificate Manager.
- Database: Events related to activity with the internal database.
- HTTP: Events related to the HTTP activity of the server. Note that HTTP events are actually logged to the errors log belonging to the Apache server incorporated with Certificate System to provide HTTP services.
- Key Recovery Authority: Events related to the KRA.
- LDAP: Events related to activity with the LDAP directory, which is used for publishing certificates and CRLs.
- OCSP: Events related to OCSP, such as OCSP status GET requests.
- Others: Events related to other activities, such as command-line utilities and other processes.
- Request Queue: Events related to request queue activity.
- User and Group: Events related to the users and groups of the instance.

18.1.2. Log Levels (Message Categories)

The events logged by Certificate System services are determined by log levels, which make identifying and filtering events simpler. The different Certificate System log levels are listed in Table 18.2, "Log Levels and Corresponding Log Messages". Log levels are represented by the numbers 0 to 10, each number indicating the level of logging to be performed by the server. The level sets how detailed the logging should be: a higher level means less detail, because only events of high priority are logged.

Note: The default log level is 1 and this value should not be changed. To enable debug logging, see Section 18.3.3, "Additional Configuration for Debug Log". Table 18.2, "Log Levels and Corresponding Log Messages" is provided for reference to better understand log messages.

Table 18.2. Log Levels and Corresponding Log Messages
- 0 (Debugging): Messages containing debugging information. This level is not recommended for regular use because it generates too much information.
- 1 (Informational; the default selection for the audit log): Messages providing general information about the state of Certificate System, including status messages such as "Certificate System initialization complete" and "Request for operation succeeded".
- 2 (Warning): Warnings only; these messages do not indicate any failure in the normal operation of the server.
- 3 (Failure; the default selection for system and error logs): Messages indicating errors and failures that prevent the server from operating normally, including failures to perform a certificate service operation ("User authentication failed" or "Certificate revoked") and unexpected situations that can cause irrevocable errors ("The server cannot send back the request it processed for a client through the same channel the request came from the client").
- 4 (Misconfiguration): Messages indicating that a misconfiguration in the server is causing an error.
- 5 (Catastrophic failure): Messages indicating that, because of an error, the service cannot continue running.
- 6 (Security-related events): Messages identifying occurrences that affect the security of the server, for example "Privileged access attempted by user with revoked or unlisted certificate".
- 7 (PDU-related events, debugging): Messages containing debugging information for PDU events. This level is not recommended for regular use because it generates more information than is normally useful.
- 8 (PDU-related events): Messages relating transactions and rules processed on a PDU, such as creating MAC tokens.
- 9 (PDU-related events): Verbose log messages for events processed on a PDU, such as creating MAC tokens.
- 10 (All logging levels): This level enables all logging levels.

Log levels can be used to filter log entries based on the severity of an event. By default, log level 3 (Failure) is set for all services. The log level is successive; specifying a value of 3 causes levels 4, 5, and 6 to be logged as well. Log data can be extensive, especially at lower (more verbose) logging levels. Make sure that the host machine has sufficient disk space for all the log files. It is also important to define the logging level, log rotation, and server-backup policies appropriately so that all log files are backed up and the host system does not get overloaded; otherwise, information can be lost.

18.1.3. Buffered and Unbuffered Logging

The Java subsystems support buffered logging for all types of logs, and the server can be configured for either buffered or unbuffered logging. If buffered logging is configured, the server creates buffers for the corresponding logs and holds the messages in the buffers for as long as possible. The server flushes the messages out to the log files only when one of the following conditions occurs:

- The buffer gets full. The buffer is full when the buffer size is equal to or greater than the value specified by the bufferSize configuration parameter. The default value for this parameter is 512 KB.
- The flush interval for the buffer is reached. The flush interval is reached when the time since the last buffer flush is equal to or greater than the value specified by the flushInterval configuration parameter. The default value for this parameter is 5 seconds.
- Current logs are read from the Console. The server retrieves the latest log when it is queried for current logs.

If the server is configured for unbuffered logging, the server flushes messages out to the log files as they are generated. Because the server performs an I/O operation (writing to the log file) each time a message is generated, configuring the server for unbuffered logging decreases performance. Setting log parameters is described in the Configuring Logs in the Console section in the Red Hat Certificate System Administration Guide.

18.1.4. Log File Rotation

The subsystem logs have an optional log setting that allows them to be rotated, starting a new log file instead of letting log files grow indefinitely. Log files are rotated when either of the following occurs:

- The size limit for the corresponding file is reached. The size of the corresponding log file is equal to or greater than the value specified by the maxFileSize configuration parameter. The default value for this parameter is 100 KB.
- The age limit for the corresponding file is reached. The corresponding log file is equal to or older than the interval specified by the rolloverInterval configuration parameter. The default value for this parameter is 2592000 seconds (every thirty days).

When a log file is rotated, the old file is renamed using the name of the file with an appended time stamp. The appended time stamp is an integer that indicates the date and time the corresponding active log file was rotated. The date and time have the forms YYYYMMDD (year, month, day) and HHMMSS (hour, minute, second). Rotated log files are not deleted.

Log files, especially the audit log file, contain critical information. These files should be periodically archived by copying the entire log directory to a backup medium.

Note: Certificate System does not provide any tool or utility for archiving log files. Certificate System provides a command-line utility, signtool, that signs log files before archiving them as a means of tamper detection. Signing log files is an alternative to the signed audit logs feature, in which audit logs are automatically signed with a subsystem signing certificate. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/configuring_logs
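The bufferSize, flushInterval, maxFileSize, and rolloverInterval parameters described above are set per log instance in the subsystem's CS.cfg configuration file. The fragment below is a minimal sketch only: the log.instance.SignedAudit key prefix is an assumed naming pattern rather than something stated in this chapter, so confirm the exact key names in your own subsystem's CS.cfg; the values shown are the defaults cited above.

# Log level (default 1; should not be changed)
log.instance.SignedAudit.level=1
# Buffered logging: flush when the buffer reaches 512 KB or 5 seconds after the last flush
log.instance.SignedAudit.bufferSize=512
log.instance.SignedAudit.flushInterval=5
# Rotation: rotate at 100 KB or every 2592000 seconds (thirty days)
log.instance.SignedAudit.maxFileSize=100
log.instance.SignedAudit.rolloverInterval=2592000

Changes made directly in CS.cfg generally require a restart of the subsystem instance before the new log settings take effect.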
Cluster APIs | Cluster APIs OpenShift Container Platform 4.17 Reference guide for cluster APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/cluster_apis/index |
23.11. Resource Partitioning | 23.11. Resource Partitioning

Hypervisors may allow virtual machines to be placed into resource partitions, potentially with nesting of those partitions. The <resource> element groups together configuration related to resource partitioning. It currently supports a child element, partition, whose content defines the path of the resource partition in which to place the domain. If no partition is listed, the domain is placed in a default partition. The partition must be created before the guest virtual machine is started; only the (hypervisor-specific) default partition can be assumed to exist by default.

<resource>
  <partition>/virtualmachines/production</partition>
</resource>

Figure 23.13. Resource partitioning

Resource partitions are currently supported by the KVM and LXC drivers, which map partition paths to cgroups directories in all mounted controllers. | [
"<resource> <partition>/virtualmachines/production</partition> </resource>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-resource_partitioning |
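As a usage sketch tying the XML above to the cgroups mapping used by the KVM and LXC drivers: the partition is created on the host first, the element is added to the domain definition, and the guest is then started. The guest name production-guest and the cpuset controller path are illustrative assumptions only; the actual on-disk cgroup layout depends on the cgroup version in use and on whether libvirt delegates cgroup management to systemd.

# pre-create the partition for each mounted controller (path assumed for a cgroupfs v1 layout)
mkdir -p /sys/fs/cgroup/cpuset/virtualmachines/production
# add the <resource><partition> element shown in Figure 23.13, then start the guest
virsh edit production-guest
virsh start production-guest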
Release Notes | Release Notes Red Hat Ceph Storage 6.0 Release notes for Red Hat Ceph Storage 6.0 Red Hat Ceph Storage Documentation Team | [
"service_type: prometheus placement: count: 1 spec: retention_time: \"1y\" retention_size: \"1GB\"",
"If no embedded Grafana Dashboard appeared below, please follow this link to check if Grafana is reachable and there are no HTTPS certificate issues. You may need to reload this page after accepting any Browser certificate exceptions.",
"ceph config set mgr mgr/dashboard/cross_origin_url http://localhost:4200",
"time rgw-restore-bucket-index --proceed serp-bu-ver-1 default.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. `marker` is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5. `bucket_id` is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5. Error: this bucket appears to be versioned, and this tool cannot work with versioned buckets.",
"ceph config set osd osd_mclock_force_run_benchmark_on_init true",
"ceph config rm OSD. OSD_ID osd_mclock_max_capacity_iops_[hdd,ssd]",
"ceph config rm osd.0 osd_mclock_max_capacity_iops_hdd"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6.0/html-single/release_notes/%7Badministration-guide%7D |
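For readability, the first quoted command string in the Red Hat Ceph Storage 6.0 release notes entry above is a flattened cephadm service specification. Reconstructed with conventional YAML indentation (the nesting follows the standard service_type/placement/spec layout and is inferred rather than copied from a rendered example), it corresponds to:

service_type: prometheus
placement:
  count: 1
spec:
  retention_time: "1y"
  retention_size: "1GB"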