B.89.2. RHBA-2010:0852 - sssd bug fix update
B.89.2. RHBA-2010:0852 - sssd bug fix update An updated sssd package that addresses group assignment and multilib issues is now available for Red Hat Enterprise Linux 6. The System Security Services Daemon (SSSD) provides a set of daemons to manage access to remote directories and authentication mechanisms. It provides an NSS and PAM interface toward the system and a pluggable backend system to connect to multiple different account sources. It is also the basis for providing client auditing and policy services for projects like FreeIPA. Bug Fixes BZ# 637070 Previously, Kerberos applications running on the secondary architecture of a multilib platform (for example, i686 on x86_64) could not identify the Kerberos server for authentication. With this update, the Kerberos locator plugin is located in the sssd-client package to allow installation of both the 32-bit and 64-bit versions on 64-bit systems. BZ# 642412 Previously, users were not always assigned to all of the groups of which they were members in LDAP. This could cause several issues related to group-based permissions. With this update, the initgroups() call always returns all groups for the specified user. BZ# 649312 Previously, SSSD could remove legitimate groups that were only identified as a user's primary group when the cache cleanup routine ran. This could cause issues with group-based access control permissions such as access.conf and sudoers. With this update, SSSD also checks whether any users have the group set as their primary group ID before the cache cleanup routine removes it. All SSSD users are advised to upgrade to these updated packages, which fix these bugs.
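After applying the update, a quick, hedged check (the user name below is a placeholder) is to resolve an LDAP user's group memberships on a client and confirm that every expected group is listed, including any group referenced only as the user's primary GID: 
id ldapuser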
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhba-2010-0852
2.6. iostat
2.6. iostat The iostat tool, provided by the sysstat package, monitors and reports on system input/output device loading to help administrators decide how to balance input/output load between physical disks. The iostat tool reports on processor or device utilization since the previous iostat report, or since boot. You can focus the output of these reports on specific devices by using the parameters defined in the iostat(1) manual page. For detailed information on the await value and what can cause it to be high, see the following Red Hat Knowledgebase article: What exactly is the meaning of value "await" reported by iostat?
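For example, a minimal, hedged invocation (the device names and intervals are illustrative placeholders): 
iostat -x 5 3 
iostat -dx sda sdb 5 
The first command prints three extended-statistics reports at 5-second intervals, with the first report covering the time since boot; the second restricts the extended device statistics to the sda and sdb devices, refreshed every 5 seconds.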
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-iostat
Chapter 7. File Systems
Chapter 7. File Systems gfs2-utils rebase to version 3.1.8 The gfs2-utils package has been rebased to version 3.1.8, which provides important fixes and a number of enhancements: * The performance of the fsck.gfs2 , mkfs.gfs2 , and gfs2_edit utilities has been improved. * The fsck.gfs2 utility now performs better checking of journals, the jindex, system inodes, and the inode 'goal' values. * The gfs2_jadd and gfs2_grow utilities are now separate programs instead of symlinks to mkfs.gfs2 . * The test suite and related documentation have been improved. * The package no longer depends on Perl. GFS2 now prevents users from exceeding their quotas Previously, GFS2 only checked quota violations after the completion of operations, which could result in users or groups exceeding their allotted quotas. This behavior has been fixed, and GFS2 now predicts how many blocks an operation would allocate and checks whether allocating them would violate quotas. Operations that would result in quota violations are disallowed, so users can no longer exceed their allotted quotas. XFS rebase to version 4.1 XFS has been upgraded to upstream version 4.1, which includes minor bug fixes, refactoring, and reworks of certain internal mechanisms, such as logging, per-CPU accounting, and new mmap locking. On top of the upstream changes, this update extends the rename() function to add cross-rename (a symmetric variant of rename()) and whiteout handling. cifs rebase to version 3.17 The CIFS module has been upgraded to upstream version 3.17, which provides various minor fixes and new features for Server Message Block (SMB) 2 and 3: SMB versions 2.0, 2.1, 3.0, and 3.0.2. Note that using the Linux kernel CIFS module with SMB protocol 3.1.1 is currently experimental and the functionality is unavailable in kernels provided by Red Hat. In addition, features introduced in SMB version 3.0.2 are defined as optional and are not currently supported by Red Hat Enterprise Linux. Changes in NFS in Red Hat Enterprise Linux 7.2 Fallocate support allows preallocation of files on the server. The SEEK_HOLE and SEEK_DATA extensions to the lseek() function make it possible to locate holes or data quickly and efficiently. Red Hat Enterprise Linux 7.2 also adds support for flexible file layout on NFSv4 clients, described in the Technology Previews section.
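As a rough, hedged illustration of the preallocation and hole-seeking features mentioned above (file paths are placeholders, and the xfs_io seek command assumes a reasonably recent xfsprogs): 
fallocate -l 1G /mnt/data/prealloc.img 
truncate -s 1G /mnt/data/sparse.img 
xfs_io -c "seek -h 0" /mnt/data/sparse.img 
xfs_io -c "seek -d 0" /mnt/data/sparse.img 
The fallocate command preallocates 1 GB of space for a file, while the xfs_io seek commands report the offset of the first hole or the first data region at or after offset 0, which is what the SEEK_HOLE and SEEK_DATA extensions to lseek() expose to applications.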
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/file_systems
8.24. crash-gcore-command
8.24. crash-gcore-command 8.24.1. RHBA-2013:1720 - crash-gcore-command bug fix Updated crash-gcore-command packages that fix one bug are now available for Red Hat Enterprise Linux 6. The crash-gcore-command packages contain an extension module for the crash utility that adds a "gcore" command which can create a core dump file of a user-space task that was running in a kernel dumpfile. Bug Fix BZ# 890232 Due to a backported madvise/MADV_DONTDUMP change in the Red Hat Enterprise Linux 6 kernel, VDSO (Virtual Dynamically linked Shared Objects) and vsyscall pages were missing in the generated process core dump. With this update, VDSO and vsyscall pages are always contained in the generated process core dump. Users of crash-gcore-command are advised to upgrade to these updated packages, which fix this bug.
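As a hedged sketch of how the extension is typically used (the dump file names, extension module path, and PID below are placeholders, and the exact path can differ between releases): 
crash /usr/lib/debug/lib/modules/<kernel-version>/vmlinux /var/crash/vmcore 
crash> extend /usr/lib64/crash/extensions/gcore.so 
crash> gcore 1234 
Loading the extension with the crash extend command adds the gcore command, which then writes a user-space core dump for the task with PID 1234 taken from the kernel dump file.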
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/crash-gcore-command
6.14.2. Multicast Configuration
6.14.2. Multicast Configuration If you do not specify a multicast address in the cluster configuration file, the Red Hat High Availability Add-On software creates one based on the cluster ID. It generates the lower 16 bits of the address and appends them to the upper portion of the address according to whether the IP protocol is IPv4 or IPv6: For IPv4 - The address formed is 239.192. plus the lower 16 bits generated by the Red Hat High Availability Add-On software. For IPv6 - The address formed is FF15:: plus the lower 16 bits generated by the Red Hat High Availability Add-On software. Note The cluster ID is a unique identifier that cman generates for each cluster. To view the cluster ID, run the cman_tool status command on a cluster node. You can manually specify a multicast address in the cluster configuration file with the following command: Note that this command resets all other properties that you can set with the --setmulticast option to their default values, as described in Section 6.1.5, "Commands that Overwrite Settings" . If you specify a multicast address, you should use the 239.192.x.x series (or FF15:: for IPv6) that cman uses. Using a multicast address outside that range may cause unpredictable results. For example, using 224.0.0.x (which is "All hosts on the network") may not be routed correctly, or may not be routed at all, by some hardware. If you specify or modify a multicast address, you must restart the cluster for the change to take effect. For information on starting and stopping a cluster with the ccs command, see Section 7.2, "Starting and Stopping a Cluster" . Note If you specify a multicast address, make sure that you check the configuration of routers that cluster packets pass through. Some routers may take a long time to learn addresses, seriously impacting cluster performance. To remove a multicast address from a configuration file, use the --setmulticast option of the ccs command but do not specify a multicast address:
[ "ccs -h host --setmulticast multicastaddress", "ccs -h host --setmulticast" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-networkconfig-ccs-CA
Chapter 10. Contacting Red Hat support for service
Chapter 10. Contacting Red Hat support for service If the information in this guide did not help you to solve the problem, this chapter explains how to contact the Red Hat support service. Prerequisites Red Hat support account. 10.1. Providing information to Red Hat Support engineers If you are unable to fix problems related to Red Hat Ceph Storage, contact the Red Hat Support Service and provide a sufficient amount of information to help the support engineers troubleshoot the problem you encounter more quickly. Prerequisites Root-level access to the node. Red Hat support account. Procedure Open a support ticket on the Red Hat Customer Portal . Ideally, attach an sosreport to the ticket. See the What is a sosreport and how to create one in Red Hat Enterprise Linux? solution for details. If the Ceph daemons fail with a segmentation fault, consider generating a human-readable core dump file. See Generating readable core dump files for details. 10.2. Generating readable core dump files When a Ceph daemon terminates unexpectedly with a segmentation fault, gather the information about its failure and provide it to the Red Hat Support Engineers. Such information speeds up the initial investigation. Also, the Support Engineers can compare the information from the core dump files with known Red Hat Ceph Storage cluster issues. Prerequisites Install the debuginfo packages if they are not installed already. Enable the following repositories to install the required debuginfo packages. Example Once the repository is enabled, you can install the debuginfo packages that you need from this list of supported packages: Ensure that the gdb package is installed and, if it is not, install it: Example Section 10.2.1, "Generating readable core dump files in containerized deployments" 10.2.1. Generating readable core dump files in containerized deployments You can generate a core dump file for Red Hat Ceph Storage 6, which covers two scenarios for capturing the core dump file: When a Ceph process terminates unexpectedly due to a SIGILL, SIGTRAP, SIGABRT, or SIGSEGV signal, or Manually, for example for debugging issues such as Ceph processes consuming high CPU cycles or not responding. Prerequisites Root-level access to the container node running the Ceph containers. Installation of the appropriate debugging packages. Installation of the GNU Project Debugger ( gdb ) package. Ensure that the host has at least 8 GB of RAM. If there are multiple daemons on the host, then Red Hat recommends more RAM. Procedure If a Ceph process terminates unexpectedly due to a SIGILL, SIGTRAP, SIGABRT, or SIGSEGV signal: Set the core pattern to the systemd-coredump service on the node where the container with the failed Ceph process is running: Example Watch for the container failure due to a Ceph process and search for the core dump file in the /var/lib/systemd/coredump/ directory: Example To manually capture a core dump file for the Ceph Monitors and Ceph OSDs : Get the MONITOR_ID or the OSD_ID and enter the container: Syntax Example Install the procps-ng and gdb packages inside the container: Example Find the process ID: Syntax Replace PROCESS with the name of the running process, for example ceph-mon or ceph-osd . Example Generate the core dump file: Syntax Replace ID with the ID of the process that you got from the previous step, for example 18110 : Example Verify that the core dump file has been generated correctly. 
Example Copy the core dump file outside of the Ceph Monitor container: Syntax Replace MONITOR_ID with the ID number of the Ceph Monitor and replace MONITOR_PID with the process ID number. To manually capture a core dump file for other Ceph daemons: Log in to the cephadm shell : Example Enable ptrace for the daemons: Example Redeploy the daemon service: Syntax Example Exit the cephadm shell and log in to the host where the daemons are deployed: Example Get the DAEMON_ID and enter the container: Example Install the procps-ng and gdb packages: Example Get the PID of the process: Example Gather the core dump: Syntax Example Verify that the core dump file has been generated correctly. Example Copy the core dump file outside of the container: Syntax Replace DAEMON_ID with the ID number of the Ceph daemon and replace PID with the process ID number. To allow systemd-coredump to successfully store the core dump for a crashed Ceph daemon: Set DefaultLimitCORE to infinity in /etc/systemd/system.conf to allow core dump collection for a crashed process: Syntax Restart systemd or the node to apply the updated systemd settings: Syntax Verify the core dump files associated with daemon crashes: Syntax Upload the core dump file for analysis to a Red Hat support case. See Providing information to Red Hat Support engineers for details. Additional Resources The How to use gdb to generate a readable backtrace from an application core solution on the Red Hat Customer Portal The How to enable core file dumps when an application crashes or segmentation faults solution on the Red Hat Customer Portal
[ "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms yum --enable=rhceph-6-tools-for-rhel-9-x86_64-debug-rpms", "ceph-base-debuginfo ceph-common-debuginfo ceph-debugsource ceph-fuse-debuginfo ceph-immutable-object-cache-debuginfo ceph-mds-debuginfo ceph-mgr-debuginfo ceph-mon-debuginfo ceph-osd-debuginfo ceph-radosgw-debuginfo cephfs-mirror-debuginfo", "dnf install gdb", "echo \"| /usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e\" > /proc/sys/kernel/core_pattern", "ls -ltr /var/lib/systemd/coredump total 8232 -rw-r-----. 1 root root 8427548 Jan 22 19:24 core.ceph-osd.167.5ede29340b6c4fe4845147f847514c12.15622.1584573794000000.xz", "ps exec -it MONITOR_ID_OR_OSD_ID bash", "podman ps podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-osd-2 bash", "dnf install procps-ng gdb", "ps -aef | grep PROCESS | grep -v run", "ps -aef | grep ceph-mon | grep -v run ceph 15390 15266 0 18:54 ? 00:00:29 /usr/bin/ceph-mon --cluster ceph --setroot ceph --setgroup ceph -d -i 5 ceph 18110 17985 1 19:40 ? 00:00:08 /usr/bin/ceph-mon --cluster ceph --setroot ceph --setgroup ceph -d -i 2", "gcore ID", "gcore 18110 warning: target file /proc/18110/cmdline contained unexpected null characters Saved corefile core.18110", "ls -ltr total 709772 -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.18110", "cp ceph-mon- MONITOR_ID :/tmp/mon.core. MONITOR_PID /tmp", "cephadm shell", "ceph config set mgr mgr/cephadm/allow_ptrace true", "ceph orch redeploy SERVICE_ID", "ceph orch redeploy mgr ceph orch redeploy rgw.rgw.1", "exit ssh [email protected]", "podman ps podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-rgw-rgw-1-host04 bash", "dnf install procps-ng gdb", "ps aux | grep rados ceph 6 0.3 2.8 5334140 109052 ? Sl May10 5:25 /usr/bin/radosgw -n client.rgw.rgw.1.host04 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug", "gcore PID", "gcore 6", "ls -ltr total 108798 -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.6", "cp ceph-mon- DAEMON_ID :/tmp/mon.core. PID /tmp", "cat /etc/systemd/system.conf DefaultLimitCORE=infinity", "sudo systemctl daemon-reexec", "ls -ltr /var/lib/systemd/coredump/" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/troubleshooting_guide/contacting-red-hat-support-for-service
Authentication and authorization
Authentication and authorization OpenShift Container Platform 4.18 Configuring user authentication and access controls for users and services Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authentication_and_authorization/index
Chapter 1. Overview of alt-java
Chapter 1. Overview of alt-java Red Hat packages contain a mitigation for the Speculative Store Bypass (SSB) vulnerability in the form of a patch for the java binary. This patch disables an optimization present in x86-64 (Intel and AMD) processors. Disabling that optimization reduces the risk of kernel side-channel attacks, but also reduces CPU performance. Because the patch reduces performance, it has been removed from the java launcher and is instead provided in a new binary, alt-java. From the January 2021 Critical Patch Update release (1.8.0 282.b08, 11.0.10.9) onwards, the alt-java binary is included in the Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 RPM packages. Additional resources For more information about the performance impact of the SSB mitigation, see Kernel Side-Channel Attack using Speculative Store Bypass - CVE-2018-3639 on the Red Hat Customer Portal. For more information about the java binary patch, see RH1566890 in the Red Hat Bugzilla documentation.
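As a hedged usage sketch, alt-java accepts the same command-line arguments as the regular java launcher, so an application can be run with the mitigation enabled simply by swapping the launcher (the JAR name below is a placeholder): 
alt-java -version 
alt-java -Xmx512m -jar myapp.jar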
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_alt-java/alt-java-overview
Chapter 4. Capacity metering using the Telemetry service
Chapter 4. Capacity metering using the Telemetry service The OpenStack Telemetry service provides usage metrics that you can use for billing, charge-back, and show-back purposes. Such metrics data can also be used by third-party applications to plan for capacity on the cluster, and can be leveraged for auto-scaling virtual instances using OpenStack Heat. For more information, see Auto Scaling for Instances . You can use the combination of Ceilometer and Gnocchi for monitoring and alarms. This is supported on small-size clusters and with known limitations. For real-time monitoring, Red Hat OpenStack Platform ships with agents that provide metrics data that can be consumed by separate monitoring infrastructure and applications. For more information, see Monitoring Tools Configuration . 4.1. Viewing measures List all the measures for a particular resource: List only measures for a particular resource, within a range of timestamps: The timestamp variables <START_TIME> and <STOP_TIME> use the format iso-dateThh:mm:ss . 4.2. Creating new measures You can use measures to send data to the Telemetry service, and they do not need to correspond to a previously-defined meter. For example: 4.3. Example: Viewing cloud usage measures This example shows the average memory usage of all instances for each project. 4.4. Example: Viewing L3 cache use If your Intel hardware and libvirt version support Cache Monitoring Technology (CMT), you can use the cpu_l3_cache meter to monitor the amount of L3 cache used by an instance. Monitoring the L3 cache requires the following: cmt in the LibvirtEnabledPerfEvents parameter. cpu_l3_cache in the gnocchi_resources.yaml file. cpu_l3_cache in the Ceilometer polling.yaml file. Enabling L3 cache monitoring To enable L3 cache monitoring: Create a YAML file for telemetry (for example, ceilometer-environment.yaml ) and add cmt to the LibvirtEnabledPerfEvents parameter. parameter_defaults: LibvirtEnabledPerfEvents: cmt Launch the overcloud with this YAML file. #!/bin/bash openstack overcloud deploy \ --templates \ <additional templates> \ -e /home/stack/ceilometer-environment.yaml Verify that cpu_l3_cache is enabled in gnocchi on the Compute node. Verify that cpu_l3_cache is enabled for Telemetry polling. If cpu_l3_cache is not enabled for Telemetry, enable it and restart the service. Note This podman change does not persist over a reboot. After you have launched a guest instance on this compute node, you can use the gnocchi measures show command to monitor the CMT metrics. 4.5. View Existing Alarms To list the existing Telemetry alarms, use the aodh command. For example: To list the meters assigned to a resource, specify the UUID of the resource (an instance, image, or volume, among others). For example: 4.6. Create an Alarm You can use aodh to create an alarm that activates when a threshold value is reached. In this example, the alarm activates and adds a log entry when the average CPU utilization for an individual instance exceeds 80%. A query is used to isolate the specific instance's id ( 94619081-abf5-4f1f-81c7-9cedaa872403 ) for monitoring purposes: To edit an existing threshold alarm, use the aodh alarm update command. For example, to change the alarm threshold to 75%: 4.7. Disable or Delete an Alarm To disable an alarm: To delete an alarm: 4.8. Example: Monitor the disk activity of instances The following example demonstrates how to use an Aodh alarm to monitor the cumulative disk activity for all the instances contained within a particular project. 1. 
Review the existing projects, and select the appropriate UUID of the project you need to monitor. This example uses the admin project: 2. Use the project's UUID to create an alarm that analyses the sum() of all read requests generated by the instances in the admin project (the query can be further restrained with the --query parameter). 4.9. Example: Monitor CPU usage If you want to monitor an instance's performance, you would start by examining the gnocchi database to identify which metrics you can monitor, such as memory or CPU usage. For example, run gnocchi resource show against an instance to identify which metrics can be monitored: Query the available metrics for a particular instance UUID: In this result, the metrics value lists the components you can monitor using Aodh alarms, for example cpu_util . To monitor CPU usage, you will need the cpu_util metric. To see more information on this metric: archive_policy - Defines the aggregation interval for calculating the std, count, min, max, sum, mean values. Use Aodh to create a monitoring task that queries cpu_util . This task will trigger events based on the settings you specify. For example, to raise a log entry when an instance's CPU spikes over 80% for an extended duration: comparison-operator - The ge operator defines that the alarm will trigger if the CPU usage is greater than (or equal to) 80%. granularity - Metrics have an archive policy associated with them; the policy can have various granularities (for example, 5 minutes aggregation for 1 hour + 1 hour aggregation over a month). The granularity value must match the duration described in the archive policy. evaluation-periods - Number of granularity periods that need to pass before the alarm will trigger. For example, setting this value to 2 will mean that the CPU usage will need to be over 80% for two polling periods before the alarm will trigger. [u'log://'] - This value will log events to your Aodh log file. Note You can define different actions to run when an alarm is triggered ( alarm_actions ), and when it returns to a normal state ( ok_actions ), such as a webhook URL. To check if your alarm has been triggered, query the alarm's history: 4.10. Manage Resource Types Telemetry resource types that were previously hardcoded can now be managed by the gnocchi client. You can use the gnocchi client to create, view, and delete resource types, and you can use the gnocchi API to update or delete attributes. 1. Create a new resource-type : 2. Review the configuration of the resource-type : 3. Delete the resource-type : Note You cannot delete a resource type if a resource is using it.
[ "openstack metric measures show --resource-id UUID METER_NAME", "openstack metric measures show --aggregation mean --start <START_TIME> --stop <STOP_TIME> --resource-id UUID METER_NAME", "openstack metrics measures add -m 2015-01-12T17:56:23@42 --resource-id UUID METER_NAME", "openstack metric measures aggregation --resource-type instance --groupby project_id -m memory", "parameter_defaults: LibvirtEnabledPerfEvents: cmt", "#!/bin/bash openstack overcloud deploy --templates <additional templates> -e /home/stack/ceilometer-environment.yaml", "sudo -i podman exec -ti ceilometer_agent_compute cat /etc/ceilometer/gnocchi_resources.yaml | grep cpu_l3_cache", "podman exec -ti ceilometer_agent_compute cat /etc/ceilometer/polling.yaml | grep cpu_l3_cache", "podman exec -ti ceilometer_agent_compute echo \" - cpu_l3_cache\" >> /etc/ceilometer/polling.yaml podman exec -ti ceilometer_agent_compute pkill -HUP -f \"ceilometer.*master process\"", "(overcloud) [stack@undercloud-0 ~]USD gnocchi measures show --resource-id a6491d92-b2c8-4f6d-94ba-edc9dfde23ac cpu_l3_cache +---------------------------+-------------+-----------+ | timestamp | granularity | value | +---------------------------+-------------+-----------+ | 2017-10-25T09:40:00+00:00 | 300.0 | 1966080.0 | | 2017-10-25T09:45:00+00:00 | 300.0 | 1933312.0 | | 2017-10-25T09:50:00+00:00 | 300.0 | 2129920.0 | | 2017-10-25T09:55:00+00:00 | 300.0 | 1966080.0 | | 2017-10-25T10:00:00+00:00 | 300.0 | 1933312.0 | | 2017-10-25T10:05:00+00:00 | 300.0 | 2195456.0 | | 2017-10-25T10:10:00+00:00 | 300.0 | 1933312.0 | | 2017-10-25T10:15:00+00:00 | 300.0 | 1998848.0 | | 2017-10-25T10:20:00+00:00 | 300.0 | 2097152.0 | | 2017-10-25T10:25:00+00:00 | 300.0 | 1933312.0 | | 2017-10-25T10:30:00+00:00 | 300.0 | 1966080.0 | | 2017-10-25T10:35:00+00:00 | 300.0 | 1933312.0 | | 2017-10-25T10:40:00+00:00 | 300.0 | 1933312.0 | | 2017-10-25T10:45:00+00:00 | 300.0 | 1933312.0 | | 2017-10-25T10:50:00+00:00 | 300.0 | 2850816.0 | | 2017-10-25T10:55:00+00:00 | 300.0 | 2359296.0 | | 2017-10-25T11:00:00+00:00 | 300.0 | 2293760.0 | +---------------------------+-------------+-----------+", "aodh alarm list +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ | 922f899c-27c8-4c7d-a2cf-107be51ca90a | gnocchi_aggregation_by_resources_threshold | iops-monitor-read-requests | insufficient data | low | True | +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+", "gnocchi resource show 5e3fcbe2-7aab-475d-b42c-a440aa42e5ad", "aodh alarm create --type gnocchi_aggregation_by_resources_threshold --name cpu_usage_high --metric cpu_util --threshold 80 --aggregation-method sum --resource-type instance --query '{\"=\": {\"id\": \"94619081-abf5-4f1f-81c7-9cedaa872403\"}}' --alarm-action 'log://' +---------------------------+-------------------------------------------------------+ | Field | Value | +---------------------------+-------------------------------------------------------+ | aggregation_method | sum | | alarm_actions | [u'log://'] | | alarm_id | b794adc7-ed4f-4edb-ace4-88cbe4674a94 | | comparison_operator | eq | | description | gnocchi_aggregation_by_resources_threshold alarm rule | 
| enabled | True | | evaluation_periods | 1 | | granularity | 60 | | insufficient_data_actions | [] | | metric | cpu_util | | name | cpu_usage_high | | ok_actions | [] | | project_id | 13c52c41e0e543d9841a3e761f981c20 | | query | {\"=\": {\"id\": \"94619081-abf5-4f1f-81c7-9cedaa872403\"}} | | repeat_actions | False | | resource_type | instance | | severity | low | | state | insufficient data | | state_timestamp | 2016-12-09T05:18:53.326000 | | threshold | 80.0 | | time_constraints | [] | | timestamp | 2016-12-09T05:18:53.326000 | | type | gnocchi_aggregation_by_resources_threshold | | user_id | 32d3f2c9a234423cb52fb69d3741dbbc | +---------------------------+-------------------------------------------------------+", "aodh alarm update --name cpu_usage_high --threshold 75", "aodh alarm update --name cpu_usage_high --enabled=false", "aodh alarm delete --name cpu_usage_high", "openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 745d33000ac74d30a77539f8920555e7 | admin | | 983739bb834a42ddb48124a38def8538 | services | | be9e767afd4c4b7ead1417c6dfedde2b | demo | +----------------------------------+----------+", "aodh alarm create --type gnocchi_aggregation_by_resources_threshold --name iops-monitor-read-requests --metric disk.read.requests.rate --threshold 42000 --aggregation-method sum --resource-type instance --query '{\"=\": {\"project_id\": \"745d33000ac74d30a77539f8920555e7\"}}' +---------------------------+-----------------------------------------------------------+ | Field | Value | +---------------------------+-----------------------------------------------------------+ | aggregation_method | sum | | alarm_actions | [] | | alarm_id | 192aba27-d823-4ede-a404-7f6b3cc12469 | | comparison_operator | eq | | description | gnocchi_aggregation_by_resources_threshold alarm rule | | enabled | True | | evaluation_periods | 1 | | granularity | 60 | | insufficient_data_actions | [] | | metric | disk.read.requests.rate | | name | iops-monitor-read-requests | | ok_actions | [] | | project_id | 745d33000ac74d30a77539f8920555e7 | | query | {\"=\": {\"project_id\": \"745d33000ac74d30a77539f8920555e7\"}} | | repeat_actions | False | | resource_type | instance | | severity | low | | state | insufficient data | | state_timestamp | 2016-11-08T23:41:22.919000 | | threshold | 42000.0 | | time_constraints | [] | | timestamp | 2016-11-08T23:41:22.919000 | | type | gnocchi_aggregation_by_resources_threshold | | user_id | 8c4aea738d774967b4ef388eb41fef5e | +---------------------------+-----------------------------------------------------------+", "gnocchi resource show --type instance d71cdf9a-51dc-4bba-8170-9cd95edd3f66 +-----------------------+---------------------------------------------------------------------+ | Field | Value | +-----------------------+---------------------------------------------------------------------+ | created_by_project_id | 44adccdc32614688ae765ed4e484f389 | | created_by_user_id | c24fa60e46d14f8d847fca90531b43db | | creator | c24fa60e46d14f8d847fca90531b43db:44adccdc32614688ae765ed4e484f389 | | display_name | test-instance | | ended_at | None | | flavor_id | 14c7c918-df24-481c-b498-0d3ec57d2e51 | | flavor_name | m1.tiny | | host | overcloud-compute-0 | | id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | image_ref | e75dff7b-3408-45c2-9a02-61fbfbf054d7 | | metrics | compute.instance.booting.time: c739a70d-2d1e-45c1-8c1b-4d28ff2403ac | | | cpu.delta: 700ceb7c-4cff-4d92-be2f-6526321548d6 | | | cpu: 
716d6128-1ea6-430d-aa9c-ceaff2a6bf32 | | | cpu_l3_cache: 3410955e-c724-48a5-ab77-c3050b8cbe6e | | | cpu_util : b148c392-37d6-4c8f-8609-e15fc15a4728 | | | disk.allocation: 9dd464a3-acf8-40fe-bd7e-3cb5fb12d7cc | | | disk.capacity: c183d0da-e5eb-4223-a42e-855675dd1ec6 | | | disk.ephemeral.size: 15d1d828-fbb4-4448-b0f2-2392dcfed5b6 | | | disk.iops: b8009e70-daee-403f-94ed-73853359a087 | | | disk.latency: 1c648176-18a6-4198-ac7f-33ee628b82a9 | | | disk.read.bytes.rate: eb35828f-312f-41ce-b0bc-cb6505e14ab7 | | | disk.read.bytes: de463be7-769b-433d-9f22-f3265e146ec8 | | | disk.read.requests.rate: 588ca440-bd73-4fa9-a00c-8af67262f4fd | | | disk.read.requests: 53e5d599-6cad-47de-b814-5cb23e8aaf24 | | | disk.root.size: cee9d8b1-181e-4974-9427-aa7adb3b96d9 | | | disk.usage: 4d724c99-7947-4c6d-9816-abbbc166f6f3 | | | disk.write.bytes.rate: 45b8da6e-0c89-4a6c-9cce-c95d49d9cc8b | | | disk.write.bytes: c7734f1b-b43a-48ee-8fe4-8a31b641b565 | | | disk.write.requests.rate: 96ba2f22-8dd6-4b89-b313-1e0882c4d0d6 | | | disk.write.requests: 553b7254-be2d-481b-9d31-b04c93dbb168 | | | memory.bandwidth.local: 187f29d4-7c70-4ae2-86d1-191d11490aad | | | memory.bandwidth.total: eb09a4fc-c202-4bc3-8c94-aa2076df7e39 | | | memory.resident: 97cfb849-2316-45a6-9545-21b1d48b0052 | | | memory.swap.in: f0378d8f-6927-4b76-8d34-a5931799a301 | | | memory.swap.out: c5fba193-1a1b-44c8-82e3-9fdc9ef21f69 | | | memory.usage: 7958d06d-7894-4ca1-8c7e-72ba572c1260 | | | memory: a35c7eab-f714-4582-aa6f-48c92d4b79cd | | | perf.cache.misses: da69636d-d210-4b7b-bea5-18d4959e95c1 | | | perf.cache.references: e1955a37-d7e4-4b12-8a2a-51de4ec59efd | | | perf.cpu.cycles: 5d325d44-b297-407a-b7db-cc9105549193 | | | perf.instructions: 973d6c6b-bbeb-4a13-96c2-390a63596bfc | | | vcpus: 646b53d0-0168-4851-b297-05d96cc03ab2 | | original_resource_id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | project_id | 3cee262b907b4040b26b678d7180566b | | revision_end | None | | revision_start | 2017-11-16T04:00:27.081865+00:00 | | server_group | None | | started_at | 2017-11-16T01:09:20.668344+00:00 | | type | instance | | user_id | 1dbf5787b2ee46cf9fa6a1dfea9c9996 | +-----------------------+---------------------------------------------------------------------+", "gnocchi metric show --resource d71cdf9a-51dc-4bba-8170-9cd95edd3f66 cpu_util +------------------------------------+-------------------------------------------------------------------+ | Field | Value | +------------------------------------+-------------------------------------------------------------------+ | archive_policy/aggregation_methods | std, count, min, max, sum, mean | | archive_policy/back_window | 0 | | archive_policy/definition | - points: 8640, granularity: 0:05:00, timespan: 30 days, 0:00:00 | | archive_policy/name | low | | created_by_project_id | 44adccdc32614688ae765ed4e484f389 | | created_by_user_id | c24fa60e46d14f8d847fca90531b43db | | creator | c24fa60e46d14f8d847fca90531b43db:44adccdc32614688ae765ed4e484f389 | | id | b148c392-37d6-4c8f-8609-e15fc15a4728 | | name | cpu_util | | resource/created_by_project_id | 44adccdc32614688ae765ed4e484f389 | | resource/created_by_user_id | c24fa60e46d14f8d847fca90531b43db | | resource/creator | c24fa60e46d14f8d847fca90531b43db:44adccdc32614688ae765ed4e484f389 | | resource/ended_at | None | | resource/id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | resource/original_resource_id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | resource/project_id | 3cee262b907b4040b26b678d7180566b | | resource/revision_end | None | | resource/revision_start | 
2017-11-17T00:05:27.516421+00:00 | | resource/started_at | 2017-11-16T01:09:20.668344+00:00 | | resource/type | instance | | resource/user_id | 1dbf5787b2ee46cf9fa6a1dfea9c9996 | | unit | None | +------------------------------------+-------------------------------------------------------------------+", "aodh alarm create --project-id 3cee262b907b4040b26b678d7180566b --name high-cpu --type gnocchi_resources_threshold --description 'High CPU usage' --metric cpu_util --threshold 80.0 --comparison-operator ge --aggregation-method mean --granularity 300 --evaluation-periods 1 --alarm-action 'log://' --ok-action 'log://' --resource-type instance --resource-id d71cdf9a-51dc-4bba-8170-9cd95edd3f66 +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | aggregation_method | mean | | alarm_actions | [u'log://'] | | alarm_id | 1625015c-49b8-4e3f-9427-3c312a8615dd | | comparison_operator | ge | | description | High CPU usage | | enabled | True | | evaluation_periods | 1 | | granularity | 300 | | insufficient_data_actions | [] | | metric | cpu_util | | name | high-cpu | | ok_actions | [u'log://'] | | project_id | 3cee262b907b4040b26b678d7180566b | | repeat_actions | False | | resource_id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | resource_type | instance | | severity | low | | state | insufficient data | | state_reason | Not evaluated yet | | state_timestamp | 2017-11-16T05:20:48.891365 | | threshold | 80.0 | | time_constraints | [] | | timestamp | 2017-11-16T05:20:48.891365 | | type | gnocchi_resources_threshold | | user_id | 1dbf5787b2ee46cf9fa6a1dfea9c9996 | +---------------------------+--------------------------------------+", "aodh alarm-history show 1625015c-49b8-4e3f-9427-3c312a8615dd --fit-width +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | timestamp | type | detail | event_id | +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | 2017-11-16T05:21:47.850094 | state transition | {\"transition_reason\": \"Transition to ok due to 1 samples inside threshold, most recent: 0.0366665763\", \"state\": \"ok\"} | 3b51f09d-ded1-4807-b6bb-65fdc87669e4 | +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+", "gnocchi resource-type create testResource01 -a bla:string:True:min_length=123 +----------------+------------------------------------------------------------+ | Field | Value | +----------------+------------------------------------------------------------+ | attributes/bla | max_length=255, min_length=123, required=True, type=string | | name | testResource01 | | state | active | +----------------+------------------------------------------------------------+", "gnocchi resource-type show testResource01 +----------------+------------------------------------------------------------+ | Field | Value | +----------------+------------------------------------------------------------+ | attributes/bla | max_length=255, min_length=123, required=True, 
type=string | | name | testResource01 | | state | active | +----------------+------------------------------------------------------------+", "gnocchi resource-type delete testResource01" ]
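For instance, a hedged concrete form of the time-bounded query from Section 4.1 (the timestamps are placeholders, and the resource UUID is the example instance used elsewhere in this chapter): 
openstack metric measures show --aggregation mean --start 2021-03-01T00:00:00 --stop 2021-03-01T06:00:00 --resource-id 94619081-abf5-4f1f-81c7-9cedaa872403 cpu_util 
This returns the mean cpu_util measures recorded for that instance between the two ISO-formatted timestamps.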
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/logging_monitoring_and_troubleshooting_guide/monitoring_using_the_telemetry_service
D.2. Host Tags
D.2. Host Tags In a cluster configuration, you can define host tags in the configuration files. If you set hosttags = 1 in the tags section, a host tag is automatically defined using the machine's host name. This allows you to use a common configuration file which can be replicated on all your machines so they hold identical copies of the file, but the behavior can differ between machines according to the host name. For information on the configuration files, see Appendix B, The LVM Configuration Files . For each host tag, an extra configuration file is read if it exists: lvm_<hosttag>.conf . If that file defines new tags, then further configuration files will be appended to the list of files to read in. For example, the following entry in the configuration file always defines tag1 , and defines tag2 if the host name is host1 .
[ "tags { tag1 { } tag2 { host_list = [\"host1\"] } }" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/host_tags
Chapter 11. Forcing replication updates after an instance in a replication environment was offline
Chapter 11. Forcing replication updates after an instance in a replication environment was offline If you stop a Directory Server instance that is involved in replication for regular maintenance, the supplier must update the directory data immediately when it comes back online. You can enforce this update using the command line and the web console. 11.1. Forcing replication updates using the command line Perform the following steps on the suppliers to enforce replication updates for the dc=example,dc=com suffix in example-agreement . Prerequisites The replication is set up. The consumer has been initialized. Procedure Check if the replication agreement has an update schedule configured: # dsconf -D "cn=Directory Manager" ldap://server.example.com repl-agmt get --suffix " dc=example,dc=com " example-agreement If the output of the command contains nsDS5ReplicaUpdateSchedule: * or the nsDS5ReplicaUpdateSchedule parameter is not present, no update schedule is configured. If nsDS5ReplicaUpdateSchedule contains a schedule, such as shown in the following, note the value: nsDS5ReplicaUpdateSchedule: 0800-2200 0246 If an update schedule is configured, enter the following command to temporarily disable it: # dsconf -D "cn=Directory Manager" ldap://server.example.com repl-agmt set --schedule \* --suffix " dc=example,dc=com " example-agreement Temporarily disable the replication agreement: # dsconf -D "cn=Directory Manager" ldap://server.example.com repl-agmt disable --suffix " dc=example,dc=com " example-agreement Re-enable the replication agreement to force the replication update: # dsconf -D "cn=Directory Manager" ldap://server.example.com repl-agmt enable --suffix " dc=example,dc=com " example-agreement If a replication schedule was configured at the beginning of this procedure, set the schedule back to the value you noted: # dsconf -D "cn=Directory Manager" ldap://server.example.com repl-agmt set --schedule "0800-2200 0246" --suffix " dc=example,dc=com " example-agreement Verification Display the replication status: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com repl-agmt status --suffix " dc=example,dc=com " example-agreement ... Last Update Start: 20210406120631Z Last Update End: 20210406120631Z Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded ... 11.2. Forcing replication updates using the web console Perform the following steps on the suppliers to enforce replication updates. Prerequisites The replication is set up. The consumer has been initialized. You are logged in to the instance in the web console. Procedure Open the Replication menu. Select the dc=example,dc=com suffix. Open the Agreements tab. Check if the replication agreement has an update schedule configured: Click the overflow menu next to the agreement, and select Edit Agreement . On the Scheduling tab, note the values that are currently set. If Use A Custom Schedule is not selected, no schedule is configured. Click the overflow menu next to the replication agreement, and select Disable/Enable Agreement to disable the agreement. The status of the agreement in the State column is now Disabled . Click the overflow menu next to the replication agreement again, and select Disable/Enable Agreement to re-enable the replication agreement and enforce the replication update. The status of the agreement in the State column is now Enabled . If a replication schedule was configured at the beginning of this procedure, set the schedule back to the values you noted: Click the overflow menu, and select Actions Edit Agreement . 
On the Scheduling tab, set the values. Verification Display the replication status: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com repl-agmt status --suffix " dc=example,dc=com " example-agreement ... Last Update Start: 20210406120631Z Last Update End: 20210406120631Z Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded ...
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-agmt get --suffix \" dc=example,dc=com \" example-agreement", "nsDS5ReplicaUpdateSchedule: 0800-2200 0246", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-agmt set --schedule \\* --suffix \" dc=example,dc=com \" example-agreement", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-agmt disable --suffix \" dc=example,dc=com \" example-agreement", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-agmt enable --suffix \" dc=example,dc=com \" example-agreement", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-agmt set --schedule \"0800-2200 0246\" --suffix \" dc=example,dc=com \" example-agreement", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt status --suffix \" dc=example,dc=com \" example-agreement Last Update Start: 20210406120631Z Last Update End: 20210406120631Z Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt status --suffix \" dc=example,dc=com \" example-agreement Last Update Start: 20210406120631Z Last Update End: 20210406120631Z Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_forcing-replication-updates-after-an-instance-in-a-replication-environment-was-offline_configuring-and-managing-replication
Chapter 14. Forms in Business Central
Chapter 14. Forms in Business Central A form is a layout definition for a page, defined as HTML, that is displayed as a dialog window to the user during process and task instantiation. Task forms acquire data from a user for both the process and task instance execution, whereas process forms take input and output from process variables. The input is then mapped to the task using the data input assignment, which you can use inside of a task. When the task is completed, the data is mapped as a data output assignment to provide the data to the parent process instance. 14.1. Form Modeler Red Hat Process Automation Manager provides a custom editor for defining forms called Form Modeler. With Form Modeler, you can generate forms for data objects, task forms, and process start forms without writing code. Form Modeler includes a widget library for binding multiple data types and a callback mechanism to send notifications when form values change. Form Modeler uses bean-based validation and supports binding form fields to static or dynamic models. Form Modeler includes the following features: Form modeling user interface for forms Form auto-generation from the data model or Java objects Data binding for Java objects Formula and expressions Customized forms layouts Forms embedding Form Modeler comes with predefined field types that you place onto the canvas to create a form. Figure 14.1. Example mortgage loan application form 14.2. Generating process and task forms in Business Central You can generate a process form from your business process that is displayed at process instantiation to the user who instantiated the process. You can also generate a task form from your business process that is displayed at user task instantiation, when the execution flow reaches the task, to the actor of the user task. Procedure In Business Central, go to Menu Design Projects . Click the project name to open the asset view and then click the business process name. In the process designer, click the process task that you want to create a form for (if applicable). In the upper-right toolbar, click the Form Generation icon and select the forms that you want to generate: Generate process form : Generates the form for the entire process. This is the initial form that a user must complete when the process instance is started. Generate all forms : Generates the form for the entire process and for all user tasks. Generate forms for selection : Generates the forms for the selected user task nodes. Figure 14.2. Form generation menu The forms are created in the root directory of your project. Go to the root directory of your project in Business Central, click the new form name, and use the Form Modeler to customize the form to meet your requirements. 14.3. Manually creating forms in Business Central You can create task and process forms manually from your project asset view. This is another way to generate a form without selecting to generate forms from your business process. For example, the Form Modeler now supports creating forms from external data objects. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Form . Provide the following information in the Create new Form window: Form name (must be unique) Package name Model type: Select either Business Process or Data Object . For the Business Process model type, select your business process from the Select Process drop-down menu, and then select the form that you want to create from the Select Form drop-down menu. 
For the Data Object model type, select one of your project data objects from the Select Data Object from Project drop-down menu. Click Ok to open the Form Modeler. In the Components view on the left side of the Form Modeler, expand the Model Fields and Form Controls menus and create a new form by dragging your required fields and form controls to the canvas. Click Save to save your changes. 14.4. Document attachments in a form or process Red Hat Process Automation Manager supports document attachments in forms using the Document form field. With the Document form field, you can upload documents that are required as part of a form or process. To enable document attachments in forms and processes, complete the following procedures: Set the document marshalling strategy. Create a document variable in the business process. Map the task inputs and outputs to the document variable. 14.4.1. Setting the document marshalling strategy The document marshalling strategy for your project determines where documents are stored for use with forms and processes. The default document marshalling strategy in Red Hat Process Automation Manager is org.jbpm.document.marshalling.DocumentMarshallingStrategy . This strategy uses a DocumentStorageServiceImpl class that stores documents locally in your PROJECT_HOME /.docs folder. You can set this document marshalling strategy or a custom document marshalling strategy for your project in Business Central or in the kie-deployment-descriptor.xml file. Procedure In Business Central, go to Menu Design Projects . Select a project. The project Assets window opens. Click the Settings tab. Figure 14.3. Settings tab Click Deployments Marshalling Strategies -> Add Marshalling Strategy . In the Name field, enter the identifier of a document marshalling strategy, and in the Resolver drop-down menu, select the corresponding resolver type: For single documents: Enter org.jbpm.document.marshalling.DocumentMarshallingStrategy as the document marshalling strategy and set the resolver type to Reflection . For multiple documents: Enter new org.jbpm.document.marshalling.DocumentCollectionImplMarshallingStrategy(new org.jbpm.document.marshalling.DocumentMarshallingStrategy()) as the document marshalling strategy and set the resolver type to MVEL . For custom document support: Enter the identifier of the custom document marshalling strategy and select the relevant resolver type. Click Test to validate your deployment descriptor file. Click Deploy to build and deploy the updated project. Alternatively, if you are not using Business Central, you can navigate to PROJECT_HOME /src/main/resources/META_INF/kie-deployment-descriptor.xml (if applicable) and edit the deployment descriptor file with the required <marshalling-strategies> elements. Click Save . 
Example deployment descriptor file with document marshalling strategy for multiple documents <deployment-descriptor xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <persistence-unit>org.jbpm.domain</persistence-unit> <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit> <audit-mode>JPA</audit-mode> <persistence-mode>JPA</persistence-mode> <runtime-strategy>SINGLETON</runtime-strategy> <marshalling-strategies> <marshalling-strategy> <resolver>mvel</resolver> <identifier>new org.jbpm.document.marshalling.DocumentCollectionImplMarshallingStrategy(new org.jbpm.document.marshalling.DocumentMarshallingStrategy());</identifier> </marshalling-strategy> </marshalling-strategies> 14.4.1.1. Using a custom document marshalling strategy for a content management system (CMS) The document marshalling strategy for your project determines where documents are stored for use with forms and processes. The default document marshalling strategy in Red Hat Process Automation Manager is org.jbpm.document.marshalling.DocumentMarshallingStrategy . This strategy uses a DocumentStorageServiceImpl class that stores documents locally in your PROJECT_HOME /.docs folder. If you want to store form and process documents in a custom location, such as in a centralized content management system (CMS), add a custom document marshalling strategy to your project. You can set this document marshalling strategy in Business Central or in the kie-deployment-descriptor.xml file directly. Procedure Create a custom marshalling strategy .java file that includes an implementation of the org.kie.api.marshalling.ObjectMarshallingStrategy interface. This interface enables you to implement the variable persistence required for your custom document marshalling strategy. 
The following methods in this interface help you create your strategy: boolean accept(Object object) : Determines if the specified object can be marshalled by the strategy byte[] marshal(Context context, ObjectOutputStream os, Object object) : Marshals the specified object and returns the marshalled object as byte[] Object unmarshal(Context context, ObjectInputStream is, byte[] object, ClassLoader classloader) : Reads the object received as byte[] and returns the unmarshalled object void write(ObjectOutputStream os, Object object) : Same as the marshal method, provided for backward compatibility Object read(ObjectInputStream os) : Same as the unmarshal method, provided for backward compatibility The following code sample is an example ObjectMarshallingStrategy implementation for storing and retrieving data from a Content Management Interoperability Services (CMIS) system: Example implementation for storing and retrieving data from a CMIS system package org.jbpm.integration.cmis.impl; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.util.HashMap; import org.apache.chemistry.opencmis.client.api.Folder; import org.apache.chemistry.opencmis.client.api.Session; import org.apache.chemistry.opencmis.commons.data.ContentStream; import org.apache.commons.io.IOUtils; import org.drools.core.common.DroolsObjectInputStream; import org.jbpm.document.Document; import org.jbpm.integration.cmis.UpdateMode; import org.kie.api.marshalling.ObjectMarshallingStrategy; public class OpenCMISPlaceholderResolverStrategy extends OpenCMISSupport implements ObjectMarshallingStrategy { private String user; private String password; private String url; private String repository; private String contentUrl; private UpdateMode mode = UpdateMode.OVERRIDE; public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository) { this.user = user; this.password = password; this.url = url; this.repository = repository; } public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, UpdateMode mode) { this.user = user; this.password = password; this.url = url; this.repository = repository; this.mode = mode; } public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, String contentUrl) { this.user = user; this.password = password; this.url = url; this.repository = repository; this.contentUrl = contentUrl; } public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, String contentUrl, UpdateMode mode) { this.user = user; this.password = password; this.url = url; this.repository = repository; this.contentUrl = contentUrl; this.mode = mode; } public boolean accept(Object object) { if (object instanceof Document) { return true; } return false; } public byte[] marshal(Context context, ObjectOutputStream os, Object object) throws IOException { Document document = (Document) object; Session session = getRepositorySession(user, password, url, repository); try { if (document.getContent() != null) { String type = getType(document); if (document.getIdentifier() == null || document.getIdentifier().isEmpty()) { String location = getLocation(document); Folder parent = findFolderForPath(session, location); if (parent == null) { parent = createFolder(session, null, location); } org.apache.chemistry.opencmis.client.api.Document doc = 
createDocument(session, parent, document.getName(), type, document.getContent()); document.setIdentifier(doc.getId()); document.addAttribute("updated", "true"); } else { if (document.getContent() != null && "true".equals(document.getAttribute("updated"))) { org.apache.chemistry.opencmis.client.api.Document doc = updateDocument(session, document.getIdentifier(), type, document.getContent(), mode); document.setIdentifier(doc.getId()); document.addAttribute("updated", "false"); } } } ByteArrayOutputStream buff = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream( buff ); oos.writeUTF(document.getIdentifier()); oos.writeUTF(object.getClass().getCanonicalName()); oos.close(); return buff.toByteArray(); } finally { session.clear(); } } public Object unmarshal(Context context, ObjectInputStream ois, byte[] object, ClassLoader classloader) throws IOException, ClassNotFoundException { DroolsObjectInputStream is = new DroolsObjectInputStream( new ByteArrayInputStream( object ), classloader ); String objectId = is.readUTF(); String canonicalName = is.readUTF(); Session session = getRepositorySession(user, password, url, repository); try { org.apache.chemistry.opencmis.client.api.Document doc = (org.apache.chemistry.opencmis.client.api.Document) findObjectForId(session, objectId); Document document = (Document) Class.forName(canonicalName).newInstance(); document.setAttributes(new HashMap<String, String>()); document.setIdentifier(objectId); document.setName(doc.getName()); document.setLastModified(doc.getLastModificationDate().getTime()); document.setSize(doc.getContentStreamLength()); document.addAttribute("location", getFolderName(doc.getParents()) + getPathAsString(doc.getPaths())); if (doc.getContentStream() != null && contentUrl == null) { ContentStream stream = doc.getContentStream(); document.setContent(IOUtils.toByteArray(stream.getStream())); document.addAttribute("updated", "false"); document.addAttribute("type", stream.getMimeType()); } else { document.setLink(contentUrl + document.getIdentifier()); } return document; } catch(Exception e) { throw new RuntimeException("Cannot read document from CMIS", e); } finally { is.close(); session.clear(); } } public Context createContext() { return null; } // For backward compatibility with serialization mechanism public void write(ObjectOutputStream os, Object object) throws IOException { Document document = (Document) object; Session session = getRepositorySession(user, password, url, repository); try { if (document.getContent() != null) { String type = document.getAttribute("type"); if (document.getIdentifier() == null) { String location = document.getAttribute("location"); Folder parent = findFolderForPath(session, location); if (parent == null) { parent = createFolder(session, null, location); } org.apache.chemistry.opencmis.client.api.Document doc = createDocument(session, parent, document.getName(), type, document.getContent()); document.setIdentifier(doc.getId()); document.addAttribute("updated", "false"); } else { if (document.getContent() != null && "true".equals(document.getAttribute("updated"))) { org.apache.chemistry.opencmis.client.api.Document doc = updateDocument(session, document.getIdentifier(), type, document.getContent(), mode); document.setIdentifier(doc.getId()); document.addAttribute("updated", "false"); } } } ByteArrayOutputStream buff = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream( buff ); oos.writeUTF(document.getIdentifier()); 
oos.writeUTF(object.getClass().getCanonicalName()); oos.close(); } finally { session.clear(); } } public Object read(ObjectInputStream os) throws IOException, ClassNotFoundException { String objectId = os.readUTF(); String canonicalName = os.readUTF(); Session session = getRepositorySession(user, password, url, repository); try { org.apache.chemistry.opencmis.client.api.Document doc = (org.apache.chemistry.opencmis.client.api.Document) findObjectForId(session, objectId); Document document = (Document) Class.forName(canonicalName).newInstance(); document.setIdentifier(objectId); document.setName(doc.getName()); document.addAttribute("location", getFolderName(doc.getParents()) + getPathAsString(doc.getPaths())); if (doc.getContentStream() != null) { ContentStream stream = doc.getContentStream(); document.setContent(IOUtils.toByteArray(stream.getStream())); document.addAttribute("updated", "false"); document.addAttribute("type", stream.getMimeType()); } return document; } catch(Exception e) { throw new RuntimeException("Cannot read document from CMIS", e); } finally { session.clear(); } } } In Business Central, go to Menu Design Projects . Click the project name and click Settings . Figure 14.4. Settings tab Click Deployments Marshalling Strategies -> Add Marshalling Strategy . In the Name field, enter the identifier of the custom document marshalling strategy, such as org.jbpm.integration.cmis.impl.OpenCMISPlaceholderResolverStrategy in this example. Select the relevant option from the Resolver drop-down menu, such as Reflection in this example. Click Test to validate your deployment descriptor file. Click Deploy to build and deploy the updated project. Alternatively, if you are not using Business Central, you can navigate to PROJECT_HOME /src/main/resources/META_INF/kie-deployment-descriptor.xml (if applicable) and edit the deployment descriptor file with the required <marshalling-strategies> elements. Example deployment descriptor file with custom document marshalling strategy <deployment-descriptor xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <persistence-unit>org.jbpm.domain</persistence-unit> <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit> <audit-mode>JPA</audit-mode> <persistence-mode>JPA</persistence-mode> <runtime-strategy>SINGLETON</runtime-strategy> <marshalling-strategies> <marshalling-strategy> <resolver>reflection</resolver> <identifier> org.jbpm.integration.cmis.impl.OpenCMISPlaceholderResolverStrategy </identifier> </marshalling-strategy> </marshalling-strategies> To enable documents stored in a custom location to be attached to forms and processes, create a document variable in the relevant processes and map task inputs and outputs to that document variable in Business Central. 14.4.2. Creating a document variable in a business process After you set a document marshalling strategy, create a document variable in the related process to upload documents to a human task and for the document or documents to be visible in the Process Instances view in Business Central. Prerequisites You have set a document marshalling strategy as described in Section 14.4.1, "Setting the document marshalling strategy" . Procedure In Business Central, go to Menu Design Projects . Click the project name to open the asset view and click the business process name. Click the canvas and click on the right side of the window to open the Properties panel. 
Expand Process Data , click the add icon, and enter the following values: Name : document Custom Type : org.jbpm.document.Document for a single document or org.jbpm.document.DocumentCollection for multiple documents 14.4.3. Mapping task inputs and outputs to the document variable To view or modify the attachments in task forms, create assignments in the task inputs and outputs. Prerequisites You have a project that contains a business process asset that has at least one user task. Procedure In Business Central, go to Menu Design Projects . Click the project name to open the asset view and click the business process name. Click a user task and click on the right side of the window to open the Properties panel. Expand Implementation/Execution and, next to Assignments , click to open the Data I/O window. Next to Data Inputs and Assignments , click Add and enter the following values: Name : taskdoc_in Data Type : org.jbpm.document.Document for a single document or org.jbpm.document.DocumentCollection for multiple documents Source : document Next to Data Outputs and Assignments , click Add and enter the following values: Name : taskdoc_out Data Type : org.jbpm.document.Document for a single document or org.jbpm.document.DocumentCollection for multiple documents Target : document The Source and Target fields contain the name of the process variable that you created earlier. Click Save .
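Once the document variable and the task assignments are in place, a process can also be started from the KIE API with the document supplied programmatically instead of uploaded through a form. The following minimal sketch is not taken from the product documentation; it assumes the org.jbpm.document.service.impl.DocumentImpl helper class from the jbpm-document module, an existing KieSession, and the process variable named document created in the previous procedure:

import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import org.jbpm.document.Document;
import org.jbpm.document.service.impl.DocumentImpl; // assumed implementation class from jbpm-document
import org.kie.api.runtime.KieSession;

public class DocumentProcessStarter {

    // Starts a process instance and supplies the "document" process variable
    // that the taskdoc_in input assignment reads from.
    public static void startWithDocument(KieSession ksession, String processId, byte[] fileBytes) {
        Document doc = new DocumentImpl();   // any org.jbpm.document.Document implementation can be used
        doc.setName("contract.pdf");         // example file name (assumption)
        doc.setLastModified(new Date());
        doc.setSize(fileBytes.length);
        doc.setContent(fileBytes);           // this content is persisted by the configured marshalling strategy

        Map<String, Object> params = new HashMap<>();
        params.put("document", doc);         // key must match the process variable name

        ksession.startProcess(processId, params);
    }
}

A document passed this way is marshalled by the strategy configured in the deployment descriptor, so it should be visible in the task form and in the Process Instances view in the same way as a document uploaded through Business Central.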
[ "<deployment-descriptor xsi:schemaLocation=\"http://www.jboss.org/jbpm deployment-descriptor.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <persistence-unit>org.jbpm.domain</persistence-unit> <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit> <audit-mode>JPA</audit-mode> <persistence-mode>JPA</persistence-mode> <runtime-strategy>SINGLETON</runtime-strategy> <marshalling-strategies> <marshalling-strategy> <resolver>mvel</resolver> <identifier>new org.jbpm.document.marshalling.DocumentCollectionImplMarshallingStrategy(new org.jbpm.document.marshalling.DocumentMarshallingStrategy());</identifier> </marshalling-strategy> </marshalling-strategies>", "package org.jbpm.integration.cmis.impl; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.util.HashMap; import org.apache.chemistry.opencmis.client.api.Folder; import org.apache.chemistry.opencmis.client.api.Session; import org.apache.chemistry.opencmis.commons.data.ContentStream; import org.apache.commons.io.IOUtils; import org.drools.core.common.DroolsObjectInputStream; import org.jbpm.document.Document; import org.jbpm.integration.cmis.UpdateMode; import org.kie.api.marshalling.ObjectMarshallingStrategy; public class OpenCMISPlaceholderResolverStrategy extends OpenCMISSupport implements ObjectMarshallingStrategy { private String user; private String password; private String url; private String repository; private String contentUrl; private UpdateMode mode = UpdateMode.OVERRIDE; public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository) { this.user = user; this.password = password; this.url = url; this.repository = repository; } public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, UpdateMode mode) { this.user = user; this.password = password; this.url = url; this.repository = repository; this.mode = mode; } public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, String contentUrl) { this.user = user; this.password = password; this.url = url; this.repository = repository; this.contentUrl = contentUrl; } public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, String contentUrl, UpdateMode mode) { this.user = user; this.password = password; this.url = url; this.repository = repository; this.contentUrl = contentUrl; this.mode = mode; } public boolean accept(Object object) { if (object instanceof Document) { return true; } return false; } public byte[] marshal(Context context, ObjectOutputStream os, Object object) throws IOException { Document document = (Document) object; Session session = getRepositorySession(user, password, url, repository); try { if (document.getContent() != null) { String type = getType(document); if (document.getIdentifier() == null || document.getIdentifier().isEmpty()) { String location = getLocation(document); Folder parent = findFolderForPath(session, location); if (parent == null) { parent = createFolder(session, null, location); } org.apache.chemistry.opencmis.client.api.Document doc = createDocument(session, parent, document.getName(), type, document.getContent()); document.setIdentifier(doc.getId()); document.addAttribute(\"updated\", \"true\"); } else { if (document.getContent() != null && \"true\".equals(document.getAttribute(\"updated\"))) { 
org.apache.chemistry.opencmis.client.api.Document doc = updateDocument(session, document.getIdentifier(), type, document.getContent(), mode); document.setIdentifier(doc.getId()); document.addAttribute(\"updated\", \"false\"); } } } ByteArrayOutputStream buff = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream( buff ); oos.writeUTF(document.getIdentifier()); oos.writeUTF(object.getClass().getCanonicalName()); oos.close(); return buff.toByteArray(); } finally { session.clear(); } } public Object unmarshal(Context context, ObjectInputStream ois, byte[] object, ClassLoader classloader) throws IOException, ClassNotFoundException { DroolsObjectInputStream is = new DroolsObjectInputStream( new ByteArrayInputStream( object ), classloader ); String objectId = is.readUTF(); String canonicalName = is.readUTF(); Session session = getRepositorySession(user, password, url, repository); try { org.apache.chemistry.opencmis.client.api.Document doc = (org.apache.chemistry.opencmis.client.api.Document) findObjectForId(session, objectId); Document document = (Document) Class.forName(canonicalName).newInstance(); document.setAttributes(new HashMap<String, String>()); document.setIdentifier(objectId); document.setName(doc.getName()); document.setLastModified(doc.getLastModificationDate().getTime()); document.setSize(doc.getContentStreamLength()); document.addAttribute(\"location\", getFolderName(doc.getParents()) + getPathAsString(doc.getPaths())); if (doc.getContentStream() != null && contentUrl == null) { ContentStream stream = doc.getContentStream(); document.setContent(IOUtils.toByteArray(stream.getStream())); document.addAttribute(\"updated\", \"false\"); document.addAttribute(\"type\", stream.getMimeType()); } else { document.setLink(contentUrl + document.getIdentifier()); } return document; } catch(Exception e) { throw new RuntimeException(\"Cannot read document from CMIS\", e); } finally { is.close(); session.clear(); } } public Context createContext() { return null; } // For backward compatibility with previous serialization mechanism public void write(ObjectOutputStream os, Object object) throws IOException { Document document = (Document) object; Session session = getRepositorySession(user, password, url, repository); try { if (document.getContent() != null) { String type = document.getAttribute(\"type\"); if (document.getIdentifier() == null) { String location = document.getAttribute(\"location\"); Folder parent = findFolderForPath(session, location); if (parent == null) { parent = createFolder(session, null, location); } org.apache.chemistry.opencmis.client.api.Document doc = createDocument(session, parent, document.getName(), type, document.getContent()); document.setIdentifier(doc.getId()); document.addAttribute(\"updated\", \"false\"); } else { if (document.getContent() != null && \"true\".equals(document.getAttribute(\"updated\"))) { org.apache.chemistry.opencmis.client.api.Document doc = updateDocument(session, document.getIdentifier(), type, document.getContent(), mode); document.setIdentifier(doc.getId()); document.addAttribute(\"updated\", \"false\"); } } } ByteArrayOutputStream buff = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream( buff ); oos.writeUTF(document.getIdentifier()); oos.writeUTF(object.getClass().getCanonicalName()); oos.close(); } finally { session.clear(); } } public Object read(ObjectInputStream os) throws IOException, ClassNotFoundException { String objectId = os.readUTF(); String canonicalName = os.readUTF(); 
Session session = getRepositorySession(user, password, url, repository); try { org.apache.chemistry.opencmis.client.api.Document doc = (org.apache.chemistry.opencmis.client.api.Document) findObjectForId(session, objectId); Document document = (Document) Class.forName(canonicalName).newInstance(); document.setIdentifier(objectId); document.setName(doc.getName()); document.addAttribute(\"location\", getFolderName(doc.getParents()) + getPathAsString(doc.getPaths())); if (doc.getContentStream() != null) { ContentStream stream = doc.getContentStream(); document.setContent(IOUtils.toByteArray(stream.getStream())); document.addAttribute(\"updated\", \"false\"); document.addAttribute(\"type\", stream.getMimeType()); } return document; } catch(Exception e) { throw new RuntimeException(\"Cannot read document from CMIS\", e); } finally { session.clear(); } } }", "<deployment-descriptor xsi:schemaLocation=\"http://www.jboss.org/jbpm deployment-descriptor.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <persistence-unit>org.jbpm.domain</persistence-unit> <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit> <audit-mode>JPA</audit-mode> <persistence-mode>JPA</persistence-mode> <runtime-strategy>SINGLETON</runtime-strategy> <marshalling-strategies> <marshalling-strategy> <resolver>reflection</resolver> <identifier> org.jbpm.integration.cmis.impl.OpenCMISPlaceholderResolverStrategy </identifier> </marshalling-strategy> </marshalling-strategies>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/business-process-forms_business-processes
Chapter 9. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes
Chapter 9. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes Bring-Your-Own-Host (BYOH) allows for users to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users looking to mitigate major disruptions in the event that a Windows server goes offline. 9.1. Configuring a BYOH Windows instance Creating a BYOH Windows instance requires creating a config map in the Windows Machine Config Operator (WMCO) namespace. Prerequisites Any Windows instances that are to be attached to the cluster as a node must fulfill the following requirements: The instance must be on the same network as the Linux worker nodes in the cluster. Port 22 must be open and running an SSH server. The default shell for the SSH server must be the Windows Command shell , or cmd.exe . Port 10250 must be open for log collection. An administrator user is present with the private key used in the secret set as an authorized SSH key. If you are creating a BYOH Windows instance for an installer-provisioned infrastructure (IPI) AWS cluster, you must add a tag to the AWS instance that matches the spec.template.spec.value.tag value in the compute machine set for your worker nodes. For example, kubernetes.io/cluster/<cluster_id>: owned or kubernetes.io/cluster/<cluster_id>: shared . If you are creating a BYOH Windows instance on vSphere, communication with the internal API server must be enabled. The hostname of the instance must follow the RFC 1123 DNS label requirements, which include the following standards: Contains only lowercase alphanumeric characters or '-'. Starts with an alphanumeric character. Ends with an alphanumeric character. Note Windows instances deployed by the WMCO are configured with the containerd container runtime. Because the WMCO installs and manages the runtime, it is recommended that you not manually install containerd on nodes. Procedure Create a ConfigMap named windows-instances in the WMCO namespace that describes the Windows instances to be added. Note Format each entry in the config map's data section by using the address as the key while formatting the value as username=<username> . Example config map kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core 1 The address that the WMCO uses to reach the instance over SSH, either a DNS name or an IPv4 address. A DNS PTR record must exist for this address. It is recommended that you use a DNS name with your BYOH instance if your organization uses DHCP to assign IP addresses. If not, you need to update the windows-instances ConfigMap whenever the instance is assigned a new IP address. 2 The name of the administrator user created in the prerequisites. 9.2. Removing BYOH Windows instances You can remove BYOH instances attached to the cluster by deleting the instance's entry in the config map. Deleting an instance reverts that instance back to its state prior to adding to the cluster. Any logs and container runtime artifacts are not added to these instances. For an instance to be cleanly removed, it must be accessible with the current private key provided to WMCO. 
For example, to remove the 10.1.42.1 instance from the example, the config map would be changed to the following: kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core Deleting windows-instances is viewed as a request to deconstruct all Windows instances added as nodes.
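If you prefer not to open an editor, a single instance entry can also be removed from the config map with a JSON patch. The following command is a sketch rather than part of the documented procedure; it assumes the oc CLI is logged in to the cluster and uses the example address key 10.1.42.1 shown above:

oc patch configmap windows-instances \
  -n openshift-windows-machine-config-operator \
  --type=json \
  -p '[{"op": "remove", "path": "/data/10.1.42.1"}]'

The WMCO reconciles the updated config map in the same way as if the entry had been deleted in an editor, and the corresponding node is removed from the cluster.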
[ "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core", "kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/windows_container_support_for_openshift/byoh-windows-instance
8.181. rhnlib
8.181. rhnlib 8.181.1. RHBA-2013:1085 - rhnlib bug fix and enhancement update Updated rhnlib packages that fix one bug are now available. The rhnlib packages contain Python libraries developed specifically for interfacing with the Red Hat Network. Bug Fix BZ#949650 The RHN Proxy did not work properly if separated from a parent by a slow enough network. Consequently, users who attempted to download larger repodata files and RPMs experienced timeouts. This update changes both RHN Proxy and Red Hat Enterprise Linux RHN Client to allow all communications to obey a configured timeout value for connections. Users of rhnlib are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rhnlib
Chapter 10. Troubleshooting
Chapter 10. Troubleshooting This section describes resources for troubleshooting the Migration Toolkit for Containers (MTC). For known issues, see the MTC release notes . 10.1. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.11 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. 
Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. About MTC custom resources The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs): MigCluster (configuration, MTC cluster): Cluster definition MigStorage (configuration, MTC cluster): Storage definition MigPlan (configuration, MTC cluster): Migration plan The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs. Note Deleting a MigPlan CR deletes the associated MigMigration CRs. BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR. Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster: Backup CR #1 for Kubernetes objects Backup CR #2 for PV data Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster: Restore CR #1 (using Backup CR #2) for PV data Restore CR #2 (using Backup CR #1) for Kubernetes objects 10.2. MTC custom resource manifests Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests for migrating applications. 10.2.1. DirectImageMigration The DirectImageMigration CR copies images directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2 1 One or more namespaces containing images to be migrated. By default, the destination namespace has the same name as the source namespace. 2 Source namespace mapped to a destination namespace with a different name. 10.2.2. DirectImageStreamMigration The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace> 10.2.3. DirectVolumeMigration The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster. 
apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration 1 Set to true to create namespaces for the PVs on the destination cluster. 2 Set to true to delete DirectVolumeMigrationProgress CRs after migration. The default is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. 3 Update the cluster name if the destination cluster is not the host cluster. 4 Specify one or more PVCs to be migrated. 10.2.4. DirectVolumeMigrationProgress The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR. apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration 10.2.5. MigAnalytic The MigAnalytic CR collects the number of images, Kubernetes resources, and the persistent volume (PV) capacity from an associated MigPlan CR. You can configure the data that it collects. apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration 1 Optional: Returns the number of images. 2 Optional: Returns the number, kind, and API version of the Kubernetes resources. 3 Optional: Returns the PV capacity. 4 Returns a list of image names. The default is false so that the output is not excessively long. 5 Optional: Specify the maximum number of image names to return if listImages is true . 10.2.6. MigCluster The MigCluster CR defines a host, local, or remote cluster. apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: "1.0" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 # The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 # The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 # The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config 1 Update the cluster name if the migration-controller pod is not running on this cluster. 2 The migration-controller pod runs on this cluster if true . 3 Microsoft Azure only: Specify the resource group. 4 Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. 5 Set to true to disable SSL verification. 6 Set to true to validate the cluster. 7 Set to true to restart the Restic pods on the source cluster after the Stage pods are created. 
8 Remote cluster and direct image migration only: Specify the exposed secure registry path. 9 Remote cluster only: Specify the URL. 10 Remote cluster only: Specify the name of the Secret object. 10.2.7. MigHook The MigHook CR defines a migration hook that runs custom code at a specified stage of the migration. You can create up to four migration hooks. Each hook runs during a different phase of the migration. You can configure the hook name, runtime duration, a custom image, and the cluster where the hook will run. The migration phases and namespaces of the hooks are configured in the MigPlan CR. apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7 1 Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. 2 Specify the migration hook name, unless you specify the value of the generateName parameter. 3 Optional: Specify the maximum number of seconds that a hook can run. The default is 1800 . 4 The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. 5 Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . 6 Base64-encoded Ansible playbook. Required if custom is false . 7 Specify the cluster on which the hook will run. Valid values are source or destination . 10.2.8. MigMigration The MigMigration CR runs a MigPlan CR. You can configure a MigMigration CR to run a stage or incremental migration, to cancel a migration in progress, or to roll back a completed migration. apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration 1 Set to true to cancel a migration in progress. 2 Set to true to roll back a completed migration. 3 Set to true to run a stage migration. Data is copied incrementally and the pods on the source cluster are not stopped. 4 Set to true to stop the application during migration. The pods on the source cluster are scaled to 0 after the Backup stage. 5 Set to true to retain the labels and annotations applied during the migration. 6 Set to true to check the status of the migrated pods on the destination cluster and to return the names of pods that are not in a Running state. 10.2.9. MigPlan The MigPlan CR defines the parameters of a migration plan. You can configure destination namespaces, hook phases, and direct or indirect migration. Note By default, a destination namespace has the same name as the source namespace. If you configure a different destination namespace, you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges are copied during migration.
apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: "1.0" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12 1 The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. 2 Optional: You can specify up to four migration hooks. Each hook must run during a different migration phase. 3 Optional: Specify the namespace in which the hook will run. 4 Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. Valid values are PreBackup , PostBackup , PreRestore , and PostRestore . 5 Optional: Specify the name of the MigHook CR. 6 Optional: Specify the namespace of MigHook CR. 7 Optional: Specify a service account with cluster-admin privileges. 8 Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 9 Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 10 Specify one or more source namespaces. If you specify only the source namespace, the destination namespace is the same. 11 Specify the destination namespace if it is different from the source namespace. 12 The MigPlan CR is validated if true . 10.2.10. MigStorage The MigStorage CR describes the object storage for the replication repository. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Storage, Multi-Cloud Object Gateway, and generic S3-compatible cloud storage are supported. AWS and the snapshot copy method have additional parameters. apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: "1.0" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11 1 Specify the storage provider. 2 Snapshot copy method only: Specify the storage provider. 3 AWS only: Specify the bucket name. 4 AWS only: Specify the bucket region, for example, us-east-1 . 5 Specify the name of the Secret object that you created for the storage. 6 AWS only: If you are using the AWS Key Management Service, specify the unique identifier of the key. 7 AWS only: If you granted public access to the AWS bucket, specify the bucket URL. 8 AWS only: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . 
9 Snapshot copy method only: Specify the geographical region of the clusters. 10 Snapshot copy method only: Specify the name of the Secret object that you created for the storage. 11 Set to true to validate the cluster. 10.3. Logs and debugging tools This section describes logs and debugging tools that you can use for troubleshooting. 10.3.1. Viewing migration plan resources You can view migration plan resources to monitor a running migration or to troubleshoot a failed migration by using the MTC web console and the command line interface (CLI). Procedure In the MTC web console, click Migration Plans . Click the Migrations number to a migration plan to view the Migrations page. Click a migration to view the Migration details . Expand Migration resources to view the migration resources and their status in a tree view. Note To troubleshoot a failed migration, start with a high-level resource that has failed and then work down the resource tree towards the lower-level resources. Click the Options menu to a resource and select one of the following options: Copy oc describe command copies the command to your clipboard. Log in to the relevant cluster and then run the command. The conditions and events of the resource are displayed in YAML format. Copy oc logs command copies the command to your clipboard. Log in to the relevant cluster and then run the command. If the resource supports log filtering, a filtered log is displayed. View JSON displays the resource data in JSON format in a web browser. The data is the same as the output for the oc get <resource> command. 10.3.2. Viewing a migration plan log You can view an aggregated log for a migration plan. You use the MTC web console to copy a command to your clipboard and then run the command from the command line interface (CLI). The command displays the filtered logs of the following pods: Migration Controller Velero Restic Rsync Stunnel Registry Procedure In the MTC web console, click Migration Plans . Click the Migrations number to a migration plan. Click View logs . Click the Copy icon to copy the oc logs command to your clipboard. Log in to the relevant cluster and enter the command on the CLI. The aggregated log for the migration plan is displayed. 10.3.3. Using the migration log reader You can use the migration log reader to display a single filtered view of all the migration logs. Procedure Get the mig-log-reader pod: USD oc -n openshift-migration get pods | grep log Enter the following command to display a single migration log: USD oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1 1 The -c plain option displays the log without colors. 10.3.4. Accessing performance metrics The MigrationController custom resource (CR) records metrics and pulls them into on-cluster monitoring storage. You can query the metrics by using Prometheus Query Language (PromQL) to diagnose migration performance issues. All metrics are reset when the Migration Controller pod restarts. You can access the performance metrics and run queries by using the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Observe Metrics . Enter a PromQL query, select a time window to display, and click Run Queries . If your web browser does not display all the results, use the Prometheus console. 10.3.4.1. Provided metrics The MigrationController custom resource (CR) provides metrics for the MigMigration CR count and for its API requests. 10.3.4.1.1. 
cam_app_workload_migrations
This metric is a count of MigMigration CRs over time. It is useful for viewing alongside the mtc_client_request_count and mtc_client_request_elapsed metrics to collate API request information with migration status changes. This metric is included in Telemetry.

Table 10.1. cam_app_workload_migrations metric
Queryable label name | Sample label values | Label description
status | running, idle, failed, completed | Status of the MigMigration CR
type | stage, final | Type of the MigMigration CR

10.3.4.1.2. mtc_client_request_count
This metric is a cumulative count of Kubernetes API requests that MigrationController issued. It is not included in Telemetry.

Table 10.2. mtc_client_request_count metric
Queryable label name | Sample label values | Label description
cluster | https://migcluster-url:443 | Cluster that the request was issued against
component | MigPlan, MigCluster | Sub-controller API that issued request
function | (*ReconcileMigPlan).Reconcile | Function that the request was issued from
kind | SecretList, Deployment | Kubernetes kind the request was issued for

10.3.4.1.3. mtc_client_request_elapsed
This metric is a cumulative latency, in milliseconds, of Kubernetes API requests that MigrationController issued. It is not included in Telemetry.

Table 10.3. mtc_client_request_elapsed metric
Queryable label name | Sample label values | Label description
cluster | https://cluster-url.com:443 | Cluster that the request was issued against
component | migplan, migcluster | Sub-controller API that issued request
function | (*ReconcileMigPlan).Reconcile | Function that the request was issued from
kind | SecretList, Deployment | Kubernetes resource that the request was issued for

10.3.4.1.4. Useful queries
The table lists some helpful queries that can be used for monitoring performance.

Table 10.4. Useful queries
Query | Description
mtc_client_request_count | Number of API requests issued, sorted by request type
sum(mtc_client_request_count) | Total number of API requests issued
mtc_client_request_elapsed | API request latency, sorted by request type
sum(mtc_client_request_elapsed) | Total latency of API requests
sum(mtc_client_request_elapsed) / sum(mtc_client_request_count) | Average latency of API requests
mtc_client_request_elapsed / mtc_client_request_count | Average latency of API requests, sorted by request type
cam_app_workload_migrations{status="running"} * 100 | Count of running migrations, multiplied by 100 for easier viewing alongside request counts

10.3.5. Using the must-gather tool
You can collect logs, metrics, and information about MTC custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can collect data for a one-hour or a 24-hour period and view the data with the Prometheus console. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: To collect data for the past hour: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal .
To collect data for the past 24 hours: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 \ -- /usr/bin/gather_metrics_dump This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz . 10.3.6. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 10.3.7. Debugging a partial migration failure You can debug a partial migration failure warning message by using the Velero CLI to examine the Restore custom resource (CR) logs. A partial failure occurs when Velero encounters an issue that does not cause a migration to fail. For example, if a custom resource definition (CRD) is missing or if there is a discrepancy between CRD versions on the source and target clusters, the migration completes but the CR is not created on the target cluster. Velero logs the issue as a partial failure and then processes the rest of the objects in the Backup CR. Procedure Check the status of a MigMigration CR: USD oc get migmigration <migmigration> -o yaml Example output status: conditions: - category: Warn durable: true lastTransitionTime: "2021-01-26T20:48:40Z" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: "True" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: "2021-01-26T20:48:42Z" message: The migration has completed with warnings, please look at `Warn` conditions. 
reason: Completed status: "True" type: SucceededWithWarnings Check the status of the Restore CR by using the Velero describe command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore describe <restore> Example output Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource Check the Restore CR logs by using the Velero logs command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore logs <restore> Example output time="2021-01-26T20:48:37Z" level=info msg="Attempting to restore migration-example: migration-example" logSource="pkg/restore/restore.go:1107" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time="2021-01-26T20:48:37Z" level=info msg="error restoring migration-example: the server could not find the requested resource" logSource="pkg/restore/restore.go:1170" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf The Restore CR log error message, the server could not find the requested resource , indicates the cause of the partially failed migration. 10.3.8. Using MTC custom resources for troubleshooting You can check the following Migration Toolkit for Containers (MTC) custom resources (CRs) to troubleshoot a failed migration: MigCluster MigStorage MigPlan BackupStorageLocation The BackupStorageLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 VolumeSnapshotLocation The VolumeSnapshotLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 MigMigration Backup MTC changes the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup CR contains an openshift.io/orig-reclaim-policy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Restore Procedure List the MigMigration CRs in the openshift-migration namespace: USD oc get migmigration -n openshift-migration Example output NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s Inspect the MigMigration CR: USD oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration The output is similar to the following examples. MigMigration example output name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none> Velero backup CR #2 example output that describes the PV data apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: "2019-08-29T01:03:15Z" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: "87313" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: "2019-08-29T01:02:36Z" errors: 0 expiration: "2019-09-28T01:02:35Z" phase: Completed startTimestamp: "2019-08-29T01:02:35Z" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0 Velero restore CR #2 example output that describes the Kubernetes resources apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: "2019-08-28T00:09:49Z" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: "82329" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: "" phase: Completed validationErrors: null warnings: 15 10.4. Common issues and concerns This section describes common issues and concerns that can cause issues during migration. 10.4.1. Direct volume migration does not complete If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster. Migration Toolkit for Containers (MTC) migrates namespaces with all annotations to preserve security context constraints and scheduling requirements. 
During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state. You can identify and fix this issue by performing the following procedure. Procedure Check the status of the MigMigration CR: USD oc describe migmigration <pod> -n openshift-migration The output includes the following status message: Example output Some or all transfer pods are not running for more than 10 mins on destination cluster On the source cluster, obtain the details of a migrated namespace: USD oc get namespace <namespace> -o yaml 1 1 Specify the migrated namespace. On the target cluster, edit the migrated namespace: USD oc edit namespace <namespace> Add the missing openshift.io/node-selector annotations to the migrated namespace as in the following example: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "region=east" ... Run the migration plan again. 10.4.2. Error messages and resolutions This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes. 10.4.2.1. CA certificate error displayed when accessing the MTC console for the first time If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters. To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser. If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page. 10.4.2.2. OAuth timeout error in the MTC console If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, the causes are likely to be the following: Interrupted network access to the OAuth server Interrupted network access to the OpenShift Container Platform console Proxy configuration that blocks access to the oauth-authorization-server URL. See MTC console inaccessible because of OAuth timeout error for details. To determine the cause of the timeout: Inspect the MTC console web page with a browser web inspector. Check the Migration UI pod log for errors. 10.4.2.3. Certificate signed by unknown authority error If you use a self-signed certificate to secure a cluster or a replication repository for the Migration Toolkit for Containers (MTC), certificate verification might fail with the following error message: Certificate signed by unknown authority . You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository. Procedure Download a CA certificate from a remote endpoint and save it as a CA bundle file: USD echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2 1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . 2 Specify the name of the CA bundle file. 10.4.2.4. 
Backup storage location errors in the Velero pod log If a Velero Backup custom resource contains a reference to a backup storage location (BSL) that does not exist, the Velero pod log might display the following error messages: USD oc logs <Velero_Pod> -n openshift-migration Example output level=error msg="Error checking repository for stale locks" error="error getting backup storage location: BackupStorageLocation.velero.io \"ts-dpa-1\" not found" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259" You can ignore these error messages. A missing BSL cannot cause a migration to fail. 10.4.2.5. Pod volume backup timeout error in the Velero pod log If a migration fails because Restic times out, the following error is displayed in the Velero pod log. level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1 The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click Migration Toolkit for Containers Operator . In the MigrationController tab, click migration-controller . In the YAML tab, update the following parameter value: spec: restic_timeout: 1h 1 1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s . Click Save . 10.4.2.6. Restic verification errors in the MigMigration custom resource If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the MigMigration CR. Example output status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: "True" type: ResticVerifyErrors 2 1 The error message identifies the Restore CR name. 2 ResticVerifyErrors is a general error warning type that includes verification errors. Note A data verification error does not cause the migration process to fail. You can check the Restore CR to identify the source of the data verification error. Procedure Log in to the target cluster. View the Restore CR: USD oc describe <registry-example-migration-rvwcm> -n openshift-migration The output identifies the persistent volume with PodVolumeRestore errors. Example output status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration View the PodVolumeRestore CR: USD oc describe <migration-example-rvwcm-98t49> The output identifies the Restic pod that logged the errors. Example output completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 ... resticPod: <restic-nr2v5> View the Restic pod log to locate the errors: USD oc logs -f <restic-nr2v5> 10.4.2.7. 
Restic permission error when migrating from NFS storage with root_squash enabled If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody and does not have permission to perform the migration. The following error is displayed in the Restic pod log. Example output backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the MigrationController CR manifest. Procedure Create a supplemental group for Restic on the NFS storage. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the restic_supplemental_groups parameter to the MigrationController CR manifest on the source and target clusters: spec: restic_supplemental_groups: <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 10.5. Rolling back a migration You can roll back a migration by using the MTC web console or the CLI. You can also roll back a migration manually . 10.5.1. Rolling back a migration by using the MTC web console You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure In the MTC web console, click Migration plans . Click the Options menu beside a migration plan and select Rollback under Migration . Click Rollback and wait for rollback to complete. In the migration plan details, Rollback succeeded is displayed. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volume is correctly provisioned. 10.5.2. Rolling back a migration from the command line interface You can roll back a migration by creating a MigMigration custom resource (CR) from the command line interface. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. 
If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure Create a MigMigration CR based on the following example: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: ... rollback: true ... migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF 1 Specify the name of the associated MigPlan CR. In the MTC web console, verify that the migrated project resources have been removed from the target cluster. Verify that the migrated project resources are present in the source cluster and that the application is running. 10.5.3. Rolling back a migration manually You can roll back a failed migration manually by deleting the stage pods and unquiescing the application. If you run the same migration plan successfully, the resources from the failed migration are deleted automatically. Note The following resources remain in the migrated namespaces after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. Procedure Delete the stage pods on all clusters: USD oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1 1 Namespaces specified in the MigPlan CR. Unquiesce the application on the source cluster by scaling the replicas to their premigration number: USD oc scale deployment <deployment> --replicas=<premigration_replicas> The migration.openshift.io/preQuiesceReplicas annotation in the Deployment CR displays the premigration number of replicas: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" migration.openshift.io/preQuiesceReplicas: "1" Verify that the application pods are running on the source cluster: USD oc get pod -n <namespace> Additional resources Deleting Operators from a cluster using the web console
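The manual rollback steps above can be collapsed into a short shell sequence. The following is a minimal sketch rather than a command from this guide: the namespace and deployment names are placeholders, and the -o name option is added so that the pod names returned by oc get can be passed to oc delete.
# Remove the stage pods left behind by the failed migration (placeholder namespace).
oc delete $(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace> -o name) -n <namespace>
# Read the premigration replica count that MTC recorded on the Deployment.
oc get deployment <deployment> -n <namespace> -o yaml | grep preQuiesceReplicas
# Unquiesce the application by restoring that replica count, then confirm the pods are running.
oc scale deployment <deployment> -n <namespace> --replicas=<premigration_replicas>
oc get pod -n <namespace>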
[ "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. 
exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore 
openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "oc logs <Velero_Pod> -n openshift-migration", "level=error msg=\"Error checking repository for stale locks\" 
error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migration_toolkit_for_containers/troubleshooting-mtc
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/updating_openshift_data_foundation/making-open-source-more-inclusive
Chapter 1. Understanding DCN
Chapter 1. Understanding DCN Note An upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to RHOSP 17.1 is not supported for Distributed Compute Node (DCN) deployments. Distributed compute node (DCN) architecture is for edge use cases that allow compute and storage nodes to be deployed remotely while sharing a common centralized control plane. DCN architecture allows you to position workloads strategically closer to your operational needs for higher performance. The central location can consist of any roles; however, at a minimum, it requires three controllers. Compute nodes can exist at the edge, as well as at the central location. DCN architecture is a hub and spoke routed network deployment. DCN is comparable to a spine and leaf deployment for routed provisioning and control plane networking with Red Hat OpenStack Platform director. The hub is the central site with core routers and a datacenter gateway (DC-GW). The spoke is the remote edge, or leaf. Edge locations do not have controllers, making them architecturally different from traditional deployments of Red Hat OpenStack Platform: Control plane services run remotely, at the central location. Pacemaker is not installed. The Block Storage service (cinder) runs in active/active mode. Etcd is deployed as a distributed lock manager (DLM). 1.1. Required software for distributed compute node architecture The following table shows the software and minimum versions required to deploy Red Hat OpenStack Platform in a distributed compute node (DCN) architecture: Platform Version Optional Red Hat Enterprise Linux 8 No Red Hat OpenStack Platform 16.1 No Red Hat Ceph Storage 4 Yes 1.2. Multistack design When you deploy Red Hat OpenStack Platform (RHOSP) with a DCN design, you use Red Hat director's capabilities for multiple stack deployment and management to deploy each site as a distinct stack. Managing a DCN architecture as a single stack is unsupported, unless the deployment is an upgrade from Red Hat OpenStack Platform 13. There are no supported methods to split an existing stack; however, you can add stacks to a pre-existing deployment. For more information, see Section A.3, "Migrating to a multistack deployment". The central location is a traditional stack deployment of RHOSP; however, you are not required to deploy Compute nodes or Red Hat Ceph storage with the central stack. With DCN, you deploy each location as a distinct availability zone (AZ). 1.3. DCN storage You can deploy each edge site either without storage or with Ceph on hyperconverged nodes. The storage you deploy is dedicated to the site you deploy it on. DCN architecture uses Glance multistore. For edge sites deployed without storage, additional tooling is available so that you can cache and store images in the Compute service (nova) cache. Caching glance images in nova provides faster boot times for instances by avoiding the process of downloading images across a WAN link. For more information, see Chapter 10, Precaching glance images into nova. 1.4. DCN edge With Distributed Compute Node architecture, the central location is deployed with the control nodes that manage the edge locations. When you then deploy an edge location, you deploy only compute nodes, making edge sites architecturally different from traditional deployments of Red Hat OpenStack Platform. At edge locations: Control plane services run remotely at the central location. Pacemaker does not run at DCN sites. The Block Storage service (cinder) runs in active/active mode.
Etcd is deployed as a distributed lock manager (DLM).
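To make the multistack design concrete, the following is a minimal sketch of deploying the central location and one edge site as two separate director stacks. It is illustrative only: the stack names, roles files, and environment file paths are placeholders and are not taken from this guide.
# Deploy the central location as its own stack.
openstack overcloud deploy --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -r ~/central/central_roles.yaml \
  -e ~/central/central-parameters.yaml
# Deploy an edge site as a second, independent stack with its own availability zone settings.
openstack overcloud deploy --stack dcn0 \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -r ~/dcn0/dcn0_roles.yaml \
  -e ~/dcn0/dcn0-parameters.yaml
Each stack can then be managed and updated independently, and each edge location is exposed to workloads as its own availability zone.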
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/distributed_compute_node_and_storage_deployment/understanding_dcn
Part IV. Activating and opening the subscriptions service
Part IV. Activating and opening the subscriptions service After you complete the steps to set up the environment for the subscriptions service, you can go to console.redhat.com to request the subscriptions service activation. After activation and the initial data collection cycle, you can open the subscriptions service and begin viewing usage data. Do these steps To find out if the subscriptions service activation is needed, see the following information: Determining whether manual activation of the subscriptions service is necessary To log in to console.redhat.com and activate the subscriptions service, see the following information: Activating the subscriptions service To log in to console.redhat.com and open the subscriptions service after activation, see the following information: Logging in to the subscriptions service If you cannot activate or log in to the subscriptions service, see the following information: Verifying access to the subscriptions service
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/assembly-activating-opening-subscriptionwatch
4.5. Controlling LVM Device Scans with Filters
4.5. Controlling LVM Device Scans with Filters At startup, the vgscan command is run to scan the block devices on the system looking for LVM labels, to determine which of them are physical volumes and to read the metadata and build up a list of volume groups. The names of the physical volumes are stored in the LVM cache file of each node in the system, /etc/lvm/cache/.cache . Subsequent commands may read that file to avoid rescanning. You can control which devices LVM scans by setting up filters in the lvm.conf configuration file. The filters in the lvm.conf file consist of a series of simple regular expressions that get applied to the device names in the /dev directory to decide whether to accept or reject each block device found. The following examples show the use of filters to control which devices LVM scans. Note that some of these examples do not necessarily represent recommended practice, as the regular expressions are matched freely against the complete pathname. For example, a/loop/ is equivalent to a/.*loop.*/ and would match /dev/solooperation/lvol1 . The following filter adds all discovered devices, which is the default behavior as there is no filter configured in the configuration file: The following filter removes the cdrom device in order to avoid delays if the drive contains no media: The following filter adds all loop devices and removes all other block devices: The following filter adds all loop and IDE devices and removes all other block devices: The following filter adds just partition 8 on the first IDE drive and removes all other block devices: Note When the lvmetad daemon is running, the filter = setting in the /etc/lvm/lvm.conf file does not apply when you execute the pvscan --cache device command. To filter devices, you need to use the global_filter = setting. Devices that fail the global filter are not opened by LVM and are never scanned. You may need to use a global filter, for example, when you use LVM devices in VMs and you do not want the contents of the devices in the VMs to be scanned by the physical host. For more information on the lvm.conf file, see Appendix B, The LVM Configuration Files and the lvm.conf (5) man page.
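Before committing a filter to lvm.conf, you can preview its effect from the command line. The following is a small sketch that uses the --config option accepted by LVM commands; the device pattern is only an example.
# Preview which physical volumes remain visible with a candidate filter
# that accepts /dev/sdb* and rejects all other devices.
pvs --config 'devices { filter = [ "a|^/dev/sdb.*|", "r|.*|" ] }'
# When the lvmetad daemon is running, place the same expressions in the
# global_filter setting in /etc/lvm/lvm.conf instead, for example:
#   global_filter = [ "a|^/dev/sdb.*|", "r|.*|" ]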
[ "filter = [ \"a/.*/\" ]", "filter = [ \"r|/dev/cdrom|\" ]", "filter = [ \"a/loop.*/\", \"r/.*/\" ]", "filter =[ \"a|loop.*|\", \"a|/dev/hd.*|\", \"r|.*|\" ]", "filter = [ \"a|^/dev/hda8USD|\", \"r/.*/\" ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_filters
Chapter 2. Image Registry Operator in Red Hat OpenShift Service on AWS
Chapter 2. Image Registry Operator in Red Hat OpenShift Service on AWS 2.1. Image Registry on Red Hat OpenShift Service on AWS The Image Registry Operator installs a single instance of the OpenShift image registry, and manages all registry configuration, including setting up registry storage. After the control plane deploys in the management cluster, the Operator creates a default configs.imageregistry.operator.openshift.io resource instance based on configuration detected in the cluster. If insufficient information is available to define a complete configs.imageregistry.operator.openshift.io resource, the incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location as well. All configuration and workload resources for the registry reside in that namespace.
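To see what the Operator has generated on a given cluster, you can inspect the default configuration resource and the registry workloads it manages. The following commands are a minimal sketch and assume the default resource instance is named cluster, which is not stated explicitly in this section.
# Show the image registry configuration created and managed by the Operator.
oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
# List the Operator and registry workloads in the openshift-image-registry namespace.
oc get pods -n openshift-image-registry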
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/registry/configuring-registry-operator
Chapter 6. Creating Windows machine sets
Chapter 6. Creating Windows machine sets 6.1. Creating a Windows machine set on AWS You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. Use one of the following aws commands, as appropriate for your Windows Server release, to query valid AMI images: Example Windows Server 2022 command USD aws ec2 describe-images --region <aws_region_name> --filters "Name=name,Values=Windows_Server-2022*English*Core*Base*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table Example Windows Server 2019 command USD aws ec2 describe-images --region <aws_region_name> --filters "Name=name,Values=Windows_Server-2019*English*Core*Base*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table where: <aws_region_name> Specifies the name of your AWS region. For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 6.1.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.17 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.17 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. 
Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.1.2. Sample YAML for a Windows MachineSet object on AWS This sample YAML defines a Windows MachineSet object running on Amazon Web Services (AWS) that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api 1 3 5 10 13 14 15 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone. 
7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the AMI ID of a supported Windows image with a container runtime installed. Note For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 11 Specify the AWS zone, like us-east-1a . 12 Specify the AWS region, like us-east-1 . 16 Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 6.1.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. In disconnected environments, the image specified in the MachineSet custom resource (CR) must have the OpenSSH server v0.0.1.0 installed . Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.1.4. Additional resources Overview of machine management 6.2. Creating a Windows machine set on Azure You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.2.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.17 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.17 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. 
You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.2.2. Sample YAML for a Windows MachineSet object on Azure This sample YAML defines a Windows MachineSet object running on Microsoft Azure that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: "" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: "<zone>" 16 1 3 5 11 12 13 15 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the Windows compute machine set name. Windows machine names on Azure cannot be more than 15 characters long. Therefore, the compute machine set name cannot be more than 9 characters long, due to the way machine names are generated from it. 
7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify a WindowsServer image offering that defines the 2019-Datacenter-with-Containers SKU. 10 Specify the Azure region, like centralus . 14 Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 16 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 6.2.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. In disconnected environments, the image specified in the MachineSet custom resource (CR) must have the OpenSSH server v0.0.1.0 installed . Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.2.4. Additional resources Overview of machine management 6.3. Creating a Windows machine set on GCP You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.3.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.17 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.17 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. 
You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.3.2. Sample YAML for a Windows MachineSet object on GCP This sample YAML file defines a Windows MachineSet object running on Google Cloud Platform (GCP) that the Windows Machine Config Operator (WMCO) can use. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone_suffix> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <windows_server_image> 9 sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n1-standard-4 networkInterfaces: - network: <infrastructure_id>-network 10 subnetwork: <infrastructure_id>-worker-subnet projectID: <project_id> 11 region: <region> 12 serviceAccounts: - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: windows-user-data 13 zone: <zone> 14 1 3 5 10 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone suffix (such as a ). 7 Configure the machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the full path to an image of a supported version of Windows Server. 
11 Specify the GCP project that this cluster was created in. 12 Specify the GCP region, such as us-central1 . 13 Created by the WMCO when it configures the first Windows machine. After that, the windows-user-data is available for all subsequent machine sets to consume. 14 Specify the zone within the chosen region, such as us-central1-a . 6.3.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.3.4. 
Additional resources Overview of machine management 6.4. Creating a Windows MachineSet object on Nutanix You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. You added a new DNS entry for the internal API server URL, api-int.<cluster_name>.<base_domain> , that points to the external API server URL, api.<cluster_name>.<base_domain> . This can be a CNAME or an additional A record. 6.4.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.17 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.17 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. 
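To make the MachineHealthCheck behavior described above more concrete, the following is a minimal, illustrative sketch; the resource name, the machine set label value, the timeouts, and the maxUnhealthy threshold are assumptions chosen for the example rather than values prescribed by this document.
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: windows-worker-healthcheck  # example name (assumption)
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      # match the machines created by your Windows compute machine set
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone>
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s
  - type: Ready
    status: "Unknown"
    timeout: 300s
  maxUnhealthy: 40%  # pause automated remediation if too many machines are unhealthy at once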
In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.4.2. Sample YAML for a Windows MachineSet object on Nutanix This sample YAML defines a Windows MachineSet object running on Nutanix that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 9 categories: null cluster: 10 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 11 image: 12 name: <image_id> type: name kind: NutanixMachineProviderConfig 13 memorySize: 16Gi 14 project: type: "" subnets: 15 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 16 userDataSecret: name: windows-user-data 17 vcpuSockets: 4 18 vcpusPerSocket: 1 19 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone. 7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.17. 10 Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 11 Specifies the secret name for the cluster. Do not change this value. 12 Specifies the image to use. Use an image from an existing default compute machine set for the cluster. 13 Specifies the cloud provider platform type. Do not change this value. 14 Specifies the amount of memory for the cluster in Gi. 15 Specifies a subnet configuration. In this example, the subnet type is uuid , so there is a uuid stanza. 16 Specifies the size of the system disk in Gi. 
17 Specifies the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 18 Specifies the number of vCPU sockets. 19 Specifies the number of vCPUs per socket. 6.4.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.4.4. Additional resources Overview of machine management . 
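To add or remove Windows machines after creating a compute machine set with the procedure above, you can change the replicas value on the MachineSet resource; the following is a minimal sketch in which the machine set name is a placeholder:
# Scale the Windows compute machine set to two machines
$ oc scale machineset <windows_machineset_name> --replicas=2 -n openshift-machine-api
# Alternatively, set the replicas field directly
$ oc patch machineset <windows_machineset_name> -n openshift-machine-api --type=merge -p '{"spec":{"replicas":2}}'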
6.5. Creating a Windows machine set on vSphere You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.5.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.17 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.17 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. 
Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.5.2. Preparing your vSphere environment for Windows container workloads You must prepare your vSphere environment for Windows container workloads by creating the vSphere Windows VM golden image and enabling communication with the internal API server for the WMCO. 6.5.2.1. Creating the vSphere Windows VM golden image Create a vSphere Windows virtual machine (VM) golden image. Prerequisites You have created a private/public key pair, which is used to configure key-based authentication in the OpenSSH server. The private key must also be configured in the Windows Machine Config Operator (WMCO) namespace. This is required to allow the WMCO to communicate with the Windows VM. See the "Configuring a secret for the Windows Machine Config Operator" section for more details. Note You must use Microsoft PowerShell commands in several cases when creating your Windows VM. PowerShell commands in this guide are distinguished by the PS C:\> prefix. Procedure Select a compatible Windows Server version. Currently, the Windows Machine Config Operator (WMCO) stable version supports Windows Server 2022 Long-Term Servicing Channel with the OS-level container networking patch KB5012637 . Create a new VM in the vSphere client using the VM golden image with a compatible Windows Server version. For more information about compatible versions, see the "Windows Machine Config Operator prerequisites" section of the "Red Hat OpenShift support for Windows Containers release notes." Important The virtual hardware version for your VM must meet the infrastructure requirements for OpenShift Container Platform. For more information, see the "VMware vSphere infrastructure requirements" section in the OpenShift Container Platform documentation. Also, you can refer to VMware's documentation on virtual machine hardware versions . Install and configure VMware Tools version 11.0.6 or greater on the Windows VM. See the VMware Tools documentation for more information. After installing VMware Tools on the Windows VM, verify the following: The C:\ProgramData\VMware\VMware Tools\tools.conf file exists with the following entry: exclude-nics= If the tools.conf file does not exist, create it with the exclude-nics option uncommented and set as an empty value. This entry ensures the cloned vNIC generated on the Windows VM by the hybrid-overlay is not ignored. The Windows VM has a valid IP address in vCenter: C:\> ipconfig The VMTools Windows service is running: PS C:\> Get-Service -Name VMTools | Select Status, StartType Install and configure the OpenSSH Server on the Windows VM. See Microsoft's documentation on installing OpenSSH for more details. Set up SSH access for an administrative user. See Microsoft's documentation on the Administrative user to do this. Important The public key used in the instructions must correspond to the private key you create later in the WMCO namespace that holds your secret. See the "Configuring a secret for the Windows Machine Config Operator" section for more details. 
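For reference, a minimal sketch of the key pair and secret mentioned in the note above follows; the key type, file path, secret name cloud-private-key, and namespace openshift-windows-machine-config-operator are assumptions based on a typical WMCO setup, so confirm them against the "Configuring a secret for the Windows Machine Config Operator" section.
# Generate a key pair with no passphrase (key type and path are assumptions for this example)
$ ssh-keygen -t rsa -b 4096 -N "" -f ${HOME}/.ssh/windows-node
# Store the private key in the WMCO namespace so the Operator can reach the Windows VM over SSH
$ oc create secret generic cloud-private-key \
    --from-file=private-key.pem=${HOME}/.ssh/windows-node \
    -n openshift-windows-machine-config-operator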
You must create a new firewall rule in the Windows VM that allows incoming connections for container logs. Run the following PowerShell command to create the firewall rule on TCP port 10250: PS C:\> New-NetFirewallRule -DisplayName "ContainerLogsPort" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow Clone the Windows VM so it is a reusable image. Follow the VMware documentation on how to clone an existing virtual machine for more details. In the cloned Windows VM, run the Windows Sysprep tool : C:\> C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1 1 Specify the path to your unattend.xml file. Note There is a limit on how many times you can run the sysprep command on a Windows image. Consult Microsoft's documentation for more information. An example unattend.xml is provided, which maintains all the changes needed for the WMCO. You must modify this example; it cannot be used directly. Example 6.1. Example unattend.xml <?xml version="1.0" encoding="UTF-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="specialize"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Security-SPP-UX" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-SQMApi" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass="oobeSystem"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> 
<TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend> 1 Specify the ComputerName , which must follow the Kubernetes' names specification . These specifications also apply to Guest OS customization performed on the resulting template while creating new VMs. 2 Disable the automatic logon to avoid the security issue of leaving an open terminal with Administrator privileges at boot. This is the default value and must not be changed. 3 Replace the MyPassword placeholder with the password for the Administrator account. This prevents the built-in Administrator account from having a blank password by default. Follow Microsoft's best practices for choosing a password . After the Sysprep tool has completed, the Windows VM will power off. You must not use or power on this VM anymore. Convert the Windows VM to a template in vCenter . 6.5.2.1.1. Additional resources Configuring a secret for the Windows Machine Config Operator VMware vSphere infrastructure requirements 6.5.2.2. Enabling communication with the internal API server for the WMCO on vSphere The Windows Machine Config Operator (WMCO) downloads the Ignition config files from the internal API server endpoint. You must enable communication with the internal API server so that your Windows virtual machine (VM) can download the Ignition config files, and the kubelet on the configured VM can only communicate with the internal API server. Prerequisites You have installed a cluster on vSphere. Procedure Add a new DNS entry for api-int.<cluster_name>.<base_domain> that points to the external API server URL api.<cluster_name>.<base_domain> . This can be a CNAME or an additional A record. Note The external API endpoint was already created as part of the initial cluster installation on vSphere. 6.5.3. Sample YAML for a Windows MachineSet object on vSphere This sample YAML defines a Windows MachineSet object running on VMware vSphere that the Windows Machine Config Operator (WMCO) can react upon. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the Windows compute machine set name. The compute machine set name cannot be more than 9 characters long, due to the way machine names are generated in vSphere. 7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the size of the vSphere Virtual Machine Disk (VMDK). Note This parameter does not set the size of the Windows partition. You can resize the Windows partition by using the unattend.xml file or by creating the vSphere Windows virtual machine (VM) golden image with the required disk size. 10 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other Linux compute machines reside in the cluster. 11 Specify the full path of the Windows vSphere VM template to use, such as golden-images/windows-server-template . The name must be unique. Important Do not specify the original VM template. The VM template must remain off and must be cloned for new Windows machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. 12 The windows-user-data is created by the WMCO when the first Windows machine is configured. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 13 Specify the vCenter data center to deploy the compute machine set on. 14 Specify the vCenter datastore to deploy the compute machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Optional: Specify the vSphere resource pool for your Windows VMs. 17 Specify the vCenter server IP or fully qualified domain name. 6.5.4. 
Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. In disconnected environments, the image specified in the MachineSet custom resource (CR) must have the OpenSSH server v0.0.1.0 installed . Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.5.5. Additional resources Overview of machine management
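After any of the Windows compute machine sets in this chapter scales up and the WMCO configures the resulting instance, you can confirm the outcome; a minimal sketch follows. The machine.openshift.io/os-id=Windows label comes from the machine set templates shown above, and kubernetes.io/os=windows is the standard Kubernetes node label, assumed here.
# List the Windows machines created by the compute machine sets
$ oc get machines -n openshift-machine-api -l machine.openshift.io/os-id=Windows
# Confirm that the configured instances registered as Windows nodes
$ oc get nodes -l kubernetes.io/os=windows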
[ "aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2022*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2019*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m 
agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: \"\" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: \"<zone>\" 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone_suffix> 2 namespace: openshift-machine-api spec: replicas: 1 
selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <windows_server_image> 9 sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n1-standard-4 networkInterfaces: - network: <infrastructure_id>-network 10 subnetwork: <infrastructure_id>-worker-subnet projectID: <project_id> 11 region: <region> 12 serviceAccounts: - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: windows-user-data 13 zone: <zone> 14", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 9 categories: null cluster: 10 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 11 image: 12 name: <image_id> type: name kind: NutanixMachineProviderConfig 13 memorySize: 16Gi 14 project: type: \"\" subnets: 15 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 16 userDataSecret: name: windows-user-data 17 vcpuSockets: 4 18 vcpusPerSocket: 1 19", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "exclude-nics=", "C:\\> ipconfig", "PS C:\\> Get-Service -Name VMTools | Select Status, StartType", "PS C:\\> New-NetFirewallRule -DisplayName \"ContainerLogsPort\" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow", "C:\\> C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <unattend xmlns=\"urn:schemas-microsoft-com:unattend\"> <settings pass=\"specialize\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-International-Core\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Security-SPP-UX\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" 
versionScope=\"nonSxS\"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-SQMApi\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass=\"oobeSystem\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m 
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/windows_container_support_for_openshift/creating-windows-machine-sets
Chapter 12. Prometheus and Grafana metrics under Red Hat Quay
Chapter 12. Prometheus and Grafana metrics under Red Hat Quay Red Hat Quay exports a Prometheus - and Grafana-compatible endpoint on each instance to allow for easy monitoring and alerting. 12.1. Exposing the Prometheus endpoint 12.1.1. Standalone Red Hat Quay When using podman run to start the Quay container, expose the metrics port 9091 : The metrics will now be available: USD curl quay.example.com:9091/metrics See Monitoring Quay with Prometheus and Grafana for details on configuring Prometheus and Grafana to monitor Quay repository counts. 12.1.2. Red Hat Quay Operator Determine the cluster IP for the quay-metrics service: USD oc get services -n quay-enterprise NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.61.161 <none> 80/TCP,8089/TCP 18h example-registry-clair-postgres ClusterIP 172.30.122.136 <none> 5432/TCP 18h example-registry-quay-app ClusterIP 172.30.72.79 <none> 443/TCP,80/TCP,8081/TCP,55443/TCP 18h example-registry-quay-config-editor ClusterIP 172.30.185.61 <none> 80/TCP 18h example-registry-quay-database ClusterIP 172.30.114.192 <none> 5432/TCP 18h example-registry-quay-metrics ClusterIP 172.30.37.76 <none> 9091/TCP 18h example-registry-quay-redis ClusterIP 172.30.157.248 <none> 6379/TCP 18h Connect to your cluster and access the metrics using the cluster IP and port for the quay-metrics service: USD oc debug node/master-0 sh-4.4# curl 172.30.37.76:9091/metrics # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. # TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile="0"} 4.0447e-05 go_gc_duration_seconds{quantile="0.25"} 6.2203e-05 ... 12.1.3. Setting up Prometheus to consume metrics Prometheus needs a way to access all Red Hat Quay instances running in a cluster. In the typical setup, this is done by listing all the Red Hat Quay instances in a single named DNS entry, which is then given to Prometheus. 12.1.4. DNS configuration under Kubernetes A simple Kubernetes service can be configured to provide the DNS entry for Prometheus. 12.1.5. DNS configuration for a manual cluster SkyDNS is a simple solution for managing this DNS record when not using Kubernetes. SkyDNS can run on an etcd cluster. Entries for each Red Hat Quay instance in the cluster can be added and removed in the etcd store. SkyDNS will regularly read them from there and update the list of Quay instances in the DNS record accordingly. 12.2. Introduction to metrics Red Hat Quay provides metrics to help monitor the registry, including metrics for general registry usage, uploads, downloads, garbage collection, and authentication. 12.2.1. General registry statistics General registry statistics can indicate how large the registry has grown. 
Metric name Description quay_user_rows Number of users in the database quay_robot_rows Number of robot accounts in the database quay_org_rows Number of organizations in the database quay_repository_rows Number of repositories in the database quay_security_scanning_unscanned_images_remaining_total Number of images that are not scanned by the latest security scanner Sample metrics output # HELP quay_user_rows number of users in the database # TYPE quay_user_rows gauge quay_user_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 3 # HELP quay_robot_rows number of robot accounts in the database # TYPE quay_robot_rows gauge quay_robot_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 2 # HELP quay_org_rows number of organizations in the database # TYPE quay_org_rows gauge quay_org_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 2 # HELP quay_repository_rows number of repositories in the database # TYPE quay_repository_rows gauge quay_repository_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 4 # HELP quay_security_scanning_unscanned_images_remaining number of images that are not scanned by the latest security scanner # TYPE quay_security_scanning_unscanned_images_remaining gauge quay_security_scanning_unscanned_images_remaining{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 5 12.2.2. Queue items The queue items metrics provide information on the multiple queues used by Quay for managing work. Metric name Description quay_queue_items_available Number of items in a specific queue quay_queue_items_locked Number of items that are running quay_queue_items_available_unlocked Number of items that are waiting to be processed Metric labels queue_name: The name of the queue. One of: exportactionlogs: Queued requests to export action logs. These logs are then processed and put in storage. A link is then sent to the requester via email. namespacegc: Queued namespaces to be garbage collected notification: Queue for repository notifications to be sent out repositorygc: Queued repositories to be garbage collected secscanv4: Notification queue specific for Clair V4 dockerfilebuild: Queue for Quay docker builds imagestoragereplication: Queued blobs to be replicated across multiple storages chunk_cleanup: Queued blob segments that need to be deleted. This is only used by some storage implementations, for example, Swift. For example, the queue labeled repositorygc contains the repositories marked for deletion by the repository garbage collection worker. For metrics with a queue_name label of repositorygc : quay_queue_items_locked is the number of repositories currently being deleted. quay_queue_items_available_unlocked is the number of repositories waiting to get processed by the worker. Sample metrics output # HELP quay_queue_items_available number of queue items that have not expired # TYPE quay_queue_items_available gauge quay_queue_items_available{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0 ...
# HELP quay_queue_items_available_unlocked number of queue items that have not expired and are not locked # TYPE quay_queue_items_available_unlocked gauge quay_queue_items_available_unlocked{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0 ... # HELP quay_queue_items_locked number of queue items that have been acquired # TYPE quay_queue_items_locked gauge quay_queue_items_locked{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0 12.2.3. Garbage collection metrics These metrics show you how many resources have been removed from garbage collection (gc). They show many times the gc workers have run and how many namespaces, repositories, and blobs were removed. Metric name Description quay_gc_iterations_total Number of iterations by the GCWorker quay_gc_namespaces_purged_total Number of namespaces purged by the NamespaceGCWorker quay_gc_repos_purged_total Number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker quay_gc_storage_blobs_deleted_total Number of storage blobs deleted Sample metrics output # TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189714e+09 ... # HELP quay_gc_iterations_total number of iterations by the GCWorker # TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189433e+09 ... # HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker # TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 .... # TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.631782319018925e+09 ... # HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker # TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189059e+09 ... # HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted # TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... 12.2.3.1. Multipart uploads metrics The multipart uploads metrics show the number of blobs uploads to storage (S3, Rados, GoogleCloudStorage, RHOCS). These can help identify issues when Quay is unable to correctly upload blobs to storage. 
Metric name Description quay_multipart_uploads_started_total Number of multipart uploads to Quay storage that started quay_multipart_uploads_completed_total Number of multipart uploads to Quay storage that completed Sample metrics output # TYPE quay_multipart_uploads_completed_created gauge quay_multipart_uploads_completed_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823308284895e+09 ... # HELP quay_multipart_uploads_completed_total number of multipart uploads to Quay storage that completed # TYPE quay_multipart_uploads_completed_total counter quay_multipart_uploads_completed_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 # TYPE quay_multipart_uploads_started_created gauge quay_multipart_uploads_started_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823308284352e+09 ... # HELP quay_multipart_uploads_started_total number of multipart uploads to Quay storage that started # TYPE quay_multipart_uploads_started_total counter quay_multipart_uploads_started_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... 12.2.4. Image push / pull metrics A number of metrics are available related to pushing and pulling images. 12.2.4.1. Image pulls total Metric name Description quay_registry_image_pulls_total The number of images downloaded from the registry. Metric labels protocol: the registry protocol used (should always be v2) ref: ref used to pull - tag, manifest status: http return code of the request 12.2.4.2. Image bytes pulled Metric name Description quay_registry_image_pulled_estimated_bytes_total The number of bytes downloaded from the registry Metric labels protocol: the registry protocol used (should always be v2) 12.2.4.3. Image pushes total Metric name Description quay_registry_image_pushes_total The number of images uploaded from the registry. Metric labels protocol: the registry protocol used (should always be v2) pstatus: http return code of the request pmedia_type: the uploaded manifest type 12.2.4.4. Image bytes pushed Metric name Description quay_registry_image_pushed_bytes_total The number of bytes uploaded to the registry Sample metrics output # HELP quay_registry_image_pushed_bytes_total number of bytes pushed to the registry # TYPE quay_registry_image_pushed_bytes_total counter quay_registry_image_pushed_bytes_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application"} 0 ... 12.2.5. Authentication metrics The authentication metrics provide the number of authentication requests, labeled by type and whether it succeeded or not. For example, this metric could be used to monitor failed basic authentication requests. Metric name Description quay_authentication_attempts_total Number of authentication attempts across the registry and API Metric labels auth_kind: The type of auth used, including: basic oauth credentials success: true or false Sample metrics output # TYPE quay_authentication_attempts_created gauge quay_authentication_attempts_created{auth_kind="basic",host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application",success="True"} 1.6317843039374158e+09 ... 
# HELP quay_authentication_attempts_total number of authentication attempts across the registry and API # TYPE quay_authentication_attempts_total counter quay_authentication_attempts_total{auth_kind="basic",host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application",success="True"} 2 ...
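The following sketch ties sections 12.1.3 and 12.1.4 together: a headless Kubernetes Service that publishes one DNS A record per Red Hat Quay pod, and a Prometheus scrape configuration fragment that resolves that name. The Service name, namespace, and pod selector label are assumptions for illustration only; adjust them to match your deployment.

Example headless Service for Prometheus DNS discovery (sketch)

apiVersion: v1
kind: Service
metadata:
  name: quay-metrics-dns            # assumed name; this becomes the DNS entry Prometheus resolves
  namespace: quay-enterprise
spec:
  clusterIP: None                   # headless: DNS returns one A record per matching pod
  selector:
    quay-component: quay-app        # assumed label; match the labels on your Quay pods
  ports:
    - name: metrics
      port: 9091

Example Prometheus scrape configuration fragment (sketch)

scrape_configs:
  - job_name: quay
    dns_sd_configs:
      - names:
          - quay-metrics-dns.quay-enterprise.svc.cluster.local
        type: A
        port: 9091                  # the metrics port exposed earlier in this chapter

Prometheus re-resolves the DNS name on each refresh, so Quay instances added to or removed from the Service are picked up automatically.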
[ "sudo podman run -d --rm -p 80:8080 -p 443:8443 -p 9091:9091 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.10.9", "curl quay.example.com:9091/metrics", "oc get services -n quay-enterprise NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.61.161 <none> 80/TCP,8089/TCP 18h example-registry-clair-postgres ClusterIP 172.30.122.136 <none> 5432/TCP 18h example-registry-quay-app ClusterIP 172.30.72.79 <none> 443/TCP,80/TCP,8081/TCP,55443/TCP 18h example-registry-quay-config-editor ClusterIP 172.30.185.61 <none> 80/TCP 18h example-registry-quay-database ClusterIP 172.30.114.192 <none> 5432/TCP 18h example-registry-quay-metrics ClusterIP 172.30.37.76 <none> 9091/TCP 18h example-registry-quay-redis ClusterIP 172.30.157.248 <none> 6379/TCP 18h", "oc debug node/master-0 sh-4.4# curl 172.30.37.76:9091/metrics HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile=\"0\"} 4.0447e-05 go_gc_duration_seconds{quantile=\"0.25\"} 6.2203e-05", "HELP quay_user_rows number of users in the database TYPE quay_user_rows gauge quay_user_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 3 HELP quay_robot_rows number of robot accounts in the database TYPE quay_robot_rows gauge quay_robot_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 2 HELP quay_org_rows number of organizations in the database TYPE quay_org_rows gauge quay_org_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 2 HELP quay_repository_rows number of repositories in the database TYPE quay_repository_rows gauge quay_repository_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 4 HELP quay_security_scanning_unscanned_images_remaining number of images that are not scanned by the latest security scanner TYPE quay_security_scanning_unscanned_images_remaining gauge quay_security_scanning_unscanned_images_remaining{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 5", "HELP quay_queue_items_available number of queue items that have not expired TYPE quay_queue_items_available gauge quay_queue_items_available{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0 HELP quay_queue_items_available_unlocked number of queue items that have not expired and are not locked TYPE quay_queue_items_available_unlocked gauge quay_queue_items_available_unlocked{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0 HELP quay_queue_items_locked number of queue items that have been acquired TYPE quay_queue_items_locked gauge quay_queue_items_locked{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0", "TYPE quay_gc_iterations_created gauge 
quay_gc_iterations_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189714e+09 HELP quay_gc_iterations_total number of iterations by the GCWorker TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189433e+09 HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 . TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.631782319018925e+09 HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189059e+09 HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0", "TYPE quay_multipart_uploads_completed_created gauge quay_multipart_uploads_completed_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823308284895e+09 HELP quay_multipart_uploads_completed_total number of multipart uploads to Quay storage that completed TYPE quay_multipart_uploads_completed_total counter quay_multipart_uploads_completed_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_multipart_uploads_started_created gauge quay_multipart_uploads_started_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823308284352e+09 HELP quay_multipart_uploads_started_total number of multipart uploads to Quay storage that started TYPE quay_multipart_uploads_started_total counter quay_multipart_uploads_started_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0", "HELP quay_registry_image_pushed_bytes_total number of bytes pushed to the registry TYPE quay_registry_image_pushed_bytes_total counter quay_registry_image_pushed_bytes_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\"} 0", "TYPE quay_authentication_attempts_created gauge 
quay_authentication_attempts_created{auth_kind=\"basic\",host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\",success=\"True\"} 1.6317843039374158e+09 HELP quay_authentication_attempts_total number of authentication attempts across the registry and API TYPE quay_authentication_attempts_total counter quay_authentication_attempts_total{auth_kind=\"basic\",host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\",success=\"True\"} 2" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/prometheus-metrics-under-quay-enterprise
Appendix B. Using Red Hat Enterprise Linux packages
Appendix B. Using Red Hat Enterprise Linux packages This section describes how to use software delivered as RPM packages for Red Hat Enterprise Linux. To ensure the RPM packages for this product are available, you must first register your system . B.1. Overview A component such as a library or server often has multiple packages associated with it. You do not have to install them all. You can install only the ones you need. The primary package typically has the simplest name, without additional qualifiers. This package provides all the required interfaces for using the component at program run time. Packages with names ending in -devel contain headers for C and C++ libraries. These are required at compile time to build programs that depend on this package. Packages with names ending in -docs contain documentation and example programs for the component. For more information about using RPM packages, see one of the following resources: Red Hat Enterprise Linux 7 - Installing and managing software Red Hat Enterprise Linux 8 - Managing software packages B.2. Searching for packages To search for packages, use the yum search command. The search results include package names, which you can use as the value for <package> in the other commands listed in this section. USD yum search <keyword>... B.3. Installing packages To install packages, use the yum install command. USD sudo yum install <package>... B.4. Querying package information To list the packages installed in your system, use the rpm -qa command. USD rpm -qa To get information about a particular package, use the rpm -qi command. USD rpm -qi <package> To list all the files associated with a package, use the rpm -ql command. USD rpm -ql <package>
[ "yum search <keyword>", "sudo yum install <package>", "rpm -qa", "rpm -qi <package>", "rpm -ql <package>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_ruby_client/using_red_hat_enterprise_linux_packages
Chapter 7. LVM Administration with the LVM GUI
Chapter 7. LVM Administration with the LVM GUI In addition to the Command Line Interface (CLI), LVM provides a Graphical User Interface (GUI) that you can use to configure LVM logical volumes. You can bring up this utility by typing system-config-lvm . The LVM chapter of the Red Hat Enterprise Linux Deployment Guide provides step-by-step instructions for configuring an LVM logical volume using this utility. In addition, the LVM GUI is available as part of the Conga management interface. For information on using the LVM GUI with Conga, see the online help for Conga.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/LVM_GUI
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/authorization_of_web_endpoints/making-open-source-more-inclusive
4.4. Adding users and groups to an Image Builder blueprint in the web console interface
4.4. Adding users and groups to an Image Builder blueprint in the web console interface Adding customizations such as users and groups to blueprints in the web console interface is currently not possible. To work around this limitation, use the Terminal tab in web console to use the command-line interface (CLI) workflow. Prerequisites A blueprint must exist. A CLI text editor such as vim , nano , or emacs must be installed. To install them: Procedure 1. Find out the name of the blueprint: Open the Image Builder ( Image builder ) tab on the left in the RHEL 7 web console to see the name of the blueprint. 2. Navigate to the CLI in web console: Open the system administration tab on the left, then select the last item Terminal from the list on the left. 3. Enter the super-user (root) mode: Provide your credentials when asked. Note that the terminal does not reuse your credentials you entered when logging into the web console. A new shell with root privileges starts in your home directory. 4. Export the blueprint to a file: 5. Edit the file BLUEPRINT-NAME .toml with a CLI text editor of your choice and add the users and groups. Important RHEL 7 web console does not have any built-in feature to edit text files on the system, so the use of a CLI text editor is required for this step. i. For every user to be added, add this block to the file: Replace PASSWORD-HASH with the actual password hash. To generate the hash, use a command such as this: Replace ssh-rsa (...) key-name with the actual public key. Replace the other placeholders with suitable values. Leave out any of the lines as needed, only the user name is required. ii. For every user group to be added, add this block to the file: iii. Increase the version number. iv. Save the file and close the editor. 6. Import the blueprint back into Image Builder: Note that you must supply the file name including the .toml extension, while in other commands you use only the name of the blueprint. 7. To verify that the contents uploaded to Image Builder match your edits, list the contents of blueprint: Check if the version matches what you put in the file and if your customizations are present. Important The Image Builder plug-in for RHEL 7 web console does not show any information that could be used to verify that the changes have been applied, unless you edited also the packages included in the blueprint. 8. Exit the privileged shell: 9. Open the Image Builder (Image builder) tab on the left and refresh the page, in all browsers and all tabs where it was opened. This prevents state cached in the loaded page from accidentally reverting your changes. Additional information Section 3.6, " Image Builder blueprint format " Section 3.3, " Editing an Image Builder blueprint with command-line interface "
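As an illustration of the edits described in steps 5.i through 5.iii, a minimal edited blueprint might look like the following sketch. The blueprint name, package, user, group, and version values are assumptions for illustration only, and the password value is a placeholder rather than a real hash.

Example BLUEPRINT-NAME.toml after editing (sketch)

name = "example-blueprint"
description = "Base image with an added user and group"
version = "0.0.2"                    # version increased as described in step 5.iii

[[packages]]
name = "bash"
version = "*"

[[customizations.user]]
name = "admin"                       # only the name field is required
description = "Administrator account"
password = "PASSWORD-HASH"           # placeholder; generate a real hash as shown above
groups = ["users", "wheel"]

[[customizations.group]]
name = "widget"
gid = 1130

After saving the file, continue with step 6 to push the edited blueprint back into Image Builder.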
[ "yum install editor-name", "sudo bash", "composer-cli blueprints save BLUEPRINT-NAME", "[[customization.user]] name = \" USER-NAME \" description = \" USER-DESCRIPTION \" password = \" PASSWORD-HASH \" key = \" ssh-rsa (...) key-name \" home = \"/home/ USER-NAME /\" shell = \" /usr/bin/bash \" groups = [ \"users\", \"wheel\" ] uid = NUMBER gid = NUMBER", "python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'", "[[customizations.group]] name = \" GROUP-NAME \" gid = NUMBER", "composer-cli blueprints push BLUEPRINT-NAME.toml", "composer-cli blueprints show BLUEPRINT-NAME", "exit" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter4-section_4
probe::vm.munmap
probe::vm.munmap Name probe::vm.munmap - Fires when a munmap is requested Synopsis vm.munmap Values length the length of the memory segment address the requested address name name of the probe point Context The process calling munmap.
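A brief usage sketch: the following SystemTap script prints the calling process together with the address and length values documented above each time the probe fires. The script and its output format are illustrative only.

# munmap.stp -- run with: stap munmap.stp
probe vm.munmap {
  printf("%s (pid %d) munmap: %d bytes at 0x%x\n", execname(), pid(), length, address)
}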
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-munmap
Chapter 15. Managing security context constraints
Chapter 15. Managing security context constraints In OpenShift Container Platform, you can use security context constraints (SCCs) to control permissions for the pods in your cluster. Default SCCs are created during installation and when you install some Operators or other components. As a cluster administrator, you can also create your own SCCs by using the OpenShift CLI ( oc ). Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . 15.1. About security context constraints Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. Security context constraints allow an administrator to control: Whether a pod can run privileged containers with the allowPrivilegedContainer flag Whether a pod is constrained with the allowPrivilegeEscalation flag The capabilities that a container can request The use of host directories as volumes The SELinux context of the container The container user ID The use of host namespaces and networking The allocation of an FSGroup that owns the pod volumes The configuration of allowable supplemental groups Whether a container requires write access to its root file system The usage of volume types The configuration of allowable seccomp profiles Important Do not set the openshift.io/run-level label on any namespaces in OpenShift Container Platform. This label is for use by internal OpenShift Container Platform components to manage the startup of major API groups, such as the Kubernetes API server and OpenShift API server. If the openshift.io/run-level label is set, no SCCs are applied to pods in that namespace, causing any workloads running in that namespace to be highly privileged. 15.1.1. Default security context constraints The cluster contains several default security context constraints (SCCs) as described in the table below. Additional SCCs might be installed when you install Operators or other components to OpenShift Container Platform. Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . Table 15.1. Default security context constraints Security context constraint Description anyuid Provides all features of the restricted SCC, but allows users to run with any UID and any GID. hostaccess Allows access to all host namespaces but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning This SCC allows host access to namespaces, file systems, and PIDs. It should only be used by trusted pods. Grant with caution. 
hostmount-anyuid Provides all the features of the restricted SCC, but allows host mounts and running as any UID and any GID on the system. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. hostnetwork Allows using host networking and host ports but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning If additional workloads are run on control plane hosts, use caution when providing access to hostnetwork . A workload that runs hostnetwork on a control plane host is effectively root on the cluster and must be trusted accordingly. hostnetwork-v2 Like the hostnetwork SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. node-exporter Used for the Prometheus node exporter. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. nonroot Provides all features of the restricted SCC, but allows users to run with any non-root UID. The user must specify the UID or it must be specified in the manifest of the container runtime. nonroot-v2 Like the nonroot SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. privileged Allows access to all privileged and host features and the ability to run as any user, any group, any FSGroup, and with any SELinux context. Warning This is the most relaxed SCC and should be used only for cluster administration. Grant with caution. The privileged SCC allows: Users to run privileged pods Pods to mount host directories as volumes Pods to run as any user Pods to run with any MCS label Pods to use the host's IPC namespace Pods to use the host's PID namespace Pods to use any FSGroup Pods to use any supplemental group Pods to use any seccomp profiles Pods to request any capabilities Note Setting privileged: true in the pod specification does not necessarily select the privileged SCC. The SCC that has allowPrivilegedContainer: true and has the highest prioritization will be chosen if the user has the permissions to use it. restricted Denies access to all host features and requires pods to be run with a UID, and SELinux context that are allocated to the namespace. The restricted SCC: Ensures that pods cannot run as privileged Ensures that pods cannot mount host directory volumes Requires that a pod is run as a user in a pre-allocated range of UIDs Requires that a pod is run with a pre-allocated MCS label Requires that a pod is run with a preallocated FSGroup Allows pods to use any supplemental group In clusters that were upgraded from OpenShift Container Platform 4.10 or earlier, this SCC is available for use by any authenticated user. The restricted SCC is no longer available to users of new OpenShift Container Platform 4.11 or later installations, unless the access is explicitly granted. restricted-v2 Like the restricted SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. 
This is the most restrictive SCC provided by a new installation and will be used by default for authenticated users. Note The restricted-v2 SCC is the most restrictive of the SCCs that is included by default with the system. However, you can create a custom SCC that is even more restrictive. For example, you can create an SCC that restricts readOnlyRootFilesystem to true . 15.1.2. Security context constraints settings Security context constraints (SCCs) are composed of settings and strategies that control the security features a pod has access to. These settings fall into three categories: Category Description Controlled by a boolean Fields of this type default to the most restrictive value. For example, AllowPrivilegedContainer is always set to false if unspecified. Controlled by an allowable set Fields of this type are checked against the set to ensure their value is allowed. Controlled by a strategy Items that have a strategy to generate a value provide: A mechanism to generate the value, and A mechanism to ensure that a specified value falls into the set of allowable values. CRI-O has the following default list of capabilities that are allowed for each container of a pod: CHOWN DAC_OVERRIDE FSETID FOWNER SETGID SETUID SETPCAP NET_BIND_SERVICE KILL The containers use the capabilities from this default list, but pod manifest authors can alter the list by requesting additional capabilities or removing some of the default behaviors. Use the allowedCapabilities , defaultAddCapabilities , and requiredDropCapabilities parameters to control such requests from the pods. With these parameters you can specify which capabilities can be requested, which ones must be added to each container, and which ones must be forbidden, or dropped, from each container. Note You can drop all capabilites from containers by setting the requiredDropCapabilities parameter to ALL . This is what the restricted-v2 SCC does. 15.1.3. Security context constraints strategies RunAsUser MustRunAs - Requires a runAsUser to be configured. Uses the configured runAsUser as the default. Validates against the configured runAsUser . Example MustRunAs snippet ... runAsUser: type: MustRunAs uid: <id> ... MustRunAsRange - Requires minimum and maximum values to be defined if not using pre-allocated values. Uses the minimum as the default. Validates against the entire allowable range. Example MustRunAsRange snippet ... runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue> ... MustRunAsNonRoot - Requires that the pod be submitted with a non-zero runAsUser or have the USER directive defined in the image. No default provided. Example MustRunAsNonRoot snippet ... runAsUser: type: MustRunAsNonRoot ... RunAsAny - No default provided. Allows any runAsUser to be specified. Example RunAsAny snippet ... runAsUser: type: RunAsAny ... SELinuxContext MustRunAs - Requires seLinuxOptions to be configured if not using pre-allocated values. Uses seLinuxOptions as the default. Validates against seLinuxOptions . RunAsAny - No default provided. Allows any seLinuxOptions to be specified. SupplementalGroups MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against all ranges. RunAsAny - No default provided. Allows any supplementalGroups to be specified. FSGroup MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. 
Validates against the first ID in the first range. RunAsAny - No default provided. Allows any fsGroup ID to be specified. 15.1.4. Controlling volumes The usage of specific volume types can be controlled by setting the volumes field of the SCC. The allowable values of this field correspond to the volume sources that are defined when creating a volume: awsElasticBlockStore azureDisk azureFile cephFS cinder configMap csi downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk ephemeral gitRepo glusterfs hostPath iscsi nfs persistentVolumeClaim photonPersistentDisk portworxVolume projected quobyte rbd scaleIO secret storageos vsphereVolume * (A special value to allow the use of all volume types.) none (A special value to disallow the use of all volumes types. Exists only for backwards compatibility.) The recommended minimum set of allowed volumes for new SCCs are configMap , downwardAPI , emptyDir , persistentVolumeClaim , secret , and projected . Note This list of allowable volume types is not exhaustive because new types are added with each release of OpenShift Container Platform. Note For backwards compatibility, the usage of allowHostDirVolumePlugin overrides settings in the volumes field. For example, if allowHostDirVolumePlugin is set to false but allowed in the volumes field, then the hostPath value will be removed from volumes . 15.1.5. Admission control Admission control with SCCs allows for control over the creation of resources based on the capabilities granted to a user. In terms of the SCCs, this means that an admission controller can inspect the user information made available in the context to retrieve an appropriate set of SCCs. Doing so ensures the pod is authorized to make requests about its operating environment or to generate a set of constraints to apply to the pod. The set of SCCs that admission uses to authorize a pod are determined by the user identity and groups that the user belongs to. Additionally, if the pod specifies a service account, the set of allowable SCCs includes any constraints accessible to the service account. Note When you create a workload resource, such as deployment, only the service account is used to find the SCCs and admit the pods when they are created. Admission uses the following approach to create the final security context for the pod: Retrieve all SCCs available for use. Generate field values for security context settings that were not specified on the request. Validate the final settings against the available constraints. If a matching set of constraints is found, then the pod is accepted. If the request cannot be matched to an SCC, the pod is rejected. A pod must validate every field against the SCC. The following are examples for just two of the fields that must be validated: Note These examples are in the context of a strategy using the pre-allocated values. An FSGroup SCC strategy of MustRunAs If the pod defines a fsGroup ID, then that ID must equal the default fsGroup ID. Otherwise, the pod is not validated by that SCC and the SCC is evaluated. If the SecurityContextConstraints.fsGroup field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.fsGroup , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. 
A SupplementalGroups SCC strategy of MustRunAs If the pod specification defines one or more supplementalGroups IDs, then the pod's IDs must equal one of the IDs in the namespace's openshift.io/sa.scc.supplemental-groups annotation. Otherwise, the pod is not validated by that SCC and the SCC is evaluated. If the SecurityContextConstraints.supplementalGroups field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.supplementalGroups , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. 15.1.6. Security context constraints prioritization Security context constraints (SCCs) have a priority field that affects the ordering when attempting to validate a request by the admission controller. A priority value of 0 is the lowest possible priority. A nil priority is considered a 0 , or lowest, priority. Higher priority SCCs are moved to the front of the set when sorting. When the complete set of available SCCs is determined, the SCCs are ordered in the following manner: The highest priority SCCs are ordered first. If the priorities are equal, the SCCs are sorted from most restrictive to least restrictive. If both the priorities and restrictions are equal, the SCCs are sorted by name. By default, the anyuid SCC granted to cluster administrators is given priority in their SCC set. This allows cluster administrators to run pods as any user by specifying RunAsUser in the pod's SecurityContext . 15.2. About pre-allocated security context constraints values The admission controller is aware of certain conditions in the security context constraints (SCCs) that trigger it to look up pre-allocated values from a namespace and populate the SCC before processing the pod. Each SCC strategy is evaluated independently of other strategies, with the pre-allocated values, where allowed, for each policy aggregated with pod specification values to make the final values for the various IDs defined in the running pod. The following SCCs cause the admission controller to look for pre-allocated values when no ranges are defined in the pod specification: A RunAsUser strategy of MustRunAsRange with no minimum or maximum set. Admission looks for the openshift.io/sa.scc.uid-range annotation to populate range fields. An SELinuxContext strategy of MustRunAs with no level set. Admission looks for the openshift.io/sa.scc.mcs annotation to populate the level. A FSGroup strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. A SupplementalGroups strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. During the generation phase, the security context provider uses default values for any parameter values that are not specifically set in the pod. Default values are based on the selected strategy: RunAsAny and MustRunAsNonRoot strategies do not provide default values. If the pod needs a parameter value, such as a group ID, you must define the value in the pod specification. MustRunAs (single value) strategies provide a default value that is always used. For example, for group IDs, even if the pod specification defines its own ID value, the namespace's default parameter value also appears in the pod's groups. MustRunAsRange and MustRunAs (range-based) strategies provide the minimum value of the range. As with a single value MustRunAs strategy, the namespace's default parameter value appears in the running pod. 
If a range-based strategy is configurable with multiple ranges, it provides the minimum value of the first configured range. Note FSGroup and SupplementalGroups strategies fall back to the openshift.io/sa.scc.uid-range annotation if the openshift.io/sa.scc.supplemental-groups annotation does not exist on the namespace. If neither exists, the SCC is not created. Note By default, the annotation-based FSGroup strategy configures itself with a single range based on the minimum value for the annotation. For example, if your annotation reads 1/3 , the FSGroup strategy configures itself with a minimum and maximum value of 1 . If you want to allow more groups to be accepted for the FSGroup field, you can configure a custom SCC that does not use the annotation. Note The openshift.io/sa.scc.supplemental-groups annotation accepts a comma-delimited list of blocks in the format of <start>/<length or <start>-<end> . The openshift.io/sa.scc.uid-range annotation accepts only a single block. 15.3. Example security context constraints The following examples show the security context constraints (SCC) format and annotations: Annotated privileged SCC allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*' 1 A list of capabilities that a pod can request. An empty list means that none of capabilities can be requested while the special symbol * allows any capabilities. 2 A list of additional capabilities that are added to any pod. 3 The FSGroup strategy, which dictates the allowable values for the security context. 4 The groups that can access this SCC. 5 A list of capabilities to drop from a pod. Or, specify ALL to drop all capabilities. 6 The runAsUser strategy type, which dictates the allowable values for the security context. 7 The seLinuxContext strategy type, which dictates the allowable values for the security context. 8 The supplementalGroups strategy, which dictates the allowable supplemental groups for the security context. 9 The users who can access this SCC. 10 The allowable volume types for the security context. In the example, * allows the use of all volume types. The users and groups fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted-v2 SCC. 
Without explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 When a container or pod does not request a user ID under which it should be run, the effective UID depends on the SCC that emits this pod. Because the restricted-v2 SCC is granted to all authenticated users by default, it will be available to all users and service accounts and used in most cases. The restricted-v2 SCC uses MustRunAsRange strategy for constraining and defaulting the possible values of the securityContext.runAsUser field. The admission plugin will look for the openshift.io/sa.scc.uid-range annotation on the current project to populate range fields, as it does not provide this range. In the end, a container will have runAsUser equal to the first value of the range that is hard to predict because every project has different ranges. With explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 A container or pod that requests a specific user ID will be accepted by OpenShift Container Platform only when a service account or a user is granted access to a SCC that allows such a user ID. The SCC can allow arbitrary IDs, an ID that falls into a range, or the exact user ID specific to the request. This configuration is valid for SELinux, fsGroup, and Supplemental Groups. 15.4. Creating security context constraints If the default security context constraints (SCCs) do not satisfy your application workload requirements, you can create a custom SCC by using the OpenShift CLI ( oc ). Important Creating and modifying your own SCCs are advanced operations that might cause instability to your cluster. If you have questions about using your own SCCs, contact Red Hat Support. For information about contacting Red Hat support, see Getting support . Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with the cluster-admin role. Procedure Define the SCC in a YAML file named scc-admin.yaml : kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group Optionally, you can drop specific capabilities for an SCC by setting the requiredDropCapabilities field with the desired values. Any specified capabilities are dropped from the container. To drop all capabilities, specify ALL . For example, to create an SCC that drops the KILL , MKNOD , and SYS_CHROOT capabilities, add the following to the SCC object: requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT Note You cannot list a capability in both allowedCapabilities and requiredDropCapabilities . CRI-O supports the same list of capability values that are found in the Docker documentation . 
Create the SCC by passing in the file: USD oc create -f scc-admin.yaml Example output securitycontextconstraints "scc-admin" created Verification Verify that the SCC was created: USD oc get scc scc-admin Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere] 15.5. Configuring a workload to require a specific SCC You can configure a workload to require a certain security context constraint (SCC). This is useful in scenarios where you want to pin a specific SCC to the workload or if you want to prevent your required SCC from being preempted by another SCC in the cluster. To require a specific SCC, set the openshift.io/required-scc annotation on your workload. You can set this annotation on any resource that can set a pod manifest template, such as a deployment or daemon set. The SCC must exist in the cluster and must be applicable to the workload, otherwise pod admission fails. An SCC is considered applicable to the workload if the user creating the pod or the pod's service account has use permissions for the SCC in the pod's namespace. Warning Do not change the openshift.io/required-scc annotation in the live pod's manifest, because doing so causes the pod admission to fail. To change the required SCC, update the annotation in the underlying pod template, which causes the pod to be deleted and re-created. Prerequisites The SCC must exist in the cluster. Procedure Create a YAML file for the deployment and specify a required SCC by setting the openshift.io/required-scc annotation: Example deployment.yaml apiVersion: config.openshift.io/v1 kind: Deployment apiVersion: apps/v1 spec: # ... template: metadata: annotations: openshift.io/required-scc: "my-scc" 1 # ... 1 Specify the name of the SCC to require. Create the resource by running the following command: USD oc create -f deployment.yaml Verification Verify that the deployment used the specified SCC: View the value of the pod's openshift.io/scc annotation by running the following command: USD oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\.io\/scc}{"\n"}' 1 1 Replace <pod_name> with the name of your deployment pod. Examine the output and confirm that the displayed SCC matches the SCC that you defined in the deployment: Example output my-scc 15.6. Role-based access to security context constraints You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster. Assigning users, groups, or service accounts directly to an SCC retains cluster-wide scope. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 
To include access to SCCs for your role, specify the scc resource when creating a role. USD oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace> This results in the following role definition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: ... name: role-name 1 namespace: namespace 2 ... rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use 1 The role's name. 2 Namespace of the defined role. Defaults to default if not specified. 3 The API group that includes the SecurityContextConstraints resource. Automatically defined when scc is specified as a resource. 4 An example name for an SCC you want to have access. 5 Name of the resource group that allows users to specify SCC names in the resourceNames field. 6 A list of verbs to apply to the role. A local or cluster role with such a rule allows the subjects that are bound to it with a role binding or a cluster role binding to use the user-defined SCC called scc-name . Note Because RBAC is designed to prevent escalation, even project administrators are unable to grant access to an SCC. By default, they are not allowed to use the verb use on SCC resources, including the restricted-v2 SCC. 15.7. Reference of security context constraints commands You can manage security context constraints (SCCs) in your instance as normal API objects by using the OpenShift CLI ( oc ). Note You must have cluster-admin privileges to manage SCCs. 15.7.1. Listing security context constraints To get a current list of SCCs: USD oc get scc Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","persistentVolumeClaim","projected","secret"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","nfs","persistentVolumeClaim","projected","secret"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostnetwork-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] nonroot-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] privileged true ["*"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] restricted-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 15.7.2. 
Examining security context constraints You can view information about a particular SCC, including which users, service accounts, and groups the SCC is applied to. For example, to examine the restricted SCC: USD oc describe scc restricted Example output Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none> 1 Lists which users and service accounts the SCC is applied to. 2 Lists which groups the SCC is applied to. Note To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.7.3. Updating security context constraints If your custom SCC no longer satisfies your application workloads requirements, you can update your SCC by using the OpenShift CLI ( oc ). To update an existing SCC: USD oc edit scc <scc_name> Important To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.7.4. Deleting security context constraints If you no longer require your custom SCC, you can delete the SCC by using the OpenShift CLI ( oc ). To delete an SCC: USD oc delete scc <scc_name> Important Do not delete default SCCs. If you delete a default SCC, it is regenerated by the Cluster Version Operator. 15.8. Additional resources Getting support
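To complement section 15.6, the following sketch shows one way to bind the role created there to a service account so that pods running under that service account can be admitted by the user-defined SCC. The binding, role, service account, and namespace names are assumptions for illustration.

USD oc create rolebinding scc-use-binding \
    --role=role-name \
    --serviceaccount=namespace:my-service-account \
    -n namespace

Once the binding exists, admission can select the scc-name SCC for pods created with that service account in the namespace, subject to the prioritization described in section 15.1.6.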
[ "runAsUser: type: MustRunAs uid: <id>", "runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>", "runAsUser: type: MustRunAsNonRoot", "runAsUser: type: RunAsAny", "allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*'", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0", "kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group", "requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT", "oc create -f scc-admin.yaml", "securitycontextconstraints \"scc-admin\" created", "oc get scc scc-admin", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]", "apiVersion: config.openshift.io/v1 kind: Deployment apiVersion: apps/v1 spec: template: metadata: annotations: openshift.io/required-scc: \"my-scc\" 1", "oc create -f deployment.yaml", "oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\\.io\\/scc}{\"\\n\"}' 1", "my-scc", "oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use", "oc get scc", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostmount-anyuid 
false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"nfs\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] nonroot-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] privileged true [\"*\"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] restricted-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc describe scc restricted", "Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>", "oc edit scc <scc_name>", "oc delete scc <scc_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/managing-pod-security-policies
Chapter 8. Traffic splitting
Chapter 8. Traffic splitting 8.1. Traffic splitting overview In a Knative application, traffic can be managed by creating a traffic split. A traffic split is configured as part of a route, which is managed by a Knative service. Configuring a route allows requests to be sent to different revisions of a service. This routing is determined by the traffic spec of the Service object. A traffic spec declaration consists of one or more revisions, each responsible for handling a portion of the overall traffic. The percentages of traffic routed to each revision must add up to 100%, which is ensured by a Knative validation. The revisions specified in a traffic spec can either be a fixed, named revision, or can point to the "latest" revision, which tracks the head of the list of all revisions for the service. The "latest" revision is a type of floating reference that updates if a new revision is created. Each revision can have a tag attached that creates an additional access URL for that revision. The traffic spec can be modified by: Editing the YAML of a Service object directly. Using the Knative ( kn ) CLI --traffic flag. Using the OpenShift Container Platform web console. When you create a Knative service, it does not have any default traffic spec settings. 8.2. Traffic spec examples The following example shows a traffic spec where 100% of traffic is routed to the latest revision of the service. Under status , you can see the name of the latest revision that latestRevision resolves to: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - latestRevision: true percent: 100 status: ... traffic: - percent: 100 revisionName: example-service The following example shows a traffic spec where 100% of traffic is routed to the revision tagged as current , and the name of that revision is specified as example-service . The revision tagged as latest is kept available, even though no traffic is routed to it: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0 The following example shows how the list of revisions in the traffic spec can be extended so that traffic is split between multiple revisions. This example sends 50% of traffic to the revision tagged as current , and 50% of traffic to the revision tagged as candidate . The revision tagged as latest is kept available, even though no traffic is routed to it: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0 8.3. Traffic splitting using the Knative CLI Using the Knative ( kn ) CLI to create traffic splits provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service update command to split traffic between revisions of a service. 8.3.1. Creating a traffic split by using the Knative CLI Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a Knative service. 
Procedure Specify the revision of your service and what percentage of traffic you want to route to it by using the --traffic tag with a standard kn service update command: Example command USD kn service update <service_name> --traffic <revision>=<percentage> Where: <service_name> is the name of the Knative service that you are configuring traffic routing for. <revision> is the revision that you want to configure to receive a percentage of traffic. You can either specify the name of the revision, or a tag that you assigned to the revision by using the --tag flag. <percentage> is the percentage of traffic that you want to send to the specified revision. Optional: The --traffic flag can be specified multiple times in one command. For example, if you have a revision tagged as @latest and a revision named stable , you can specify the percentage of traffic that you want to split to each revision as follows: Example command USD kn service update showcase --traffic @latest=20,stable=80 If you have multiple revisions and do not specify the percentage of traffic that should be split to the last revision, the --traffic flag can calculate this automatically. For example, if you have a third revision named example , and you use the following command: Example command USD kn service update showcase --traffic @latest=10,stable=60 The remaining 30% of traffic is split to the example revision, even though it was not specified. 8.4. CLI flags for traffic splitting The Knative ( kn ) CLI supports traffic operations on the traffic block of a service as part of the kn service update command. 8.4.1. Knative CLI traffic splitting flags The following table displays a summary of traffic splitting flags, value formats, and the operation the flag performs. The Repetition column denotes whether repeating the particular value of flag is allowed in a kn service update command. Flag Value(s) Operation Repetition --traffic RevisionName=Percent Gives Percent traffic to RevisionName Yes --traffic Tag=Percent Gives Percent traffic to the revision having Tag Yes --traffic @latest=Percent Gives Percent traffic to the latest ready revision No --tag RevisionName=Tag Gives Tag to RevisionName Yes --tag @latest=Tag Gives Tag to the latest ready revision No --untag Tag Removes Tag from revision Yes 8.4.1.1. Multiple flags and order precedence All traffic-related flags can be specified using a single kn service update command. kn defines the precedence of these flags. The order of the flags specified when using the command is not taken into account. The precedence of the flags as they are evaluated by kn are: --untag : All the referenced revisions with this flag are removed from the traffic block. --tag : Revisions are tagged as specified in the traffic block. --traffic : The referenced revisions are assigned a portion of the traffic split. You can add tags to revisions and then split traffic according to the tags you have set. 8.4.1.2. Custom URLs for revisions Assigning a --tag flag to a service by using the kn service update command creates a custom URL for the revision that is created when you update the service. The custom URL follows the pattern https://<tag>-<service_name>-<namespace>.<domain> or http://<tag>-<service_name>-<namespace>.<domain> . The --tag and --untag flags use the following syntax: Require one value. Denote a unique tag in the traffic block of the service. Can be specified multiple times in one command. 8.4.1.2.1. 
Example: Assign a tag to a revision The following example assigns the tag example-tag to the latest ready revision of the service: USD kn service update <service_name> --tag @latest=example-tag 8.4.1.2.2. Example: Remove a tag from a revision You can remove a tag to remove the custom URL by using the --untag flag. Note If a revision has its tags removed, and it is assigned 0% of the traffic, the revision is removed from the traffic block entirely. The following command removes the tag example-tag from the service: USD kn service update <service_name> --untag example-tag 8.5. Splitting traffic between revisions After you create a serverless application, the application is displayed in the Topology view of the Developer perspective in the OpenShift Container Platform web console. The application revision is represented by the node, and the Knative service is indicated by a quadrilateral around the node. Any new change in the code or the service configuration creates a new revision, which is a snapshot of the code at a given time. For a service, you can manage the traffic between the revisions of the service by splitting and routing it to the different revisions as required. 8.5.1. Managing traffic between revisions by using the OpenShift Container Platform web console Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have logged in to the OpenShift Container Platform web console. Procedure To split traffic between multiple revisions of an application in the Topology view: Click the Knative service to see its overview in the side panel. Click the Resources tab to see a list of Revisions and Routes for the service. Figure 8.1. Serverless application Click the service, indicated by the S icon at the top of the side panel, to see an overview of the service details. Click the YAML tab and modify the service configuration in the YAML editor, and click Save . For example, change the timeoutSeconds value from 300 to 301 . This change in the configuration triggers a new revision. In the Topology view, the latest revision is displayed and the Resources tab for the service now displays the two revisions. In the Resources tab, click Set Traffic Distribution to see the traffic distribution dialog box: Add the split traffic percentage portion for the two revisions in the Splits field. Add tags to create custom URLs for the two revisions. Click Save to see two nodes representing the two revisions in the Topology view. Figure 8.2. Serverless application revisions 8.6. Rerouting traffic using blue-green strategy You can safely reroute traffic from a production version of an app to a new version by using a blue-green deployment strategy . 8.6.1. Routing and managing traffic by using a blue-green deployment strategy Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. Install the OpenShift CLI ( oc ). Procedure Create and deploy an app as a Knative service. Find the name of the first revision that was created when you deployed the service, by viewing the output from the following command: USD oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}' Example command USD oc get ksvc showcase -o=jsonpath='{.status.latestCreatedRevisionName}' Example output showcase-00001 Add the following YAML to the service spec to send inbound traffic to the revision: ... spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision ...
Verify that you can view your app at the URL output you get from running the following command: USD oc get ksvc <service_name> Deploy a second revision of your app by modifying at least one field in the template spec of the service and redeploying it. For example, you can modify the image of the service, or an env environment variable. You can redeploy the service by applying the service YAML file, or by using the kn service update command if you have installed the Knative ( kn ) CLI. Find the name of the second, latest revision that was created when you redeployed the service, by running the command: USD oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}' At this point, both the first and second revisions of the service are deployed and running. Update your existing service to create a new, test endpoint for the second revision, while still sending all other traffic to the first revision: Example of updated service spec with test endpoint ... spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route ... After you redeploy this service by reapplying the YAML resource, the second revision of the app is now staged. No traffic is routed to the second revision at the main URL, and Knative creates a new service named v2 for testing the newly deployed revision. Get the URL of the new service for the second revision, by running the following command: USD oc get ksvc <service_name> --output jsonpath="{.status.traffic[*].url}" You can use this URL to validate that the new version of the app is behaving as expected before you route any traffic to it. Update your existing service again, so that 50% of traffic is sent to the first revision, and 50% is sent to the second revision: Example of updated service spec splitting traffic 50/50 between revisions ... spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2 ... When you are ready to route all traffic to the new version of the app, update the service again to send 100% of traffic to the second revision: Example of updated service spec sending all traffic to the second revision ... spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2 ... Tip You can remove the first revision instead of setting it to 0% of traffic if you do not plan to roll back the revision. Non-routeable revision objects are then garbage-collected. Visit the URL of the first revision to verify that no more traffic is being sent to the old version of the app.
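The blue-green traffic updates shown in this procedure can also be made with the Knative ( kn ) CLI instead of editing the service YAML, using the --traffic flag described earlier in this chapter. The following commands are a sketch that reuses the revision name placeholders from this procedure; adjust them to match your own revisions. Example command for the 50/50 split USD kn service update <service_name> --traffic <first_revision_name>=50 --traffic <second_revision_name>=50 Example command for the final cutover USD kn service update <service_name> --traffic <second_revision_name>=100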
[ "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - latestRevision: true percent: 100 status: traffic: - percent: 100 revisionName: example-service", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0", "kn service update <service_name> --traffic <revision>=<percentage>", "kn service update showcase --traffic @latest=20,stable=80", "kn service update showcase --traffic @latest=10,stable=60", "kn service update <service_name> --tag @latest=example-tag", "kn service update <service_name> --untag example-tag", "oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'", "oc get ksvc showcase -o=jsonpath='{.status.latestCreatedRevisionName}'", "showcase-00001", "spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision", "oc get ksvc <service_name>", "oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'", "spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route", "oc get ksvc <service_name> --output jsonpath=\"{.status.traffic[*].url}\"", "spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2", "spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serving/traffic-splitting
Appendix A. Versioning information
Appendix A. Versioning information Documentation last updated on Thursday, March 14th, 2024.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/versioning-information
Chapter 6. Creating Windows machine sets
Chapter 6. Creating Windows machine sets 6.1. Creating a Windows machine set on AWS You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. Use one of the following aws commands, as appropriate for your Windows Server release, to query valid AMI images: Example Windows Server 2022 command USD aws ec2 describe-images --region <aws_region_name> --filters "Name=name,Values=Windows_Server-2022*English*Core*Base*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table Example Windows Server 2019 command USD aws ec2 describe-images --region <aws_region_name> --filters "Name=name,Values=Windows_Server-2019*English*Core*Base*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table where: <aws_region_name> Specifies the name of your AWS region. For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 6.1.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.12 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.12 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. 
Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.1.2. Sample YAML for a Windows MachineSet object on AWS This sample YAML defines a Windows MachineSet object running on Amazon Web Services (AWS) that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api 1 3 5 10 13 14 15 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone. 
7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the AMI ID of a supported Windows image with a container runtime installed. Note For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 11 Specify the AWS zone, like us-east-1a . 12 Specify the AWS region, like us-east-1 . 16 Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 6.1.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.1.4. Additional resources Overview of machine management 6.2. Creating a Windows machine set on Azure You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.2.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.12 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.12 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. 
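If you want to confirm which Windows Server images are available in your Azure region before creating the machine set, you can query the Azure Marketplace with the Azure CLI. This is a sketch only; the az tool is not covered by this documentation, and the publisher, offer, and SKU values shown here are taken from the sample machine set YAML later in this section: Example command USD az vm image list --location <azure_region> --publisher MicrosoftWindowsServer --offer WindowsServer --sku 2019-Datacenter-with-Containers --all --output table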
You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.2.2. Sample YAML for a Windows MachineSet object on Azure This sample YAML defines a Windows MachineSet object running on Microsoft Azure that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: "" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: "<zone>" 16 1 3 5 11 12 13 15 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the Windows compute machine set name. Windows machine names on Azure cannot be more than 15 characters long. Therefore, the compute machine set name cannot be more than 9 characters long, due to the way machine names are generated from it. 
7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify a WindowsServer image offering that defines the 2019-Datacenter-with-Containers SKU. 10 Specify the Azure region, like centralus . 14 Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 16 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 6.2.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.2.4. Additional resources Overview of machine management 6.3. Creating a Windows machine set on vSphere You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.3.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.12 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.12 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. 
You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.3.2. Preparing your vSphere environment for Windows container workloads You must prepare your vSphere environment for Windows container workloads by creating the vSphere Windows VM golden image and enabling communication with the internal API server for the WMCO. 6.3.2.1. Creating the vSphere Windows VM golden image Create a vSphere Windows virtual machine (VM) golden image. Prerequisites You have created a private/public key pair, which is used to configure key-based authentication in the OpenSSH server. The private key must also be configured in the Windows Machine Config Operator (WMCO) namespace. This is required to allow the WMCO to communicate with the Windows VM. See the "Configuring a secret for the Windows Machine Config Operator" section for more details. Note You must use Microsoft PowerShell commands in several cases when creating your Windows VM. PowerShell commands in this guide are distinguished by the PS C:\> prefix. Procedure Select a compatible Windows Server version. Currently, the Windows Machine Config Operator (WMCO) stable version supports Windows Server 2022 Long-Term Servicing Channel with the OS-level container networking patch KB5012637 . Create a new VM in the vSphere client using the VM golden image with a compatible Windows Server version. For more information about compatible versions, see the "Windows Machine Config Operator prerequisites" section of the "Red Hat OpenShift support for Windows Containers release notes." Important The virtual hardware version for your VM must meet the infrastructure requirements for OpenShift Container Platform. For more information, see the "VMware vSphere infrastructure requirements" section in the OpenShift Container Platform documentation. Also, you can refer to VMware's documentation on virtual machine hardware versions . Install and configure VMware Tools version 11.0.6 or greater on the Windows VM. See the VMware Tools documentation for more information. After installing VMware Tools on the Windows VM, verify the following: The C:\ProgramData\VMware\VMware Tools\tools.conf file exists with the following entry: exclude-nics= If the tools.conf file does not exist, create it with the exclude-nics option uncommented and set as an empty value. 
This entry ensures the cloned vNIC generated on the Windows VM by the hybrid-overlay is not ignored. The Windows VM has a valid IP address in vCenter: C:\> ipconfig The VMTools Windows service is running: PS C:\> Get-Service -Name VMTools | Select Status, StartType Install and configure the OpenSSH Server on the Windows VM. See Microsoft's documentation on installing OpenSSH for more details. Set up SSH access for an administrative user. See Microsoft's documentation on the Administrative user to do this. Important The public key used in the instructions must correspond to the private key you create later in the WMCO namespace that holds your secret. See the "Configuring a secret for the Windows Machine Config Operator" section for more details. You must create a new firewall rule in the Windows VM that allows incoming connections for container logs. Run the following PowerShell command to create the firewall rule on TCP port 10250: PS C:\> New-NetFirewallRule -DisplayName "ContainerLogsPort" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow Clone the Windows VM so it is a reusable image. Follow the VMware documentation on how to clone an existing virtual machine for more details. In the cloned Windows VM, run the Windows Sysprep tool : C:\> C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1 1 Specify the path to your unattend.xml file. Note There is a limit on how many times you can run the sysprep command on a Windows image. Consult Microsoft's documentation for more information. An example unattend.xml is provided, which maintains all the changes needed for the WMCO. You must modify this example; it cannot be used directly. Example 6.1. Example unattend.xml <?xml version="1.0" encoding="UTF-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="specialize"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Security-SPP-UX" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-SQMApi" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass="oobeSystem"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend> 1 Specify the ComputerName , which must follow the Kubernetes' names specification . These specifications also apply to Guest OS customization performed on the resulting template while creating new VMs. 2 Disable the automatic logon to avoid the security issue of leaving an open terminal with Administrator privileges at boot. This is the default value and must not be changed. 3 Replace the MyPassword placeholder with the password for the Administrator account. This prevents the built-in Administrator account from having a blank password by default. Follow Microsoft's best practices for choosing a password . After the Sysprep tool has completed, the Windows VM will power off. You must not use or power on this VM anymore. Convert the Windows VM to a template in vCenter . 6.3.2.1.1. Additional resources Configuring a secret for the Windows Machine Config Operator VMware vSphere infrastructure requirements 6.3.2.2. Enabling communication with the internal API server for the WMCO on vSphere The Windows Machine Config Operator (WMCO) downloads the Ignition config files from the internal API server endpoint. You must enable communication with the internal API server so that your Windows virtual machine (VM) can download the Ignition config files, and the kubelet on the configured VM can only communicate with the internal API server. Prerequisites You have installed a cluster on vSphere. Procedure Add a new DNS entry for api-int.<cluster_name>.<base_domain> that points to the external API server URL api.<cluster_name>.<base_domain> . This can be a CNAME or an additional A record. Note The external API endpoint was already created as part of the initial cluster installation on vSphere. 6.3.3. Sample YAML for a Windows MachineSet object on vSphere This sample YAML defines a Windows MachineSet object running on VMware vSphere that the Windows Machine Config Operator (WMCO) can react upon. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the Windows compute machine set name. The compute machine set name cannot be more than 9 characters long, due to the way machine names are generated in vSphere. 7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the size of the vSphere Virtual Machine Disk (VMDK). Note This parameter does not set the size of the Windows partition. You can resize the Windows partition by using the unattend.xml file or by creating the vSphere Windows virtual machine (VM) golden image with the required disk size. 10 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other Linux compute machines reside in the cluster. 11 Specify the full path of the Windows vSphere VM template to use, such as golden-images/windows-server-template . The name must be unique. Important Do not specify the original VM template. The VM template must remain off and must be cloned for new Windows machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. 12 The windows-user-data is created by the WMCO when the first Windows machine is configured. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 13 Specify the vCenter Datacenter to deploy the compute machine set on. 14 Specify the vCenter Datastore to deploy the compute machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Optional: Specify the vSphere resource pool for your Windows VMs. 17 Specify the vCenter server IP or fully qualified domain name. 6.3.4. 
Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.3.5. Additional resources Overview of machine management 6.4. Creating a Windows machine set on GCP You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). 
For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.4.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.12 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.12 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. 
And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.4.2. Sample YAML for a Windows MachineSet object on GCP This sample YAML file defines a Windows MachineSet object running on Google Cloud Platform (GCP) that the Windows Machine Config Operator (WMCO) can use. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone_suffix> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <windows_server_image> 9 sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n1-standard-4 networkInterfaces: - network: <infrastructure_id>-network 10 subnetwork: <infrastructure_id>-worker-subnet projectID: <project_id> 11 region: <region> 12 serviceAccounts: - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: windows-user-data 13 zone: <zone> 14 1 3 5 10 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone suffix (such as a ). 7 Configure the machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the full path to an image of a supported version of Windows Server. 11 Specify the GCP project that this cluster was created in. 12 Specify the GCP region, such as us-central1 . 13 Created by the WMCO when it configures the first Windows machine. After that, the windows-user-data is available for all subsequent machine sets to consume. 14 Specify the zone within the chosen region, such as us-central1-a . 6.4.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. 
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.4.4. Additional resources Overview of machine management
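As an end-to-end illustration of the procedure above, the following shell sketch fetches the infrastructure ID, substitutes it into a local copy of the sample Windows MachineSet manifest, creates the compute machine set, and then checks for Windows nodes. The file name windows-machine-set.yaml and the sed-based substitution are assumptions made for this example only; the other placeholders in the sample, such as the zone and image values, still need to be filled in by hand.

# Look up the infrastructure ID of the cluster
INFRA_ID=$(oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster)

# Substitute the <infrastructure_id> placeholder in a local copy of the sample manifest (assumed file name)
sed "s/<infrastructure_id>/${INFRA_ID}/g" windows-machine-set.yaml | oc create -f -

# Watch the machines that the new compute machine set provisions
oc get machines -n openshift-machine-api -w

# After provisioning and configuration by the WMCO, Windows nodes carry the standard OS label
oc get nodes -l kubernetes.io/os=windows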
[ "aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2022*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2019*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m 
agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: \"\" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: \"<zone>\" 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "exclude-nics=", "C:\\> ipconfig", "PS C:\\> Get-Service -Name VMTools | Select Status, StartType", "PS C:\\> New-NetFirewallRule -DisplayName \"ContainerLogsPort\" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow 
-EdgeTraversalPolicy Allow", "C:\\> C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <unattend xmlns=\"urn:schemas-microsoft-com:unattend\"> <settings pass=\"specialize\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-International-Core\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Security-SPP-UX\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-SQMApi\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass=\"oobeSystem\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker 
machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone_suffix> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <windows_server_image> 9 sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n1-standard-4 networkInterfaces: - network: 
<infrastructure_id>-network 10 subnetwork: <infrastructure_id>-worker-subnet projectID: <project_id> 11 region: <region> 12 serviceAccounts: - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: windows-user-data 13 zone: <zone> 14", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/windows_container_support_for_openshift/creating-windows-machine-sets
Chapter 44. Managing host groups using the IdM CLI
Chapter 44. Managing host groups using the IdM CLI Learn more about how to manage host groups and their members on the command line (CLI) by using the following operations: Viewing host groups and their members Creating host groups Deleting host groups Adding host group members Removing host group members Adding host group member managers Removing host group member managers 44.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 44.2. Viewing IdM host groups using the CLI Follow this procedure to view IdM host groups using the command line (CLI). Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Find all host groups using the ipa hostgroup-find command. To display all attributes of a host group, add the --all option. For example: 44.3. Creating IdM host groups using the CLI Follow this procedure to create IdM host groups using the command line (CLI). Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Add a host group using the ipa hostgroup-add command. For example, to create an IdM host group named group_name and give it a description: 44.4. Deleting IdM host groups using the CLI Follow this procedure to delete IdM host groups using the command line (CLI). Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Delete a host group using the ipa hostgroup-del command. For example, to delete the IdM host group named group_name : Note Removing a group does not delete the group members from IdM. 44.5. Adding IdM host group members using the CLI You can add hosts as well as host groups as members to an IdM host group using a single command. Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Optional . Use the ipa hostgroup-find command to find hosts and host groups. Procedure To add a member to a host group, use the ipa hostgroup-add-member and provide the relevant information. You can specify the type of member to add using these options: Use the --hosts option to add one or more hosts to an IdM host group. For example, to add the host named example_member to the group named group_name : Use the --hostgroups option to add one or more host groups to an IdM host group. 
For example, to add the host group named nested_group to the group named group_name : You can add multiple hosts and multiple host groups to an IdM host group in one single command using the following syntax: Important When adding a host group as a member of another host group, do not create recursive groups. For example, if Group A is a member of Group B, do not add Group B as a member of Group A. Recursive groups can cause unpredictable behavior. 44.6. Removing IdM host group members using the CLI You can remove hosts as well as host groups from an IdM host group using a single command. Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Optional . Use the ipa hostgroup-find command to confirm that the group includes the member you want to remove. Procedure To remove a host group member, use the ipa hostgroup-remove-member command and provide the relevant information. You can specify the type of member to remove using these options: Use the --hosts option to remove one or more hosts from an IdM host group. For example, to remove the host named example_member from the group named group_name : Use the --hostgroups option to remove one or more host groups from an IdM host group. For example, to remove the host group named nested_group from the group named group_name : Note Removing a group does not delete the group members from IdM. You can remove multiple hosts and multiple host groups from an IdM host group in one single command using the following syntax: 44.7. Adding IdM host group member managers using the CLI You can add hosts as well as host groups as member managers to an IdM host group using a single command. Member managers can add hosts or host groups to IdM host groups but cannot change the attributes of a host group. Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . You must have the name of the host or host group you are adding as member managers and the name of the host group you want them to manage. Procedure Optional: Use the ipa hostgroup-find command to find hosts and host groups. To add a member manager to a host group, use the ipa hostgroup-add-member-manager . For example, to add the user named example_member as a member manager to the group named group_name : Use the --groups option to add one or more host groups as a member manager to an IdM host group. For example, to add the host group named admin_group as a member manager to the group named group_name : Note After you add a member manager to a host group, the update may take some time to spread to all clients in your Identity Management environment. Verification Using the ipa group-show command to verify the host user and host group were added as member managers. Additional resources See ipa hostgroup-add-member-manager --help for more details. See ipa hostgroup-show --help for more details. 44.8. Removing IdM host group member managers using the CLI You can remove hosts as well as host groups as member managers from an IdM host group using a single command. Member managers can remove hosts group member managers from IdM host groups but cannot change the attributes of a host group. Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . 
You must have the name of the existing member manager host group you are removing and the name of the host group they are managing. Procedure Optional: Use the ipa hostgroup-find command to find hosts and host groups. To remove a member manager from a host group, use the ipa hostgroup-remove-member-manager command. For example, to remove the user named example_member as a member manager from the group named group_name : Use the --groups option to remove one or more host groups as a member manager from an IdM host group. For example, to remove the host group named nested_group as a member manager from the group named group_name : Note After you remove a member manager from a host group, the update may take some time to spread to all clients in your Identity Management environment. Verification Use the ipa group-show command to verify that the host user and host group were removed as member managers. Additional resources See ipa hostgroup-remove-member-manager --help for more details. See ipa hostgroup-show --help for more details.
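To put the individual commands above into context, the following sketch shows a typical session that creates a host group, adds members, and verifies the result. The group and host names used here (db_servers, db1.idm.example.com, and so on) are hypothetical examples rather than values from this guide, and the hosts must already be enrolled in IdM.

# Obtain a Kerberos ticket as an administrative user
kinit admin

# Create the host group with a description
ipa hostgroup-add db_servers --desc 'Database servers'

# Add two hosts and a nested host group in a single command
ipa hostgroup-add-member db_servers --hosts={db1.idm.example.com,db2.idm.example.com} --hostgroups=db_test_servers

# Verify the resulting membership
ipa hostgroup-show db_servers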
[ "ipa hostgroup-find ------------------- 1 hostgroup matched ------------------- Host-group: ipaservers Description: IPA server hosts ---------------------------- Number of entries returned 1 ----------------------------", "ipa hostgroup-find --all ------------------- 1 hostgroup matched ------------------- dn: cn=ipaservers,cn=hostgroups,cn=accounts,dc=idm,dc=local Host-group: ipaservers Description: IPA server hosts Member hosts: xxx.xxx.xxx.xxx ipauniqueid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx objectclass: top, groupOfNames, nestedGroup, ipaobject, ipahostgroup ---------------------------- Number of entries returned 1 ----------------------------", "ipa hostgroup-add --desc ' My new host group ' group_name --------------------- Added hostgroup \"group_name\" --------------------- Host-group: group_name Description: My new host group ---------------------", "ipa hostgroup-del group_name -------------------------- Deleted hostgroup \"group_name\" --------------------------", "ipa hostgroup-add-member group_name --hosts example_member Host-group: group_name Description: My host group Member hosts: example_member ------------------------- Number of members added 1 -------------------------", "ipa hostgroup-add-member group_name --hostgroups nested_group Host-group: group_name Description: My host group Member host-groups: nested_group ------------------------- Number of members added 1 -------------------------", "ipa hostgroup-add-member group_name --hosts={ host1,host2 } --hostgroups={ group1,group2 }", "ipa hostgroup-remove-member group_name --hosts example_member Host-group: group_name Description: My host group ------------------------- Number of members removed 1 -------------------------", "ipa hostgroup-remove-member group_name --hostgroups example_member Host-group: group_name Description: My host group ------------------------- Number of members removed 1 -------------------------", "ipa hostgroup- remove -member group_name --hosts={ host1,host2 } --hostgroups={ group1,group2 }", "ipa hostgroup-add-member-manager group_name --user example_member Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by users: example_member ------------------------- Number of members added 1 -------------------------", "ipa hostgroup-add-member-manager group_name --groups admin_group Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by groups: admin_group Membership managed by users: example_member ------------------------- Number of members added 1 -------------------------", "ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Membership managed by groups: admin_group Membership managed by users: example_member", "ipa hostgroup-remove-member-manager group_name --user example_member Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by groups: nested_group --------------------------- Number of members removed 1 ---------------------------", "ipa hostgroup-remove-member-manager group_name --groups nested_group Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name --------------------------- Number of members removed 1 ---------------------------", "ipa hostgroup-show 
group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-host-groups-using-the-idm-cli_managing-users-groups-hosts
2.5. NetworkManager Tools
2.5. NetworkManager Tools Table 2.1. A Summary of NetworkManager Tools and Applications
Application or Tool: Description
nmcli: A command-line tool which enables users and scripts to interact with NetworkManager. Note that nmcli can be used on systems without a GUI such as servers to control all aspects of NetworkManager. It has the same functionality as GUI tools.
nmtui: A simple curses-based text user interface (TUI) for NetworkManager.
nm-connection-editor: A graphical user interface tool for certain tasks not yet handled by the control-center utility, such as configuring bonds and teaming connections. You can add, remove, and modify network connections stored by NetworkManager. To start it, enter nm-connection-editor in a terminal:
control-center: A graphical user interface tool provided by the GNOME Shell, available for desktop users. It incorporates a Network settings tool. To start it, press the Super key to enter the Activities Overview, type Network and then press Enter. The Network settings tool appears.
network connection icon: A graphical user interface tool provided by the GNOME Shell representing network connection states as reported by NetworkManager. The icon has multiple states that serve as visual indicators for the type of connection you are currently using.
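Because nmcli has the same functionality as the GUI tools, a quick way to explore what it offers is to query the current networking state from a terminal. The commands below are generic illustrations, not examples taken from this guide.

# List network devices and their current state
nmcli device status

# List all connection profiles stored by NetworkManager
nmcli connection show

# Show only the connections that are currently active
nmcli connection show --active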
[ "~]USD nm-connection-editor" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-networkmanager_tools
Chapter 3. View OpenShift Data Foundation Topology
Chapter 3. View OpenShift Data Foundation Topology The topology view shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology. The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, or an indication of alerts. Choose a node to view its details in the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of problems and offers a level of granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/viewing-odf-topology_rhodf
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.5/providing-direct-documentation-feedback_openjdk
Chapter 27. Managing AMQ Streams
Chapter 27. Managing AMQ Streams Managing AMQ Streams requires performing various tasks to keep the Kafka clusters and associated resources running smoothly. Use oc commands to check the status of resources, configure maintenance windows for rolling updates, and leverage tools such as the AMQ Streams Drain Cleaner and Kafka Static Quota plugin to manage your deployment effectively. 27.1. Working with custom resources You can use oc commands to retrieve information and perform other operations on AMQ Streams custom resources. Using oc with the status subresource of a custom resource allows you to get the information about the resource. 27.1.1. Performing oc operations on custom resources Use oc commands, such as get , describe , edit , or delete , to perform operations on resource types. For example, oc get kafkatopics retrieves a list of all Kafka topics and oc get kafkas retrieves all deployed Kafka clusters. When referencing resource types, you can use both singular and plural names: oc get kafkas gets the same results as oc get kafka . You can also use the short name of the resource. Learning short names can save you time when managing AMQ Streams. The short name for Kafka is k , so you can also run oc get k to list all Kafka clusters. oc get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3 Table 27.1. Long and short names for each AMQ Streams resource AMQ Streams resource Long name Short name Kafka kafka k Kafka Topic kafkatopic kt Kafka User kafkauser ku Kafka Connect kafkaconnect kc Kafka Connector kafkaconnector kctr Kafka Mirror Maker kafkamirrormaker kmm Kafka Mirror Maker 2 kafkamirrormaker2 kmm2 Kafka Bridge kafkabridge kb Kafka Rebalance kafkarebalance kr 27.1.1.1. Resource categories Categories of custom resources can also be used in oc commands. All AMQ Streams custom resources belong to the category strimzi , so you can use strimzi to get all the AMQ Streams resources with one command. For example, running oc get strimzi lists all AMQ Streams custom resources in a given namespace. oc get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple The oc get strimzi -o name command returns all resource types and resource names. The -o name option fetches the output in the type/name format oc get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user You can combine this strimzi command with other commands. For example, you can pass it into a oc delete command to delete all resources in a single command. oc delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io "my-cluster" deleted kafkatopic.kafka.strimzi.io "kafka-apps" deleted kafkauser.kafka.strimzi.io "my-user" deleted Deleting all resources in a single operation might be useful, for example, when you are testing new AMQ Streams features. 27.1.1.2. Querying the status of sub-resources There are other values you can pass to the -o option. For example, by using -o yaml you get the output in YAML format. Using -o json will return it as JSON. You can see all the options in oc get --help . One of the most useful options is the JSONPath support , which allows you to pass JSONPath expressions to query the Kubernetes API. A JSONPath expression can extract or navigate specific parts of any resource. 
For example, you can use the JSONPath expression {.status.listeners[?(@.name=="tls")].bootstrapServers} to get the bootstrap address from the status of the Kafka custom resource and use it in your Kafka clients. Here, the command finds the bootstrapServers value of the listener named tls : oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="tls")].bootstrapServers}{"\n"}' my-cluster-kafka-bootstrap.myproject.svc:9093 By changing the name condition you can also get the address of the other Kafka listeners. You can use jsonpath to extract any other property or group of properties from any custom resource. 27.1.2. AMQ Streams custom resource status information Status properties provide status information for certain custom resources. The following table lists the custom resources that provide status information (when deployed) and the schemas that define the status properties. For more information on the schemas, see the AMQ Streams Custom Resource API Reference . Table 27.2. Custom resources that provide status information AMQ Streams resource Schema reference Publishes status information on... Kafka KafkaStatus schema reference The Kafka cluster KafkaTopic KafkaTopicStatus schema reference Kafka topics in the Kafka cluster KafkaUser KafkaUserStatus schema reference Kafka users in the Kafka cluster KafkaConnect KafkaConnectStatus schema reference The Kafka Connect cluster KafkaConnector KafkaConnectorStatus schema reference KafkaConnector resources KafkaMirrorMaker2 KafkaMirrorMaker2Status schema reference The Kafka MirrorMaker 2 cluster KafkaMirrorMaker KafkaMirrorMakerStatus schema reference The Kafka MirrorMaker cluster KafkaBridge KafkaBridgeStatus schema reference The AMQ Streams Kafka Bridge KafkaRebalance KafkaRebalance schema reference The status and results of a rebalance The status property of a resource provides information on the state of the resource. The status.conditions and status.observedGeneration properties are common to all resources. status.conditions Status conditions describe the current state of a resource. Status condition properties are useful for tracking progress related to the resource achieving its desired state , as defined by the configuration specified in its spec . Status condition properties provide the time and reason the state of the resource changed, and details of events preventing or delaying the operator from realizing the desired state. status.observedGeneration Last observed generation denotes the latest reconciliation of the resource by the Cluster Operator. If the value of observedGeneration is different from the value of metadata.generation ((the current version of the deployment), the operator has not yet processed the latest update to the resource. If these values are the same, the status information reflects the most recent changes to the resource. The status properties also provide resource-specific information. For example, KafkaStatus provides information on listener addresses, and the ID of the Kafka cluster. AMQ Streams creates and maintains the status of custom resources, periodically evaluating the current state of the custom resource and updating its status accordingly. When performing an update on a custom resource using oc edit , for example, its status is not editable. Moreover, changing the status would not affect the configuration of the Kafka cluster. Here we see the status properties for a Kafka custom resource. Kafka custom resource status apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # ... 
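# The status block that follows is created and maintained by the Cluster Operator; it cannot be edited by the user, and changing it would not affect the Kafka cluster configuration.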
status: clusterId: XP9FP2P-RByvEy0W4cOEUA 1 conditions: 2 - lastTransitionTime: '2023-01-20T17:56:29.396588Z' status: 'True' type: Ready 3 listeners: 4 - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9092 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9092' name: plain type: plain - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9093 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9093' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: tls type: tls - addresses: - host: >- 2054284155.us-east-2.elb.amazonaws.com port: 9095 bootstrapServers: >- 2054284155.us-east-2.elb.amazonaws.com:9095 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external2 type: external2 - addresses: - host: ip-10-0-172-202.us-east-2.compute.internal port: 31644 bootstrapServers: 'ip-10-0-172-202.us-east-2.compute.internal:31644' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external1 type: external1 observedGeneration: 3 5 1 The Kafka cluster ID. 2 Status conditions describe the current state of the Kafka cluster. 3 The Ready condition indicates that the Cluster Operator considers the Kafka cluster able to handle traffic. 4 The listeners describe Kafka bootstrap addresses by type. 5 The observedGeneration value indicates the last reconciliation of the Kafka custom resource by the Cluster Operator. Note The Kafka bootstrap addresses listed in the status do not signify that those endpoints or the Kafka cluster is in a Ready state. Accessing status information You can access status information for a resource from the command line. For more information, see Section 27.1.3, "Finding the status of a custom resource" . 27.1.3. Finding the status of a custom resource This procedure describes how to find the status of a custom resource. Prerequisites An OpenShift cluster. The Cluster Operator is running. Procedure Specify the custom resource and use the -o jsonpath option to apply a standard JSONPath expression to select the status property: oc get kafka <kafka_resource_name> -o jsonpath='{.status}' This expression returns all the status information for the specified custom resource. You can use dot notation, such as status.listeners or status.observedGeneration , to fine-tune the status information you wish to see. Additional resources Section 27.1.2, "AMQ Streams custom resource status information" For more information about using JSONPath, see JSONPath support . 27.2. Discovering services using labels and annotations Service discovery makes it easier for client applications running in the same OpenShift cluster as AMQ Streams to interact with a Kafka cluster. A service discovery label and annotation is generated for services used to access the Kafka cluster: Internal Kafka bootstrap service HTTP Bridge service The label helps to make the service discoverable, and the annotation provides connection details that a client application can use to make the connection. The service discovery label, strimzi.io/discovery , is set as true for the Service resources. The service discovery annotation has the same key, providing connection details in JSON format for each service. 
Example internal Kafka bootstrap service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 9092, "tls" : false, "protocol" : "kafka", "auth" : "scram-sha-512" }, { "port" : 9093, "tls" : true, "protocol" : "kafka", "auth" : "tls" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: "true" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #... Example HTTP Bridge service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 8080, "tls" : false, "auth" : "none", "protocol" : "http" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: "true" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service 27.2.1. Returning connection details on services You can find the services by specifying the discovery label when fetching services from the command line or a corresponding API call. oc get service -l strimzi.io/discovery=true The connection details are returned when retrieving the service discovery label. 27.3. Connecting to ZooKeeper from a terminal ZooKeeper services are secured with encryption and authentication and are not intended to be used by external applications that are not part of AMQ Streams. However, if you want to use CLI tools that require a connection to ZooKeeper, you can use a terminal inside a ZooKeeper pod and connect to localhost:12181 as the ZooKeeper address. Prerequisites An OpenShift cluster is available. A Kafka cluster is running. The Cluster Operator is running. Procedure Open the terminal using the OpenShift console or run the exec command from your CLI. For example: oc exec -ti my-cluster-zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls / Be sure to use localhost:12181 . 27.4. Pausing reconciliation of custom resources Sometimes it is useful to pause the reconciliation of custom resources managed by AMQ Streams Operators, so that you can perform fixes or make updates. If reconciliations are paused, any changes made to custom resources are ignored by the Operators until the pause ends. If you want to pause reconciliation of a custom resource, set the strimzi.io/pause-reconciliation annotation to true in its configuration. This instructs the appropriate Operator to pause reconciliation of the custom resource. For example, you can apply the annotation to the KafkaConnect resource so that reconciliation by the Cluster Operator is paused. You can also create a custom resource with the pause annotation enabled. The custom resource is created, but it is ignored. Prerequisites The AMQ Streams Operator that manages the custom resource is running. Procedure Annotate the custom resource in OpenShift, setting pause-reconciliation to true : oc annotate <kind_of_custom_resource> <name_of_custom_resource> strimzi.io/pause-reconciliation="true" For example, for the KafkaConnect custom resource: oc annotate KafkaConnect my-connect strimzi.io/pause-reconciliation="true" Check that the status conditions of the custom resource show a change to ReconciliationPaused : oc describe <kind_of_custom_resource> <name_of_custom_resource> The type condition changes to ReconciliationPaused at the lastTransitionTime . Example custom resource with a paused reconciliation condition type apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: "true" strimzi.io/use-connector-resources: "true" creationTimestamp: 2021-03-12T10:47:11Z #... spec: # ...
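# The ReconciliationPaused condition shown below is added by the Cluster Operator once it observes the pause annotation; reconciliation remains paused until the annotation is set to false or removed.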
status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: "True" type: ReconciliationPaused Resuming from pause To resume reconciliation, you can set the annotation to false , or remove the annotation. Additional resources Finding the status of a custom resource 27.5. Maintenance time windows for rolling updates Maintenance time windows allow you to schedule certain rolling updates of your Kafka and ZooKeeper clusters to start at a convenient time. 27.5.1. Maintenance time windows overview In most cases, the Cluster Operator only updates your Kafka or ZooKeeper clusters in response to changes to the corresponding Kafka resource. This enables you to plan when to apply changes to a Kafka resource to minimize the impact on Kafka client applications. However, some updates to your Kafka and ZooKeeper clusters can happen without any corresponding change to the Kafka resource. For example, the Cluster Operator will need to perform a rolling restart if a CA (certificate authority) certificate that it manages is close to expiry. While a rolling restart of the pods should not affect availability of the service (assuming correct broker and topic configurations), it could affect performance of the Kafka client applications. Maintenance time windows allow you to schedule such spontaneous rolling updates of your Kafka and ZooKeeper clusters to start at a convenient time. If maintenance time windows are not configured for a cluster then it is possible that such spontaneous rolling updates will happen at an inconvenient time, such as during a predictable period of high load. 27.5.2. Maintenance time window definition You configure maintenance time windows by entering an array of strings in the Kafka.spec.maintenanceTimeWindows property. Each string is a cron expression interpreted as being in UTC (Coordinated Universal Time, which for practical purposes is the same as Greenwich Mean Time). The following example configures a single maintenance time window that starts at midnight and ends at 01:59am (UTC), on Sundays, Mondays, Tuesdays, Wednesdays, and Thursdays: # ... maintenanceTimeWindows: - "* * 0-1 ? * SUN,MON,TUE,WED,THU *" # ... In practice, maintenance windows should be set in conjunction with the Kafka.spec.clusterCa.renewalDays and Kafka.spec.clientsCa.renewalDays properties of the Kafka resource, to ensure that the necessary CA certificate renewal can be completed in the configured maintenance time windows. Note AMQ Streams does not schedule maintenance operations exactly according to the given windows. Instead, for each reconciliation, it checks whether a maintenance window is currently "open". This means that the start of maintenance operations within a given time window can be delayed by up to the Cluster Operator reconciliation interval. Maintenance time windows must therefore be at least this long. 27.5.3. Configuring a maintenance time window You can configure a maintenance time window for rolling updates triggered by supported processes. Prerequisites An OpenShift cluster. The Cluster Operator is running. Procedure Add or edit the maintenanceTimeWindows property in the Kafka resource. For example to allow maintenance between 0800 and 1059 and between 1400 and 1559 you would set the maintenanceTimeWindows as shown below: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... maintenanceTimeWindows: - "* * 8-10 * * ?" - "* * 14-15 * * ?" 
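# Each entry is a cron expression interpreted in UTC; these two open maintenance windows from 08:00 to 10:59 and from 14:00 to 15:59.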
Create or update the resource: oc apply -f <kafka_configuration_file> Additional resources Section 27.9.1, "Performing a rolling update using a pod management annotation" Section 27.9.2, "Performing a rolling update using a pod annotation" 27.6. Evicting pods with the AMQ Streams Drain Cleaner Kafka and ZooKeeper pods might be evicted during OpenShift upgrades, maintenance, or pod rescheduling. If your Kafka broker and ZooKeeper pods were deployed by AMQ Streams, you can use the AMQ Streams Drain Cleaner tool to handle the pod evictions. The AMQ Streams Drain Cleaner handles the eviction instead of OpenShift. You must set the podDisruptionBudget for your Kafka deployment to 0 (zero). OpenShift will then no longer be allowed to evict the pod automatically. By deploying the AMQ Streams Drain Cleaner, you can use the Cluster Operator to move Kafka pods instead of OpenShift. The Cluster Operator ensures that topics are never under-replicated. Kafka can remain operational during the eviction process. The Cluster Operator waits for topics to synchronize, as the OpenShift worker nodes drain consecutively. An admission webhook notifies the AMQ Streams Drain Cleaner of pod eviction requests to the Kubernetes API. The AMQ Streams Drain Cleaner then adds a rolling update annotation to the pods to be drained. This informs the Cluster Operator to perform a rolling update of an evicted pod. Note If you are not using the AMQ Streams Drain Cleaner, you can add pod annotations to perform rolling updates manually . Webhook configuration The AMQ Streams Drain Cleaner deployment files include a ValidatingWebhookConfiguration resource file. The resource provides the configuration for registering the webhook with the Kubernetes API. The configuration defines the rules for the Kubernetes API to follow in the event of a pod eviction request. The rules specify that only CREATE operations related to pods/eviction sub-resources are intercepted. If these rules are met, the API forwards the notification. The clientConfig points to the AMQ Streams Drain Cleaner service and /drainer endpoint that exposes the webhook. The webhook uses a secure TLS connection, which requires authentication. The caBundle property specifies the certificate chain to validate HTTPS communication. Certificates are encoded in Base64. Webhook configuration for pod eviction notifications apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration # ... webhooks: - name: strimzi-drain-cleaner.strimzi.io rules: - apiGroups: [""] apiVersions: ["v1"] operations: ["CREATE"] resources: ["pods/eviction"] scope: "Namespaced" clientConfig: service: namespace: "strimzi-drain-cleaner" name: "strimzi-drain-cleaner" path: /drainer port: 443 caBundle: Cg== # ... 27.6.1. Downloading the AMQ Streams Drain Cleaner deployment files To deploy and use the AMQ Streams Drain Cleaner, you need to download the deployment files. The AMQ Streams Drain Cleaner deployment files are available from the AMQ Streams software downloads page . 27.6.2. Deploying the AMQ Streams Drain Cleaner using installation files Deploy the AMQ Streams Drain Cleaner to the OpenShift cluster where the Cluster Operator and Kafka cluster are running. AMQ Streams sets a default PodDisruptionBudget (PDB) that allows only one Kafka or ZooKeeper pod to be unavailable at any given time. To use the Drain Cleaner for planned maintenance or upgrades, you must set a PDB of zero. 
This is to prevent voluntary evictions of pods, and ensure that the Kafka or ZooKeeper cluster remains available. You do this by setting the maxUnavailable value to zero in the Kafka or ZooKeeper template. StrimziPodSet custom resources manage Kafka and ZooKeeper pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is converted to a minAvailable value. For example, if there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3 , requiring all three broker pods to be available and allowing zero pods to be unavailable. Prerequisites You have downloaded the AMQ Streams Drain Cleaner deployment files . You have a highly available Kafka cluster deployment running with OpenShift worker nodes that you would like to update. Topics are replicated for high availability. Topic configuration specifies a replication factor of at least 3 and a minimum number of in-sync replicas to 1 less than the replication factor. Kafka topic replicated for high availability apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # ... min.insync.replicas: 2 # ... Excluding Kafka or ZooKeeper If you don't want to include Kafka or ZooKeeper pods in Drain Cleaner operations, change the default environment variables in the Drain Cleaner Deployment configuration file. Set STRIMZI_DRAIN_KAFKA to false to exclude Kafka pods Set STRIMZI_DRAIN_ZOOKEEPER to false to exclude ZooKeeper pods Example configuration to exclude ZooKeeper pods apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # ... env: - name: STRIMZI_DRAIN_KAFKA value: "true" - name: STRIMZI_DRAIN_ZOOKEEPER value: "false" # ... Procedure Set maxUnavailable to 0 (zero) in the Kafka and ZooKeeper sections of the Kafka resource using template settings. Specifying a pod disruption budget apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: template: podDisruptionBudget: maxUnavailable: 0 # ... zookeeper: template: podDisruptionBudget: maxUnavailable: 0 # ... This setting prevents the automatic eviction of pods in case of planned disruptions, leaving the AMQ Streams Drain Cleaner and Cluster Operator to roll the pods on different worker nodes. Add the same configuration for ZooKeeper if you want to use AMQ Streams Drain Cleaner to drain ZooKeeper nodes. Update the Kafka resource: oc apply -f <kafka_configuration_file> Deploy the AMQ Streams Drain Cleaner. To run the Drain Cleaner on OpenShift, apply the resources in the /install/drain-cleaner/openshift directory. oc apply -f ./install/drain-cleaner/openshift 27.6.3. Using the AMQ Streams Drain Cleaner Use the AMQ Streams Drain Cleaner in combination with the Cluster Operator to move Kafka broker or ZooKeeper pods from nodes that are being drained. When you run the AMQ Streams Drain Cleaner, it annotates pods with a rolling update pod annotation. The Cluster Operator performs rolling updates based on the annotation. Prerequisites You have deployed the AMQ Streams Drain Cleaner . Procedure Drain a specified OpenShift node hosting the Kafka broker or ZooKeeper pods. 
oc get nodes oc drain <name-of-node> --delete-emptydir-data --ignore-daemonsets --timeout=6000s --force Check the eviction events in the AMQ Streams Drain Cleaner log to verify that the pods have been annotated for restart. AMQ Streams Drain Cleaner log shows annotations of pods INFO ... Received eviction webhook for Pod my-cluster-zookeeper-2 in namespace my-project INFO ... Pod my-cluster-zookeeper-2 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-zookeeper-2 in namespace my-project found and annotated for restart INFO ... Received eviction webhook for Pod my-cluster-kafka-0 in namespace my-project INFO ... Pod my-cluster-kafka-0 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-kafka-0 in namespace my-project found and annotated for restart Check the reconciliation events in the Cluster Operator log to verify the rolling updates. Cluster Operator log shows rolling updates INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-zookeeper-2 INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-kafka-0 INFO AbstractOperator:500 - Reconciliation #13(timer) Kafka(my-project/my-cluster): reconciled 27.6.4. Watching the TLS certificates used by the AMQ Streams Drain Cleaner By default, the Drain Cleaner deployment watches the secret containing the TLS certificates it uses for authentication. The Drain Cleaner watches for changes, such as certificate renewals. If it detects a change, it restarts to reload the TLS certificates. The Drain Cleaner installation files enable this behavior by default. But you can disable the watching of certificates by setting the STRIMZI_CERTIFICATE_WATCH_ENABLED environment variable to false in the Deployment configuration ( 060-Deployment.yaml ) of the Drain Cleaner installation files. With STRIMZI_CERTIFICATE_WATCH_ENABLED enabled, you can also use the following environment variables for watching TLS certificates. Table 27.3. Drain Cleaner environment variables for watching TLS certificates Environment Variable Description Default STRIMZI_CERTIFICATE_WATCH_ENABLED Enables or disables the certificate watch false STRIMZI_CERTIFICATE_WATCH_NAMESPACE The namespace where the Drain Cleaner is deployed and where the certificate secret exists strimzi-drain-cleaner STRIMZI_CERTIFICATE_WATCH_POD_NAME The Drain Cleaner pod name - STRIMZI_CERTIFICATE_WATCH_SECRET_NAME The name of the secret containing TLS certificates strimzi-drain-cleaner STRIMZI_CERTIFICATE_WATCH_SECRET_KEYS The list of fields inside the secret that contain the TLS certificates tls.crt, tls.key Example environment variable configuration to control watch operations apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-drain-cleaner labels: app: strimzi-drain-cleaner namespace: strimzi-drain-cleaner spec: # ... spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # ... env: - name: STRIMZI_DRAIN_KAFKA value: "true" - name: STRIMZI_DRAIN_ZOOKEEPER value: "true" - name: STRIMZI_CERTIFICATE_WATCH_ENABLED value: "true" - name: STRIMZI_CERTIFICATE_WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_CERTIFICATE_WATCH_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name # ... Tip Use the Downward API mechanism to configure STRIMZI_CERTIFICATE_WATCH_NAMESPACE and STRIMZI_CERTIFICATE_WATCH_POD_NAME . 27.7.
Deleting Kafka nodes using annotations This procedure describes how to delete an existing Kafka node by using an OpenShift annotation. Deleting a Kafka node consists of deleting both the Pod on which the Kafka broker is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically. Warning Deleting a PersistentVolumeClaim can cause permanent data loss and the availability of your cluster cannot be guaranteed. The following procedure should only be performed if you have encountered storage issues. Prerequisites A running Cluster Operator Procedure Find the name of the Pod that you want to delete. Kafka broker pods are named <cluster-name> -kafka- <index> , where <index> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-kafka-0 . Annotate the Pod resource in OpenShift. Use oc annotate : oc annotate pod cluster-name -kafka- index strimzi.io/delete-pod-and-pvc=true Wait for the reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated. 27.8. Deleting ZooKeeper nodes using annotations This procedure describes how to delete an existing ZooKeeper node by using an OpenShift annotation. Deleting a ZooKeeper node consists of deleting both the Pod on which ZooKeeper is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically. Warning Deleting a PersistentVolumeClaim can cause permanent data loss and the availability of your cluster cannot be guaranteed. The following procedure should only be performed if you have encountered storage issues. Prerequisites A running Cluster Operator Procedure Find the name of the Pod that you want to delete. ZooKeeper pods are named <cluster-name> -zookeeper- <index> , where <index> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-zookeeper-0 . Annotate the Pod resource in OpenShift. Use oc annotate : oc annotate pod cluster-name -zookeeper- index strimzi.io/delete-pod-and-pvc=true Wait for the reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated. 27.9. Starting rolling updates of Kafka and ZooKeeper clusters using annotations AMQ Streams supports the use of annotations on resources to manually trigger a rolling update of Kafka and ZooKeeper clusters through the Cluster Operator. Rolling updates restart the pods of the resource with new ones. Manually performing a rolling update on a specific pod or set of pods is usually only required in exceptional circumstances. However, rather than deleting the pods directly, if you perform the rolling update through the Cluster Operator you ensure the following: The manual deletion of the pod does not conflict with simultaneous Cluster Operator operations, such as deleting other pods in parallel. The Cluster Operator logic handles the Kafka configuration specifications, such as the number of in-sync replicas. 27.9.1. Performing a rolling update using a pod management annotation This procedure describes how to trigger a rolling update of a Kafka cluster or ZooKeeper cluster. To trigger the update, you add an annotation to the StrimziPodSet that manages the pods running on the cluster. 
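If you are not sure which StrimziPodSet resources exist for your cluster, you can list them by the cluster label before annotating. This is a minimal sketch that assumes the my-cluster name used elsewhere in this guide:
oc get strimzipodset -l strimzi.io/cluster=my-cluster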
Prerequisites To perform a manual rolling update, you need a running Cluster Operator and Kafka cluster. Procedure Find the name of the resource that controls the Kafka or ZooKeeper pods you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding names are my-cluster-kafka and my-cluster-zookeeper . Use oc annotate to annotate the appropriate resource in OpenShift. Annotating a StrimziPodSet oc annotate strimzipodset <cluster_name> -kafka strimzi.io/manual-rolling-update=true oc annotate strimzipodset <cluster_name> -zookeeper strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated resource is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the resource. 27.9.2. Performing a rolling update using a pod annotation This procedure describes how to manually trigger a rolling update of an existing Kafka cluster or ZooKeeper cluster using an OpenShift Pod annotation. When multiple pods are annotated, consecutive rolling updates are performed within the same reconciliation run. Prerequisites To perform a manual rolling update, you need a running Cluster Operator and Kafka cluster. You can perform a rolling update on a Kafka cluster regardless of the topic replication factor used. But for Kafka to stay operational during the update, you'll need the following: A highly available Kafka cluster deployment running with nodes that you wish to update. Topics replicated for high availability. Topic configuration specifies a replication factor of at least 3 and a minimum number of in-sync replicas to 1 less than the replication factor. Kafka topic replicated for high availability apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # ... min.insync.replicas: 2 # ... Procedure Find the name of the Kafka or ZooKeeper Pod you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding Pod names are my-cluster-kafka-index and my-cluster-zookeeper-index . The index starts at zero and ends at the total number of replicas minus one. Annotate the Pod resource in OpenShift. Use oc annotate : oc annotate pod cluster-name -kafka- index strimzi.io/manual-rolling-update=true oc annotate pod cluster-name -zookeeper- index strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of the annotated Pod is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of a pod is complete, the annotation is removed from the Pod . 27.10. Performing restarts of MirrorMaker 2 connectors using annotations This procedure describes how to manually trigger a restart of a Kafka MirrorMaker 2 connector by using an OpenShift annotation. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaMirrorMaker2 custom resource that controls the Kafka MirrorMaker 2 connector you want to restart: oc get KafkaMirrorMaker2 Find the name of the Kafka MirrorMaker 2 connector to be restarted from the KafkaMirrorMaker2 custom resource. oc describe KafkaMirrorMaker2 KAFKAMIRRORMAKER-2-NAME To restart the connector, annotate the KafkaMirrorMaker2 resource in OpenShift. 
In this example, oc annotate restarts a connector named my-source->my-target.MirrorSourceConnector : oc annotate KafkaMirrorMaker2 KAFKAMIRRORMAKER-2-NAME "strimzi.io/restart-connector=my-source->my-target.MirrorSourceConnector" Wait for the reconciliation to occur (every two minutes by default). The Kafka MirrorMaker 2 connector is restarted, as long as the annotation was detected by the reconciliation process. When the restart request is accepted, the annotation is removed from the KafkaMirrorMaker2 custom resource. Additional resources Kafka MirrorMaker 2 cluster configuration . 27.11. Performing restarts of MirrorMaker 2 connector task using annotations This procedure describes how to manually trigger a restart of a Kafka MirrorMaker 2 connector task by using an OpenShift annotation. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaMirrorMaker2 custom resource that controls the Kafka MirrorMaker 2 connector you want to restart: oc get KafkaMirrorMaker2 Find the name of the Kafka MirrorMaker 2 connector and the ID of the task to be restarted from the KafkaMirrorMaker2 custom resource. Task IDs are non-negative integers, starting from 0. oc describe KafkaMirrorMaker2 KAFKAMIRRORMAKER-2-NAME To restart the connector task, annotate the KafkaMirrorMaker2 resource in OpenShift. In this example, oc annotate restarts task 0 of a connector named my-source->my-target.MirrorSourceConnector : oc annotate KafkaMirrorMaker2 KAFKAMIRRORMAKER-2-NAME "strimzi.io/restart-connector-task=my-source->my-target.MirrorSourceConnector:0" Wait for the reconciliation to occur (every two minutes by default). The Kafka MirrorMaker 2 connector task is restarted, as long as the annotation was detected by the reconciliation process. When the restart task request is accepted, the annotation is removed from the KafkaMirrorMaker2 custom resource. Additional resources Kafka MirrorMaker 2 cluster configuration . 27.12. Recovering a cluster from persistent volumes You can recover a Kafka cluster from persistent volumes (PVs) if they are still present. You might want to do this, for example, after: A namespace was deleted unintentionally A whole OpenShift cluster is lost, but the PVs remain in the infrastructure 27.12.1. Recovery from namespace deletion Recovery from namespace deletion is possible because of the relationship between persistent volumes and namespaces. A PersistentVolume (PV) is a storage resource that lives outside of a namespace. A PV is mounted into a Kafka pod using a PersistentVolumeClaim (PVC), which lives inside a namespace. The reclaim policy for a PV tells a cluster how to act when a namespace is deleted. If the reclaim policy is set as: Delete (default), PVs are deleted when PVCs are deleted within a namespace Retain , PVs are not deleted when a namespace is deleted To ensure that you can recover from a PV if a namespace is deleted unintentionally, the policy must be reset from Delete to Retain in the PV specification using the persistentVolumeReclaimPolicy property: apiVersion: v1 kind: PersistentVolume # ... spec: # ... persistentVolumeReclaimPolicy: Retain Alternatively, PVs can inherit the reclaim policy of an associated storage class. Storage classes are used for dynamic volume allocation. By configuring the reclaimPolicy property for the storage class, PVs that use the storage class are created with the appropriate reclaim policy. The storage class is configured for the PV using the storageClassName property. 
apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # ... # ... reclaimPolicy: Retain apiVersion: v1 kind: PersistentVolume # ... spec: # ... storageClassName: gp2-retain Note If you are using Retain as the reclaim policy, but you want to delete an entire cluster, you need to delete the PVs manually. Otherwise they will not be deleted, and may cause unnecessary expenditure on resources. 27.12.2. Recovery from loss of an OpenShift cluster When a cluster is lost, you can use the data from disks/volumes to recover the cluster if they were preserved within the infrastructure. The recovery procedure is the same as with namespace deletion, assuming PVs can be recovered and they were created manually. 27.12.3. Recovering a deleted cluster from persistent volumes This procedure describes how to recover a deleted cluster from persistent volumes (PVs). In this situation, the Topic Operator identifies that topics exist in Kafka, but the KafkaTopic resources do not exist. When you get to the step to recreate your cluster, you have two options: Use Option 1 when you can recover all KafkaTopic resources. The KafkaTopic resources must therefore be recovered before the cluster is started so that the corresponding topics are not deleted by the Topic Operator. Use Option 2 when you are unable to recover all KafkaTopic resources. In this case, you deploy your cluster without the Topic Operator, delete the Topic Operator topic store metadata, and then redeploy the Kafka cluster with the Topic Operator so it can recreate the KafkaTopic resources from the corresponding topics. Note If the Topic Operator is not deployed, you only need to recover the PersistentVolumeClaim (PVC) resources. Before you begin In this procedure, it is essential that PVs are mounted into the correct PVC to avoid data corruption. A volumeName is specified for the PVC and this must match the name of the PV. For more information, see Persistent storage . Note The procedure does not include recovery of KafkaUser resources, which must be recreated manually. If passwords and certificates need to be retained, secrets must be recreated before creating the KafkaUser resources. Procedure Check information on the PVs in the cluster: oc get pv Information is presented for PVs with data. Example output showing columns important to this procedure: NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2 NAME shows the name of each PV. RECLAIM POLICY shows that PVs are retained . CLAIM shows the link to the original PVCs. 
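If any of the listed PVs still show a Delete reclaim policy, you can switch them to Retain before you continue so that they are not removed while you recreate the claims. This is a minimal sketch with a placeholder PV name:
oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'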
Recreate the original namespace: oc create namespace myproject Recreate the original PVC resource specifications, linking the PVCs to the appropriate PV: For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c Edit the PV specifications to delete the claimRef properties that bound the original PVC. For example: apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: "yes" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: "<date>" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: "39431" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem In the example, the following properties are deleted: claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea Deploy the Cluster Operator. oc create -f install/cluster-operator -n my-project Recreate your cluster. Follow the steps depending on whether or not you have all the KafkaTopic resources needed to recreate your cluster. Option 1 : If you have all the KafkaTopic resources that existed before you lost your cluster, including internal topics such as committed offsets from __consumer_offsets : Recreate all KafkaTopic resources. It is essential that you recreate the resources before deploying the cluster, or the Topic Operator will delete the topics. Deploy the Kafka cluster. For example: oc apply -f kafka.yaml Option 2 : If you do not have all the KafkaTopic resources that existed before you lost your cluster: Deploy the Kafka cluster, as with the first option, but without the Topic Operator by removing the topicOperator property from the Kafka resource before deploying. If you include the Topic Operator in the deployment, the Topic Operator will delete all the topics. Delete the internal topic store topics from the Kafka cluster: oc run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete The command must correspond to the type of listener and authentication used to access the Kafka cluster. 
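Optionally, you can confirm that the topic store topics were removed by listing the remaining topics with the same client image. This is a sketch only; the kafka-admin-verify pod name is arbitrary, and the bootstrap address must be adjusted to match the listener and authentication you used for the delete command:
oc run kafka-admin-verify -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --list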
Enable the Topic Operator by redeploying the Kafka cluster with the topicOperator property to recreate the KafkaTopic resources. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} 1 #... 1 Here we show the default configuration, which has no additional properties. You specify the required configuration using the properties described in the EntityTopicOperatorSpec schema reference . Verify the recovery by listing the KafkaTopic resources: oc get KafkaTopic 27.13. Uninstalling AMQ Streams You can uninstall AMQ Streams on OpenShift 4.12 and later from the OperatorHub using the OpenShift Container Platform web console or CLI. Use the same approach you used to install AMQ Streams. When you uninstall AMQ Streams, you will need to identify resources created specifically for a deployment and referenced from the AMQ Streams resource. Such resources include: Secrets (Custom CAs and certificates, Kafka Connect secrets, and other Kafka secrets) Logging ConfigMaps (of type external ) These are resources referenced by Kafka , KafkaConnect , KafkaMirrorMaker , or KafkaBridge configuration. Warning Deleting CustomResourceDefinitions results in the garbage collection of the corresponding custom resources ( Kafka , KafkaConnect , KafkaMirrorMaker , or KafkaBridge ) and the resources dependent on them (Deployments, StatefulSets, and other dependent resources). 27.13.1. Uninstalling AMQ Streams from the OperatorHub using the web console This procedure describes how to uninstall AMQ Streams from the OperatorHub and remove resources related to the deployment. You can perform the steps from the console or use alternative CLI commands. Prerequisites Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions. You have identified the resources to be deleted. You can use the following oc CLI command to find resources and also verify that they have been removed when you have uninstalled AMQ Streams. Command to find resources related to an AMQ Streams deployment oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource you are checking, such as secret or configmap . Procedure Navigate in the OpenShift web console to Operators > Installed Operators . For the installed AMQ Streams operator, select the options icon (three vertical dots) and click Uninstall Operator . The operator is removed from Installed Operators . Navigate to Home > Projects and select the project where you installed AMQ Streams and the Kafka components. Click the options under Inventory to delete related resources. Resources include the following: Deployments StatefulSets Pods Services ConfigMaps Secrets Tip Use the search to find related resources that begin with the name of the Kafka cluster. You can also find the resources under Workloads . Alternative CLI commands You can use CLI commands to uninstall AMQ Streams from the OperatorHub. Delete the AMQ Streams subscription. oc delete subscription amq-streams -n openshift-operators Delete the cluster service version (CSV). oc delete csv amqstreams. <version> -n openshift-operators Remove related CRDs. oc get crd -l app=strimzi -o name | xargs oc delete 27.13.2. Uninstalling AMQ Streams using the CLI This procedure describes how to use the oc command-line tool to uninstall AMQ Streams and remove resources related to the deployment. 
Prerequisites Access to an OpenShift cluster using an account with cluster-admin or strimzi-admin permissions. You have identified the resources to be deleted. You can use the following oc CLI command to find resources and also verify that they have been removed when you have uninstalled AMQ Streams. Command to find resources related to an AMQ Streams deployment oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource you are checking, such as secret or configmap . Procedure Delete the Cluster Operator Deployment , related CustomResourceDefinitions , and RBAC resources. Specify the installation files used to deploy the Cluster Operator. oc delete -f install/cluster-operator Delete the resources you identified in the prerequisites. oc delete <resource_type> <resource_name> -n <namespace> Replace <resource_type> with the type of resource you are deleting and <resource_name> with the name of the resource. Example to delete a secret oc delete secret my-cluster-clients-ca-cert -n my-project 27.14. Frequently asked questions 27.14.1. Questions related to the Cluster Operator 27.14.1.1. Why do I need cluster administrator privileges to install AMQ Streams? To install AMQ Streams, you need to be able to create the following cluster-scoped resources: Custom Resource Definitions (CRDs) to instruct OpenShift about resources that are specific to AMQ Streams, such as Kafka and KafkaConnect ClusterRoles and ClusterRoleBindings Cluster-scoped resources, which are not scoped to a particular OpenShift namespace, typically require cluster administrator privileges to install. As a cluster administrator, you can inspect all the resources being installed (in the /install/ directory) to ensure that the ClusterRoles do not grant unnecessary privileges. After installation, the Cluster Operator runs as a regular Deployment , so any standard (non-admin) OpenShift user with privileges to access the Deployment can configure it. The cluster administrator can grant standard users the privileges necessary to manage Kafka custom resources. See also: Why does the Cluster Operator need to create ClusterRoleBindings ? Can standard OpenShift users create Kafka custom resources? 27.14.1.2. Why does the Cluster Operator need to create ClusterRoleBindings ? OpenShift has built-in privilege escalation prevention , which means that the Cluster Operator cannot grant privileges it does not have itself, specifically, it cannot grant such privileges in a namespace it cannot access. Therefore, the Cluster Operator must have the privileges necessary for all the components it orchestrates. The Cluster Operator needs to be able to grant access so that: The Topic Operator can manage KafkaTopics , by creating Roles and RoleBindings in the namespace that the operator runs in The User Operator can manage KafkaUsers , by creating Roles and RoleBindings in the namespace that the operator runs in The failure domain of a Node is discovered by AMQ Streams, by creating a ClusterRoleBinding When using rack-aware partition assignment, the broker pod needs to be able to get information about the Node it is running on, for example, the Availability Zone in Amazon AWS. A Node is a cluster-scoped resource, so access to it can only be granted through a ClusterRoleBinding , not a namespace-scoped RoleBinding . 27.14.1.3. Can standard OpenShift users create Kafka custom resources? 
By default, standard OpenShift users will not have the privileges necessary to manage the custom resources handled by the Cluster Operator. The cluster administrator can grant a user the necessary privileges using OpenShift RBAC resources. For more information, see Section 4.5, "Designating AMQ Streams administrators" . 27.14.1.4. What do the failed to acquire lock warnings in the log mean? For each cluster, the Cluster Operator executes only one operation at a time. The Cluster Operator uses locks to make sure that there are never two parallel operations running for the same cluster. Other operations must wait until the current operation completes before the lock is released. INFO Examples of cluster operations include cluster creation , rolling update , scale down , and scale up . If the waiting time for the lock takes too long, the operation times out and the following warning message is printed to the log: 2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster Depending on the exact configuration of STRIMZI_FULL_RECONCILIATION_INTERVAL_MS and STRIMZI_OPERATION_TIMEOUT_MS , this warning message might appear occasionally without indicating any underlying issues. Operations that time out are picked up in the periodic reconciliation, so that the operation can acquire the lock and execute again. Should this message appear periodically, even in situations when there should be no other operations running for a given cluster, it might indicate that the lock was not properly released due to an error. If this is the case, try restarting the Cluster Operator. 27.14.1.5. Why is hostname verification failing when connecting to NodePorts using TLS? Currently, off-cluster access using NodePorts with TLS encryption enabled does not support TLS hostname verification. As a result, the clients that verify the hostname will fail to connect. For example, the Java client will fail with the following exception: Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more To connect, you must disable hostname verification. In the Java client, you can do this by setting the configuration option ssl.endpoint.identification.algorithm to an empty string. When configuring the client using a properties file, you can do it this way: ssl.endpoint.identification.algorithm= When configuring the client directly in Java, set the configuration option to an empty string: props.put("ssl.endpoint.identification.algorithm", "");
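For a quick command-line test with the Kafka console tools, the same option can be placed in a client properties file. The following is a minimal sketch rather than a recommended setup: it assumes the cluster is named my-cluster, that <node_address> and <node_port> are the address and NodePort of your external listener, and that the topic my-topic and the changeit truststore password are examples only.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
keytool -importcert -noprompt -alias cluster-ca -file ca.crt -keystore truststore.jks -storepass changeit
cat > client.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=./truststore.jks
ssl.truststore.password=changeit
ssl.endpoint.identification.algorithm=
EOF
bin/kafka-console-consumer.sh --bootstrap-server <node_address>:<node_port> --topic my-topic --from-beginning --consumer.config client.properties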
[ "get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3", "get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple", "get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user", "delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io \"my-cluster\" deleted kafkatopic.kafka.strimzi.io \"kafka-apps\" deleted kafkauser.kafka.strimzi.io \"my-user\" deleted", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"tls\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-bootstrap.myproject.svc:9093", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: clusterId: XP9FP2P-RByvEy0W4cOEUA 1 conditions: 2 - lastTransitionTime: '2023-01-20T17:56:29.396588Z' status: 'True' type: Ready 3 listeners: 4 - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9092 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9092' name: plain type: plain - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9093 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9093' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: tls type: tls - addresses: - host: >- 2054284155.us-east-2.elb.amazonaws.com port: 9095 bootstrapServers: >- 2054284155.us-east-2.elb.amazonaws.com:9095 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external2 type: external2 - addresses: - host: ip-10-0-172-202.us-east-2.compute.internal port: 31644 bootstrapServers: 'ip-10-0-172-202.us-east-2.compute.internal:31644' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external1 type: external1 observedGeneration: 3 5", "get kafka <kafka_resource_name> -o jsonpath='{.status}'", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service", "get service -l strimzi.io/discovery=true", "exec -ti my-cluster -zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls /", "annotate <kind_of_custom_resource> <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"", "annotate KafkaConnect my-connect strimzi.io/pause-reconciliation=\"true\"", "describe <kind_of_custom_resource> <name_of_custom_resource>", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: \"true\" strimzi.io/use-connector-resources: \"true\" creationTimestamp: 2021-03-12T10:47:11Z # spec: # status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: \"True\" type: ReconciliationPaused", "maintenanceTimeWindows: - \"* * 0-1 ? 
* SUN,MON,TUE,WED,THU *\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # maintenanceTimeWindows: - \"* * 8-10 * * ?\" - \"* * 14-15 * * ?\"", "apply -f <kafka_configuration_file>", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration webhooks: - name: strimzi-drain-cleaner.strimzi.io rules: - apiGroups: [\"\"] apiVersions: [\"v1\"] operations: [\"CREATE\"] resources: [\"pods/eviction\"] scope: \"Namespaced\" clientConfig: service: namespace: \"strimzi-drain-cleaner\" name: \"strimzi-drain-cleaner\" path: /drainer port: 443 caBundle: Cg== #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #", "apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # env: - name: STRIMZI_DRAIN_KAFKA value: \"true\" - name: STRIMZI_DRAIN_ZOOKEEPER value: \"false\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: template: podDisruptionBudget: maxUnavailable: 0 # zookeeper: template: podDisruptionBudget: maxUnavailable: 0 #", "apply -f <kafka_configuration_file>", "apply -f ./install/drain-cleaner/openshift", "get nodes drain <name-of-node> --delete-emptydir-data --ignore-daemonsets --timeout=6000s --force", "INFO ... Received eviction webhook for Pod my-cluster-zookeeper-2 in namespace my-project INFO ... Pod my-cluster-zookeeper-2 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-zookeeper-2 in namespace my-project found and annotated for restart INFO ... Received eviction webhook for Pod my-cluster-kafka-0 in namespace my-project INFO ... Pod my-cluster-kafka-0 in namespace my-project will be annotated for restart INFO ... 
Pod my-cluster-kafka-0 in namespace my-project found and annotated for restart", "INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-zookeeper-2 INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-kafka-0 INFO AbstractOperator:500 - Reconciliation #13(timer) Kafka(my-project/my-cluster): reconciled", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-drain-cleaner labels: app: strimzi-drain-cleaner namespace: strimzi-drain-cleaner spec: # spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # env: - name: STRIMZI_DRAIN_KAFKA value: \"true\" - name: STRIMZI_DRAIN_ZOOKEEPER value: \"true\" - name: STRIMZI_CERTIFICATE_WATCH_ENABLED value: \"true\" - name: STRIMZI_CERTIFICATE_WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_CERTIFICATE_WATCH_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name #", "annotate pod cluster-name -kafka- index strimzi.io/delete-pod-and-pvc=true", "annotate pod cluster-name -zookeeper- index strimzi.io/delete-pod-and-pvc=true", "annotate strimzipodset <cluster_name> -kafka strimzi.io/manual-rolling-update=true annotate strimzipodset <cluster_name> -zookeeper strimzi.io/manual-rolling-update=true", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #", "annotate pod cluster-name -kafka- index strimzi.io/manual-rolling-update=true annotate pod cluster-name -zookeeper- index strimzi.io/manual-rolling-update=true", "get KafkaMirrorMaker2", "describe KafkaMirrorMaker2 KAFKAMIRRORMAKER-2-NAME", "annotate KafkaMirrorMaker2 KAFKAMIRRORMAKER-2-NAME \"strimzi.io/restart-connector=my-source->my-target.MirrorSourceConnector\"", "get KafkaMirrorMaker2", "describe KafkaMirrorMaker2 KAFKAMIRRORMAKER-2-NAME", "annotate KafkaMirrorMaker2 KAFKAMIRRORMAKER-2-NAME \"strimzi.io/restart-connector-task=my-source->my-target.MirrorSourceConnector:0\"", "apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain", "apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain", "apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain", "get pv", "NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... 
myproject/data-0-my-cluster-kafka-2", "create namespace myproject", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c", "apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem", "claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea", "create -f install/cluster-operator -n my-project", "apply -f kafka.yaml", "run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} 1 #", "get KafkaTopic", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete subscription amq-streams -n openshift-operators", "delete csv amqstreams. 
<version> -n openshift-operators", "get crd -l app=strimzi -o name | xargs oc delete", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "delete -f install/cluster-operator", "delete <resource_type> <resource_name> -n <namespace>", "delete secret my-cluster-clients-ca-cert -n my-project", "2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster", "Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more", "ssl.endpoint.identification.algorithm=", "props.put(\"ssl.endpoint.identification.algorithm\", \"\");" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/management-tasks-str
Server Administration Guide
Server Administration Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/index
Chapter 9. Accessing monitoring APIs by using the CLI
Chapter 9. Accessing monitoring APIs by using the CLI In Red Hat OpenShift Service on AWS, you can access web service APIs for some monitoring components from the command line interface (CLI). Important In certain situations, accessing API endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data. To avoid these issues, follow these recommendations: Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds. Do not try to retrieve all metrics data through the /federate endpoint for Prometheus. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. 9.1. About accessing monitoring web service APIs You can directly access web service API endpoints from the command line for the following monitoring stack components: Prometheus Alertmanager Thanos Ruler Thanos Querier Important To access Thanos Ruler and Thanos Querier service APIs, the requesting account must have get permission on the namespaces resource, which can be granted by binding the cluster-monitoring-view cluster role to the account. When you access web service API endpoints for monitoring components, be aware of the following limitations: You can only use bearer token authentication to access API endpoints. You can only access endpoints in the /api path for a route. If you try to access an API endpoint in a web browser, an Application is not available error occurs. To access monitoring features in a web browser, use the Red Hat OpenShift Service on AWS web console to review monitoring dashboards. Additional resources Reviewing monitoring dashboards 9.2. Accessing a monitoring web service API The following example shows how to query the service API receivers for the Alertmanager service used in core platform monitoring. You can use a similar method to access the prometheus-k8s service for core platform Prometheus and the thanos-ruler service for Thanos Ruler. Prerequisites You are logged in to an account that is bound against the monitoring-alertmanager-edit role in the openshift-monitoring namespace. You are logged in to an account that has permission to get the Alertmanager API route. Note If your account does not have permission to get the Alertmanager API route, a cluster administrator can provide the URL for the route. Procedure Extract an authentication token by running the following command: USD TOKEN=USD(oc whoami -t) Extract the alertmanager-main API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath={.status.ingress[].host}) Query the service API receivers for Alertmanager by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v2/receivers" 9.3. Querying metrics by using the federation endpoint for Prometheus You can use the federation endpoint for Prometheus to scrape platform and user-defined metrics from a network location outside the cluster. To do so, access the Prometheus /federate endpoint for the cluster via an Red Hat OpenShift Service on AWS route. Important A delay in retrieving metrics data occurs when you use federation. This delay can affect the accuracy and timeliness of the scraped metrics. 
Using the federation endpoint can also degrade the performance and scalability of your cluster, especially if you use the federation endpoint to retrieve large amounts of metrics data. To avoid these issues, follow these recommendations: Do not try to retrieve all metrics data via the federation endpoint for Prometheus. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. Avoid frequent querying of the federation endpoint for Prometheus. Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. Note You can only use bearer token authentication to access the Prometheus federation endpoint. You are logged in to an account that has permission to get the Prometheus federation route. Note If your account does not have permission to get the Prometheus federation route, a cluster administrator can provide the URL for the route. Procedure Retrieve the bearer token by running the following the command: USD TOKEN=USD(oc whoami -t) Get the Prometheus federation route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath={.status.ingress[].host}) Query metrics from the /federate route. The following example command queries up metrics: USD curl -G -k -H "Authorization: Bearer USDTOKEN" https://USDHOST/federate --data-urlencode 'match[]=up' Example output # TYPE up untyped up{apiserver="kube-apiserver",endpoint="https",instance="10.0.143.148:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035322214 up{apiserver="kube-apiserver",endpoint="https",instance="10.0.148.166:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035338597 up{apiserver="kube-apiserver",endpoint="https",instance="10.0.173.16:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035343834 ... 9.4. Accessing metrics from outside the cluster for custom applications You can query Prometheus metrics from outside the cluster when monitoring your own services with user-defined projects. Access this data from outside the cluster by using the thanos-querier route. This access only supports using a bearer token for authentication. Prerequisites You have deployed your own service, following the "Enabling monitoring for user-defined projects" procedure. You are logged in to an account with the cluster-monitoring-view cluster role, which provides permission to access the Thanos Querier API. You are logged in to an account that has permission to get the Thanos Querier API route. Note If your account does not have permission to get the Thanos Querier API route, a cluster administrator can provide the URL for the route. 
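If the account does not yet have the required role, a cluster administrator can bind the cluster-monitoring-view cluster role before you continue. This is a sketch with a placeholder user name:
oc adm policy add-cluster-role-to-user cluster-monitoring-view <username>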
Procedure Extract an authentication token to connect to Prometheus by running the following command: USD TOKEN=USD(oc whoami -t) Extract the thanos-querier API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath={.status.ingress[].host}) Set the namespace to the namespace in which your service is running by using the following command: USD NAMESPACE=ns1 Query the metrics of your own services in the command line by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/query?" --data-urlencode "query=up{namespace='USDNAMESPACE'}" The output shows the status for each application pod that Prometheus is scraping: The formatted example output { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "__name__": "up", "endpoint": "web", "instance": "10.129.0.46:8080", "job": "prometheus-example-app", "namespace": "ns1", "pod": "prometheus-example-app-68d47c4fb6-jztp2", "service": "prometheus-example-app" }, "value": [ 1591881154.748, "1" ] } ], } } Note The formatted example output uses a filtering tool, such as jq , to provide the formatted indented JSON. See the jq Manual (jq documentation) for more information about using jq . The command requests an instant query endpoint of the Thanos Querier service, which evaluates selectors at one point in time. 9.5. Resources reference for the Cluster Monitoring Operator This document describes the following resources deployed and managed by the Cluster Monitoring Operator (CMO): Routes Services Use this information when you want to configure API endpoint connections to retrieve, send, or query metrics data. Important In certain situations, accessing endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data. To avoid these issues, follow these recommendations: Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds. Do not try to retrieve all metrics data via the /federate endpoint. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. 9.5.1. CMO routes resources 9.5.1.1. openshift-monitoring/alertmanager-main Expose the /api endpoints of the alertmanager-main service via a router. 9.5.1.2. openshift-monitoring/prometheus-k8s Expose the /api endpoints of the prometheus-k8s service via a router. 9.5.1.3. openshift-monitoring/prometheus-k8s-federate Expose the /federate endpoint of the prometheus-k8s service via a router. 9.5.1.4. openshift-user-workload-monitoring/federate Expose the /federate endpoint of the prometheus-user-workload service via a router. 9.5.1.5. openshift-monitoring/thanos-querier Expose the /api endpoints of the thanos-querier service via a router. 9.5.1.6. openshift-user-workload-monitoring/thanos-ruler Expose the /api endpoints of the thanos-ruler service via a router. 9.5.2. CMO services resources 9.5.2.1. openshift-monitoring/prometheus-operator-admission-webhook Expose the admission webhook service which validates PrometheusRules and AlertmanagerConfig custom resources on port 8443. 9.5.2.2. openshift-user-workload-monitoring/alertmanager-user-workload Expose the user-defined Alertmanager web server within the cluster on the following ports: Port 9095 provides access to the Alertmanager endpoints. 
Granting access requires binding a user to the monitoring-alertmanager-api-reader role (for read-only operations) or monitoring-alertmanager-api-writer role in the openshift-user-workload-monitoring project. Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit cluster role or monitoring-edit cluster role in the project. Port 9097 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. 9.5.2.3. openshift-monitoring/alertmanager-main Expose the Alertmanager web server within the cluster on the following ports: Port 9094 provides access to all the Alertmanager endpoints. Granting access requires binding a user to the monitoring-alertmanager-view (for read-only operations) or monitoring-alertmanager-edit role in the openshift-monitoring project. Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit cluster role or monitoring-edit cluster role in the project. Port 9097 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. 9.5.2.4. openshift-monitoring/kube-state-metrics Expose kube-state-metrics /metrics endpoints within the cluster on the following ports: Port 8443 provides access to the Kubernetes resource metrics. This port is for internal use, and no other usage is guaranteed. Port 9443 provides access to the internal kube-state-metrics metrics. This port is for internal use, and no other usage is guaranteed. 9.5.2.5. openshift-monitoring/metrics-server Expose the metrics-server web server on port 443. This port is for internal use, and no other usage is guaranteed. 9.5.2.6. openshift-monitoring/monitoring-plugin Expose the monitoring plugin service on port 9443. This port is for internal use, and no other usage is guaranteed. 9.5.2.7. openshift-monitoring/node-exporter Expose the /metrics endpoint on port 9100. This port is for internal use, and no other usage is guaranteed. 9.5.2.8. openshift-monitoring/openshift-state-metrics Expose openshift-state-metrics /metrics endpoints within the cluster on the following ports: Port 8443 provides access to the OpenShift resource metrics. This port is for internal use, and no other usage is guaranteed. Port 9443 provides access to the internal openshift-state-metrics metrics. This port is for internal use, and no other usage is guaranteed. 9.5.2.9. openshift-monitoring/prometheus-k8s Expose the Prometheus web server within the cluster on the following ports: Port 9091 provides access to all the Prometheus endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role. Port 9092 provides access to the /metrics and /federate endpoints only. This port is for internal use, and no other usage is guaranteed. 9.5.2.10. openshift-user-workload-monitoring/prometheus-operator Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed. 9.5.2.11. openshift-monitoring/prometheus-operator Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed. 9.5.2.12. openshift-user-workload-monitoring/prometheus-user-workload Expose the Prometheus web server within the cluster on the following ports: Port 9091 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. 
Port 9092 provides access to the /federate endpoint only. Granting access requires binding a user to the cluster-monitoring-view cluster role. This also exposes the /metrics endpoint of the Thanos sidecar web server on port 10902. This port is for internal use, and no other usage is guaranteed. 9.5.2.13. openshift-monitoring/telemeter-client Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed. 9.5.2.14. openshift-monitoring/thanos-querier Expose the Thanos Querier web server within the cluster on the following ports: Port 9091 provides access to all the Thanos Querier endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role. Port 9092 provides access to the /api/v1/query , /api/v1/query_range/ , /api/v1/labels , /api/v1/label/*/values , and /api/v1/series endpoints restricted to a given project. Granting access requires binding a user to the view cluster role in the project. Port 9093 provides access to the /api/v1/alerts , and /api/v1/rules endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit , monitoring-edit , or monitoring-rules-view cluster role in the project. Port 9094 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. 9.5.2.15. openshift-user-workload-monitoring/thanos-ruler Expose the Thanos Ruler web server within the cluster on the following ports: Port 9091 provides access to all Thanos Ruler endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role. Port 9092 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. This also exposes the gRPC endpoints on port 10901. This port is for internal use, and no other usage is guaranteed. 9.5.2.16. openshift-monitoring/cluster-monitoring-operator Expose the /metrics and /validate-webhook endpoints on port 8443. This port is for internal use, and no other usage is guaranteed. 9.6. Additional resources Configuring remote write storage Managing metrics Managing alerts
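As a convenience when working with this reference, you can list the routes and services that the CMO manages on a live cluster with the OpenShift CLI. The following commands are a sketch, not part of the reference above; they assume you are logged in with permission to view the monitoring namespaces, and the openshift-user-workload-monitoring resources exist only if monitoring for user-defined projects is enabled:
# List the CMO-managed routes (alertmanager-main, prometheus-k8s-federate, thanos-querier, and so on)
$ oc -n openshift-monitoring get routes
# List the CMO-managed services and the ports they expose
$ oc -n openshift-monitoring get services
# Repeat for the user-defined project monitoring stack, if enabled
$ oc -n openshift-user-workload-monitoring get routes
$ oc -n openshift-user-workload-monitoring get services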
[ "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath={.status.ingress[].host})", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v2/receivers\"", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath={.status.ingress[].host})", "curl -G -k -H \"Authorization: Bearer USDTOKEN\" https://USDHOST/federate --data-urlencode 'match[]=up'", "TYPE up untyped up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.143.148:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035322214 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.148.166:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035338597 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.173.16:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035343834", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath={.status.ingress[].host})", "NAMESPACE=ns1", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\"", "{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"up\", \"endpoint\": \"web\", \"instance\": \"10.129.0.46:8080\", \"job\": \"prometheus-example-app\", \"namespace\": \"ns1\", \"pod\": \"prometheus-example-app-68d47c4fb6-jztp2\", \"service\": \"prometheus-example-app\" }, \"value\": [ 1591881154.748, \"1\" ] } ], } }" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/monitoring/accessing-third-party-monitoring-apis
Chapter 4. Tuning Satellite Server with predefined profiles
Chapter 4. Tuning Satellite Server with predefined profiles If your Satellite deployment includes more than 5000 hosts, you can use predefined tuning profiles to improve performance of Satellite. Note that you cannot use tuning profiles on Capsules. You can choose one of the profiles depending on the number of hosts your Satellite manages and available hardware resources. The tuning profiles are available in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes directory. When you run the satellite-installer command with the --tuning option, deployment configuration settings are applied to Satellite in the following order: The default tuning profile defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml file The tuning profile that you want to apply to your deployment and is defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ directory Optional: If you have configured a /etc/foreman-installer/custom-hiera.yaml file, Satellite applies these configuration settings. Note that the configuration settings that are defined in the /etc/foreman-installer/custom-hiera.yaml file override the configuration settings that are defined in the tuning profiles. Therefore, before applying a tuning profile, you must compare the configuration settings that are defined in the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml , the tuning profile that you want to apply and your /etc/foreman-installer/custom-hiera.yaml file, and remove any duplicated configuration from the /etc/foreman-installer/custom-hiera.yaml file. default Number of hosts: 0 - 5000 RAM: 20G Number of CPU cores: 4 medium Number of hosts: 5001 - 10000 RAM: 32G Number of CPU cores: 8 large Number of hosts: 10001 - 20000 RAM: 64G Number of CPU cores: 16 extra-large Number of hosts: 20001 - 60000 RAM: 128G Number of CPU cores: 32 extra-extra-large Number of hosts: 60000+ RAM: 256G Number of CPU cores: 48+ Procedure Optional: If you have configured the custom-hiera.yaml file on Satellite Server, back up the /etc/foreman-installer/custom-hiera.yaml file to custom-hiera.original . You can use the backup file to restore the /etc/foreman-installer/custom-hiera.yaml file to its original state if it becomes corrupted: Optional: If you have configured the custom-hiera.yaml file on Satellite Server, review the definitions of the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml and the tuning profile that you want to apply in /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ . Compare the configuration entries against the entries in your /etc/foreman-installer/custom-hiera.yaml file and remove any duplicated configuration settings in your /etc/foreman-installer/custom-hiera.yaml file. Enter the satellite-installer command with the --tuning option for the profile that you want to apply. For example, to apply the medium tuning profile settings, enter the following command:
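satellite-installer --tuning medium
For reference, a complete minimal session on a Satellite Server that already has a custom-hiera.yaml file might look like the following sketch. The medium.yaml file name is an assumption based on the profile names listed above; substitute the profile that matches your deployment:
# Back up your custom overrides before comparing and applying a profile
$ cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original
# Compare your overrides with the profile to spot duplicated settings
$ diff /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/medium.yaml /etc/foreman-installer/custom-hiera.yaml
# Apply the medium tuning profile
$ satellite-installer --tuning medium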
[ "cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original", "satellite-installer --tuning medium" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/tuning-with-predefined-profiles_admin
8.16. Kdump
8.16. Kdump Use this screen to select whether or not to use Kdump on this system. Kdump is a kernel crash dumping mechanism which, in the event of a system crash, captures information that can be invaluable in determining the cause of the crash. Note that if you enable Kdump , you must reserve a certain amount of system memory for it. As a result, less memory is available for your processes. If you do not want to use Kdump on this system, uncheck Enable kdump . Otherwise, set the amount of memory to reserve for Kdump . You can let the installer reserve a reasonable amount automatically, or you can set any amount manually. When you are satisfied with the settings, click Done to save the configuration and return to the previous screen. Figure 8.37. Kdump Enablement and Configuration
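After the installation finishes and the system boots, you can confirm that the memory reservation and the service are in place. These post-installation checks are a suggested sketch and are not part of the installer screen itself:
# Confirm the crashkernel= memory reservation passed to the kernel
$ grep -o 'crashkernel=[^ ]*' /proc/cmdline
# Confirm that the kdump service is enabled and running
$ systemctl status kdump.service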
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-kdump-x86
Chapter 88. Dozer Component
Chapter 88. Dozer Component Available as of Camel version 2.15 The dozer: component provides the ability to map between Java beans using the Dozer mapping framework since Camel 2.15.0 . Camel also supports the ability to trigger Dozer mappings as a type converter . The primary differences between using a Dozer endpoint and a Dozer converter are: The ability to manage Dozer mapping configuration on a per-endpoint basis vs. global configuration via the converter registry. A Dozer endpoint can be configured to marshal/unmarshal input and output data using Camel data formats to support a single, any-to-any transformation endpoint The Dozer component allows for fine-grained integration and extension of Dozer to support additional functionality (e.g. mapping literal values, using expressions for mappings, etc.). In order to use the Dozer component, Maven users will need to add the following dependency to their pom.xml : <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-dozer</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 88.1. URI format The Dozer component only supports producer endpoints. dozer:endpointId[?options] Where endpointId is a name used to uniquely identify the Dozer endpoint configuration. An example Dozer endpoint URI: from("direct:orderInput"). to("dozer:transformOrder?mappingFile=orderMapping.xml&targetModel=example.XYZOrder"). to("direct:orderOutput"); 88.2. Options The Dozer component has no options. The Dozer endpoint is configured using URI syntax: with the following path and query parameters: 88.2.1. Path Parameters (1 parameters): Name Description Default Type name Required A human readable name of the mapping. String 88.2.2. Query Parameters (7 parameters): Name Description Default Type mappingConfiguration (producer) The name of a DozerBeanMapperConfiguration bean in the Camel registry which should be used for configuring the Dozer mapping. This is an alternative to the mappingFile option that can be used for fine-grained control over how Dozer is configured. Remember to use a # prefix in the value to indicate that the bean is in the Camel registry (e.g. #myDozerConfig). DozerBeanMapper Configuration mappingFile (producer) The location of a Dozer configuration file. The file is loaded from the classpath by default, but you can use file:, classpath:, or http: to load the configuration from a specific location. dozerBeanMapping.xml String marshalId (producer) The id of a dataFormat defined within the Camel Context to use for marshalling the mapping output to a non-Java type. String sourceModel (producer) Fully-qualified class name for the source type used in the mapping. If specified, the input to the mapping is converted to the specified type before being mapped with Dozer. String targetModel (producer) Required Fully-qualified class name for the target type used in the mapping. String unmarshalId (producer) The id of a dataFormat defined within the Camel Context to use for unmarshalling the mapping input from a non-Java type. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 88.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. 
Name Description Default Type camel.component.dozer.enabled Enable dozer component true Boolean camel.component.dozer.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 88.4. Using Data Formats with Dozer Dozer does not support non-Java sources and targets for mappings, so it cannot, for example, map an XML document to a Java object on its own. Luckily, Camel has extensive support for marshalling between Java and a wide variety of formats using data formats . The Dozer component takes advantage of this support by allowing you to specify that input and output data should be passed through a data format prior to processing via Dozer. You can always do this on your own outside the call to Dozer, but supporting it directly in the Dozer component allows you to use a single endpoint to configure any-to-any transformation within Camel. As an example, let's say you wanted to map between an XML data structure and a JSON data structure using the Dozer component. If you had the following data formats defined in a Camel Context: <dataFormats> <json library="Jackson" id="myjson"/> <jaxb contextPath="org.example" id="myjaxb"/> </dataFormats> You could then configure a Dozer endpoint to unmarshal the input XML using a JAXB data format and marshal the mapping output using Jackson. <endpoint uri="dozer:xml2json?marshalId=myjson&amp;unmarshalId=myjaxb&amp;targetModel=org.example.Order"/> 88.5. Configuring Dozer All Dozer endpoints require a Dozer mapping configuration file which defines mappings between source and target objects. The component will default to a location of META-INF/dozerBeanMapping.xml if the mappingFile or mappingConfiguration options are not specified on an endpoint. If you need to supply multiple mapping configuration files for a single endpoint or specify additional configuration options (e.g. event listeners, custom converters, etc.), then you can use an instance of org.apache.camel.converter.dozer.DozerBeanMapperConfiguration . <bean id="mapper" class="org.apache.camel.converter.dozer.DozerBeanMapperConfiguration"> <property name="mappingFiles"> <list> <value>mapping1.xml</value> <value>mapping2.xml</value> </list> </property> </bean> 88.6. Mapping Extensions The Dozer component implements a number of extensions to the Dozer mapping framework as custom converters. These converters implement mapping functions that are not supported directly by Dozer itself. 88.6.1. Variable Mappings Variable mappings allow you to map the value of a variable definition within a Dozer configuration into a target field instead of using the value of a source field. This is equivalent to constant mapping in other mapping frameworks, where you can assign a literal value to a target field.
To use a variable mapping, simply define a variable within your mapping configuration and then map from the VariableMapper class into your target field of choice: <mappings xmlns="http://dozermapper.github.io/schema/bean-mapping" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://dozermapper.github.io/schema/bean-mapping http://dozermapper.github.io/schema/bean-mapping.xsd"> <configuration> <variables> <variable name="CUST_ID">ACME-SALES</variable> </variables> </configuration> <mapping> <class-a>org.apache.camel.component.dozer.VariableMapper</class-a> <class-b>org.example.Order</class-b> <field custom-converter-id="_variableMapping" custom-converter-param="USD{CUST_ID}"> <a>literal</a> <b>custId</b> </field> </mapping> </mappings> 88.6.2. Custom Mappings Custom mappings allow you to define your own logic for how a source field is mapped to a target field. They are similar in function to Dozer custom converters, with two notable differences: You can have multiple converter methods in a single class with custom mappings. There is no requirement to implement a Dozer-specific interface with custom mappings. A custom mapping is declared by using the built-in '_customMapping' converter in your mapping configuration. The parameter to this converter has the following syntax: [class-name][,method-name] Method name is optional - the Dozer component will search for a method that matches the input and output types required for a mapping. An example custom mapping and configuration are provided below. public class CustomMapper { // All customer ids must be wrapped in "[ ]" public Object mapCustomer(String customerId) { return "[" + customerId + "]"; } } <mappings xmlns="http://dozermapper.github.io/schema/bean-mapping" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://dozermapper.github.io/schema/bean-mapping http://dozermapper.github.io/schema/bean-mapping.xsd"> <mapping> <class-a>org.example.A</class-a> <class-b>org.example.B</class-b> <field custom-converter-id="_customMapping" custom-converter-param="org.example.CustomMapper,mapCustomer"> <a>header.customerNum</a> <b>custId</b> </field> </mapping> </mappings> 88.6.3. Expression Mappings Expression mappings allow you to use the powerful language capabilities of Camel to evaluate an expression and assign the result to a target field in a mapping. Any language that Camel supports can be used in an expression mapping. Basic examples of expressions include the ability to map a Camel message header or exchange property to a target field or to concatenate multiple source fields into a target field. The syntax of a mapping expression is: [language]:[expression] An example of mapping a message header into a target field: <mappings xmlns="http://dozermapper.github.io/schema/bean-mapping" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://dozermapper.github.io/schema/bean-mapping http://dozermapper.github.io/schema/bean-mapping.xsd"> <mapping> <class-a>org.apache.camel.component.dozer.ExpressionMapper</class-a> <class-b>org.example.B</class-b> <field custom-converter-id="_expressionMapping" custom-converter-param="simple:\USD{header.customerNumber}"> <a>expression</a> <b>custId</b> </field> </mapping> </mappings> Note that any properties within your expression must be escaped with "\" to prevent an error when Dozer attempts to resolve variable values defined using the EL.
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-dozer</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "dozer:endpointId[?options]", "from(\"direct:orderInput\"). to(\"dozer:transformOrder?mappingFile=orderMapping.xml&targetModel=example.XYZOrder\"). to(\"direct:orderOutput\");", "dozer:name", "<dataFormats> <json library=\"Jackson\" id=\"myjson\"/> <jaxb contextPath=\"org.example\" id=\"myjaxb\"/> </dataFormats>", "<endpoint uri=\"dozer:xml2json?marshalId=myjson&amp;unmarshalId=myjaxb&amp;targetModel=org.example.Order\"/>", "<bean id=\"mapper\" class=\"org.apache.camel.converter.dozer.DozerBeanMapperConfiguration\"> <property name=\"mappingFiles\"> <list> <value>mapping1.xml</value> <value>mapping2.xml</value> </list> </property> </bean>", "<mappings xmlns=\"http://dozermapper.github.io/schema/bean-mapping\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://dozermapper.github.io/schema/bean-mapping http://dozermapper.github.io/schema/bean-mapping.xsd\"> <configuration> <variables> <variable name=\"CUST_ID\">ACME-SALES</variable> </variables> </configuration> <mapping> <class-a>org.apache.camel.component.dozer.VariableMapper</class-a> <class-b>org.example.Order</class-b> <field custom-converter-id=\"_variableMapping\" custom-converter-param=\"USD{CUST_ID}\"> <a>literal</a> <b>custId</b> </field> </mapping> </mappings>", "[class-name][,method-name]", "public class CustomMapper { // All customer ids must be wrapped in \"[ ]\" public Object mapCustomer(String customerId) { return \"[\" + customerId + \"]\"; } }", "<mappings xmlns=\"http://dozermapper.github.io/schema/bean-mapping\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://dozermapper.github.io/schema/bean-mapping http://dozermapper.github.io/schema/bean-mapping.xsd\"> <mapping> <class-a>org.example.A</class-a> <class-b>org.example.B</class-b> <field custom-converter-id=\"_customMapping\" custom-converter-param=\"org.example.CustomMapper,mapCustomer\"> <a>header.customerNum</a> <b>custId</b> </field> </mapping> </mappings>", "[language]:[expression]", "<mappings xmlns=\"http://dozermapper.github.io/schema/bean-mapping\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://dozermapper.github.io/schema/bean-mapping http://dozermapper.github.io/schema/bean-mapping.xsd\"> <mapping> <class-a>org.apache.camel.component.dozer.ExpressionMapper</class-a> <class-b>org.example.B</class-b> <field custom-converter-id=\"_expressionMapping\" custom-converter-param=\"simple:\\USD{header.customerNumber}\"> <a>expression</a> <b>custId</b> </field> </mapping> </mappings>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/dozer-component
Chapter 4. OpenShift Data Foundation installation overview
Chapter 4. OpenShift Data Foundation installation overview OpenShift Data Foundation consists of multiple components managed by multiple operators. 4.1. Installed Operators When you install OpenShift Data Foundation from the Operator Hub, the following four separate Deployments are created: odf-operator : Defines the odf-operator Pod ocs-operator : Defines the ocs-operator Pod which runs processes for ocs-operator and its metrics-exporter in the same container. rook-ceph-operator : Defines the rook-ceph-operator Pod. mcg-operator : Defines the mcg-operator Pod. These operators run independently and interact with each other by creating customer resources (CRs) watched by the other operators. The ocs-operator is primarily responsible for creating the CRs to configure Ceph storage and Multicloud Object Gateway. The mcg-operator sometimes creates Ceph volumes for use by its components. 4.2. OpenShift Container Storage initialization The OpenShift Data Foundation bundle also defines an external plugin to the OpenShift Container Platform Console, adding new screens and functionality not otherwise available in the Console. This plugin runs as a web server in the odf-console-plugin Pod, which is managed by a Deployment created by the OLM at the time of installation. The ocs-operator automatically creates an OCSInitialization CR after it gets created. Only one OCSInitialization CR exists at any point in time. It controls the ocs-operator behaviors that are not restricted to the scope of a single StorageCluster , but only performs them once. When you delete the OCSInitialization CR, the ocs-operator creates it again and this allows you to re-trigger its initialization operations. The OCSInitialization CR controls the following behaviors: SecurityContextConstraints (SCCs) After the OCSInitialization CR is created, the ocs-operator creates various SCCs for use by the component Pods. Ceph Toolbox Deployment You can use the OCSInitialization to deploy the Ceph Toolbox Pod for the advanced Ceph operations. Rook-Ceph Operator Configuration This configuration creates the rook-ceph-operator-config ConfigMap that governs the overall configuration for rook-ceph-operator behavior. 4.3. Storage cluster creation The OpenShift Data Foundation operators themselves provide no storage functionality, and the desired storage configuration must be defined. After you install the operators, create a new StorageCluster , using either the OpenShift Container Platform console wizard or the CLI and the ocs-operator reconciles this StorageCluster . OpenShift Data Foundation supports a single StorageCluster per installation. Any StorageCluster CRs created after the first one is ignored by ocs-operator reconciliation. OpenShift Data Foundation allows the following StorageCluster configurations: Internal In the Internal mode, all the components run containerized within the OpenShift Container Platform cluster and uses dynamically provisioned persistent volumes (PVs) created against the StorageClass specified by the administrator in the installation wizard. Internal-attached This mode is similar to the Internal mode but the administrator is required to define the local storage devices directly attached to the cluster nodes that the Ceph uses for its backing storage. Also, the administrator need to create the CRs that the local storage operator reconciles to provide the StorageClass . The ocs-operator uses this StorageClass as the backing storage for Ceph. 
External In this mode, Ceph components do not run inside the OpenShift Container Platform cluster instead connectivity is provided to an external OpenShift Container Storage installation for which the applications can create PVs. The other components run within the cluster as required. MCG Standalone This mode facilitates the installation of a Multicloud Object Gateway system without an accompanying CephCluster. After a StorageCluster CR is found, ocs-operator validates it and begins to create subsequent resources to define the storage components. 4.3.1. Internal mode storage cluster Both internal and internal-attached storage clusters have the same setup process as follows: StorageClasses Create the storage classes that cluster applications use to create Ceph volumes. SnapshotClasses Create the volume snapshot classes that the cluster applications use to create snapshots of Ceph volumes. Ceph RGW configuration Create various Ceph object CRs to enable and provide access to the Ceph RGW object storage endpoint. Ceph RBD Configuration Create the CephBlockPool CR to enable RBD storage. CephFS Configuration Create the CephFilesystem CR to enable CephFS storage. Rook-Ceph Configuration Create the rook-config-override ConfigMap that governs the overall behavior of the underlying Ceph cluster. CephCluster Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator . For more information, see Rook-Ceph operator . NoobaaSystem Create the NooBaa CR to trigger reconciliation from mcg-operator . For more information, see MCG operator . Job templates Create OpenShift Template CRs that define Jobs to run administrative operations for OpenShift Container Storage. Quickstarts Create the QuickStart CRs that display the quickstart guides in the Web Console. 4.3.1.1. Cluster Creation After the ocs-operator creates the CephCluster CR, the rook-operator creates the Ceph cluster according to the desired configuration. The rook-operator configures the following components: Ceph mon daemons Three Ceph mon daemons are started on different nodes in the cluster. They manage the core metadata for the Ceph cluster and they must form a majority quorum. The metadata for each mon is backed either by a PV if it is in a cloud environment or a path on the local host if it is in a local storage device environment. Ceph mgr daemon This daemon is started and it gathers metrics for the cluster and report them to Prometheus. Ceph OSDs These OSDs are created according to the configuration of the storageClassDeviceSets . Each OSD consumes a PV that stores the user data. By default, Ceph maintains three replicas of the application data across different OSDs for high durability and availability using the CRUSH algorithm. CSI provisioners These provisioners are started for RBD and CephFS . When volumes are requested for the storage classes of OpenShift Container Storage, the requests are directed to the Ceph-CSI driver to provision the volumes in Ceph. CSI volume plugins and CephFS The CSI volume plugins for RBD and CephFS are started on each node in the cluster. The volume plugin needs to be running wherever the Ceph volumes are required to be mounted by the applications. After the CephCluster CR is configured, Rook reconciles the remaining Ceph CRs to complete the setup: CephBlockPool The CephBlockPool CR provides the configuration for Rook operator to create Ceph pools for RWO volumes. 
CephFilesystem The CephFilesystem CR instructs the Rook operator to configure a shared file system with CephFS, typically for RWX volumes. The CephFS metadata server (MDS) is started to manage the shared volumes. CephObjectStore The CephObjectStore CR instructs the Rook operator to configure an object store with the RGW service CephObjectStoreUser CR The CephObjectStoreUser CR instructs the Rook operator to configure an object store user for NooBaa to consume, publishing access/private key as well as the CephObjectStore endpoint. The operator monitors the Ceph health to ensure that storage platform remains healthy. If a mon daemon goes down for too long a period (10 minutes), Rook starts a new mon in its place so that the full quorum can be fully restored. When the ocs-operator updates the CephCluster CR, Rook immediately responds to the requested changes to update the cluster configuration. 4.3.1.2. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and will create a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and create a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizonalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that uses S3. Service A Service for the NooBaa S3 interface is created for applications that uses S3. 4.3.2. External mode storage cluster For external storage clusters, ocs-operator follows a slightly different setup process. The ocs-operator looks for the existence of the rook-ceph-external-cluster-details ConfigMap , which must be created by someone else, either the administrator or the Console. For information about how to create the ConfigMap , see Creating an OpenShift Data Foundation Cluster for external mode . 
The ocs-operator then creates some or all of the following resources, as specified in the ConfigMap : External Ceph Configuration A ConfigMap that specifies the endpoints of the external mons . External Ceph Credentials Secret A Secret that contains the credentials to connect to the external Ceph instance. External Ceph StorageClasses One or more StorageClasses to enable the creation of volumes for RBD, CephFS, and/or RGW. Enable CephFS CSI Driver If a CephFS StorageClass is specified, configure rook-ceph-operator to deploy the CephFS CSI Pods. Ceph RGW Configuration If an RGW StorageClass is specified, create various Ceph Object CRs to enable and provide access to the Ceph RGW object storage endpoint. After creating the resources specified in the ConfigMap , the StorageCluster creation process proceeds as follows: CephCluster Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator (see subsequent sections). SnapshotClasses Create the SnapshotClasses that applications use to create snapshots of Ceph volumes. NoobaaSystem Create the NooBaa CR to trigger reconciliation from noobaa-operator (see subsequent sections). QuickStarts Create the Quickstart CRs that display the quickstart guides in the Console. 4.3.2.1. Cluster Creation The Rook operator performs the following operations when the CephCluster CR is created in external mode: The operator validates that a connection is available to the remote Ceph cluster. The connection requires mon endpoints and secrets to be imported into the local cluster. The CSI driver is configured with the remote connection to Ceph. The RBD and CephFS provisioners and volume plugins are started similarly to the CSI driver when configured in internal mode, the connection to Ceph happens to be external to the OpenShift cluster. Periodically watch for monitor address changes and update the Ceph-CSI configuration accordingly. 4.3.2.2. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and will create a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and create a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. 
Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizontalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that use S3. Service A Service for the NooBaa S3 interface is created for applications that use S3. 4.3.3. MCG Standalone StorageCluster In this mode, no CephCluster is created. Instead, a NooBaa system CR is created using default values to take advantage of pre-existing StorageClasses in the OpenShift Container Platform. 4.3.3.1. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and creates a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and creates a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizontalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that use S3. Service A Service for the NooBaa S3 interface is created for applications that use S3. 4.3.3.2. StorageSystem Creation As a part of the StorageCluster creation, odf-operator automatically creates a corresponding StorageSystem CR, which exposes the StorageCluster to the OpenShift Data Foundation.
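After installation, you can observe the objects described in this chapter directly with the OpenShift CLI. The following sketch assumes the default openshift-storage namespace used by an internal mode deployment; adjust the namespace for your environment:
# Operators installed from the Operator Hub
$ oc -n openshift-storage get csv
# The StorageCluster and the StorageSystem created by odf-operator
$ oc -n openshift-storage get storagecluster,storagesystem
# CRs reconciled from the StorageCluster
$ oc -n openshift-storage get cephcluster,cephblockpool,cephfilesystem,noobaa
# The OCSInitialization CR created automatically by ocs-operator
$ oc -n openshift-storage get ocsinitialization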
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_installation_overview
Chapter 1. Configuring Jenkins images
Chapter 1. Configuring Jenkins images OpenShift Dedicated provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery. The image is based on the Red Hat Universal Base Images (UBI). OpenShift Dedicated follows the LTS release of Jenkins. OpenShift Dedicated provides an image that contains Jenkins 2.x. The OpenShift Dedicated Jenkins images are available on Quay.io or registry.redhat.io . For example: USD podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> To use these images, you can either access them directly from these registries or push them into your OpenShift Dedicated container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Dedicated resources can then reference the image stream. But for convenience, OpenShift Dedicated provides image streams in the openshift namespace for the core Jenkins image as well as the example Agent images provided for OpenShift Dedicated integration with Jenkins. 1.1. Configuration and customization You can manage Jenkins authentication in two ways: OpenShift Dedicated OAuth authentication provided by the OpenShift Dedicated Login plugin. Standard authentication provided by Jenkins. 1.1.1. OpenShift Dedicated OAuth authentication OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false . This activates the OpenShift Dedicated Login plugin, which retrieves the configuration information from pod data or by interacting with the OpenShift Dedicated API server. Valid credentials are controlled by the OpenShift Dedicated identity provider. Jenkins supports both browser and non-browser access. Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Dedicated roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin , edit , and view . The login plugin executes self-SAR requests against those roles in the project or namespace that Jenkins is running in. Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions. The default OpenShift Dedicated admin , edit , and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable. When running Jenkins in an OpenShift Dedicated pod, the login plugin looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in. If this plugin finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically: The login plugin treats the key and value pairs in the config map as Jenkins permission to OpenShift Dedicated role mappings. The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character. If you want to add the Overall Jenkins Administer permission to an OpenShift Dedicated role, the key should be Overall-Administer . 
To get a sense of which permission groups and permission IDs are available, go to the matrix authorization page in the Jenkins console and review the IDs for the groups and individual permissions in the table it provides. The value of the key and value pair is the list of OpenShift Dedicated roles the permission should apply to, with each role separated by a comma. If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins . Note The admin user that is pre-populated in the OpenShift Dedicated Jenkins image with administrative privileges is not given those privileges when OpenShift Dedicated OAuth is used. To grant these permissions, the OpenShift Dedicated cluster administrator must explicitly define that user in the OpenShift Dedicated identity provider and assign the admin role to the user. Jenkins users' permissions that are stored can be changed after the users are initially established. The OpenShift Dedicated Login plugin polls the OpenShift Dedicated API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Dedicated. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plugin polls OpenShift Dedicated. You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes. The easiest way to create a new Jenkins service using OAuth authentication is to use a template. 1.1.2. Jenkins authentication Jenkins authentication is used by default if the image is run directly, without using a template. The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password . Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication. Procedure Create a Jenkins application that uses standard Jenkins authentication by entering the following command: USD oc new-app -e \ JENKINS_PASSWORD=<password> \ ocp-tools-4/jenkins-rhel8 1.2. Jenkins environment variables The Jenkins server can be configured with the following environment variables: Variable Definition Example values and settings OPENSHIFT_ENABLE_OAUTH Determines whether the OpenShift Dedicated Login plugin manages authentication when logging in to Jenkins. To enable, set to true . Default: false JENKINS_PASSWORD The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true . Default: password JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap.
JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value . JENKINS_OPTS Specifies arguments to Jenkins. INSTALL_PLUGINS Specifies additional Jenkins plugins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true . Plugins are specified as a comma-delimited list of name:version pairs. Example setting: git:3.7.0,subversion:2.10.2 . OPENSHIFT_PERMISSIONS_POLL_INTERVAL Specifies the interval in milliseconds that the OpenShift Dedicated Login plugin polls OpenShift Dedicated for the permissions that are associated with each user that is defined in Jenkins. Default: 300000 - 5 minutes OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG When running this image with an OpenShift Dedicated persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates the configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true . Default: false OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS When running this image with an OpenShift Dedicated PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to true . Default: false ENABLE_FATAL_ERROR_LOG_FILE When running this image with an OpenShift Dedicated PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. 
The fatal error file is saved at /var/lib/jenkins/logs . Default: false AGENT_BASE_IMAGE Setting this value overrides the image used for the jnlp container in the sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the jenkins-agent-base-rhel8:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest JAVA_BUILDER_IMAGE Setting this value overrides the image used for the java-builder container in the java-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the java:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/java:latest JAVA_FIPS_OPTIONS Setting this value controls how the JVM operates when running on a FIPS node. For more information, see Configure Red Hat build of OpenJDK 11 in FIPS mode . Default: -Dcom.redhat.fips=false 1.3. Providing Jenkins cross project access If you are going to run Jenkins somewhere other than your same project, you must provide an access token to Jenkins to access your project. Procedure Identify the secret for the service account that has appropriate permissions to access the project that Jenkins must access by entering the following command: USD oc describe serviceaccount jenkins Example output Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp In this case the secret is named jenkins-token-uyswp . Retrieve the token from the secret by entering the following command: USD oc describe secret <secret name from above> Example output Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA The token parameter contains the token value Jenkins requires to access the project. 1.4. Jenkins cross volume mount points The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration: /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions. 1.5. Customizing the Jenkins image through source-to-image To customize the official OpenShift Dedicated Jenkins image, you can use the image as a source-to-image (S2I) builder. You can use S2I to copy your custom Jenkins jobs definitions, add additional plugins, or replace the provided config.xml file with your own, custom, configuration. To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure: plugins This directory contains those binary Jenkins plugins you want to copy into Jenkins. plugins.txt This file lists the plugins you want to install using the following syntax: configuration/jobs This directory contains the Jenkins job definitions. configuration/config.xml This file contains your custom Jenkins configuration. The contents of the configuration/ directory is copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml , there. 
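One possible way to assemble such a repository from scratch is sketched below. The name:version format in plugins.txt mirrors the INSTALL_PLUGINS convention described earlier, and the file paths are placeholders, so treat this as an illustration rather than the canonical layout:
# Create the expected directory structure
$ mkdir -p plugins configuration/jobs
# List binary plugins to install, one name:version pair per line
$ echo "git:3.7.0" > plugins.txt
# Add your own Jenkins configuration and any job definitions
$ cp /path/to/config.xml configuration/config.xml
# Commit the layout so it can be used as the S2I source repository
$ git init && git add . && git commit -m "custom Jenkins S2I source"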
Sample build configuration to customize the Jenkins image in OpenShift Dedicated apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest 1 The source parameter defines the source Git repository with the layout described above. 2 The strategy parameter defines the original Jenkins image to use as a source image for the build. 3 The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image. 1.6. Configuring the Jenkins Kubernetes plugin The OpenShift Jenkins image includes the preinstalled Kubernetes plugin for Jenkins so that Jenkins agents can be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Dedicated. To use the Kubernetes plugin, OpenShift Dedicated provides an OpenShift Agent Base image that is suitable for use as a Jenkins agent. Important OpenShift Dedicated 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Dedicated lifecycle. Previously, these images were in the OpenShift Dedicated install payload and the openshift4 repository at registry.redhat.io . The OpenShift Jenkins Maven and NodeJS Agent images were removed from the OpenShift Dedicated 4.11 payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Dedicated lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. The Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Dedicated Jenkins image configuration for the Kubernetes plugin. That configuration includes labels for each image that you can apply to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Dedicated pod running the respective agent image. Important In OpenShift Dedicated 4.10 and later, the recommended pattern for running Jenkins agents using the Kubernetes plugin is to use pod templates with both jnlp and sidecar containers. The jnlp container uses the OpenShift Dedicated Jenkins Base agent image to facilitate launching a separate pod for your build. The sidecar container image has the tools needed to build in a particular language within the separate pod that was launched. Many container images from the Red Hat Container Catalog are referenced in the sample image streams in the openshift namespace. The OpenShift Dedicated Jenkins image has a pod template named java-build with sidecar containers that demonstrate this approach. This pod template uses the latest Java version provided by the java image stream in the openshift namespace. The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plugin. 
With the OpenShift Dedicated sync plugin, on Jenkins startup, the Jenkins image searches within the project it is running, or the projects listed in the plugin's configuration, for the following items: Image streams with the role label set to jenkins-agent . Image stream tags with the role annotation set to jenkins-agent . Config maps with the role label set to jenkins-agent . When the Jenkins image finds an image stream with the appropriate label, or an image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plugin configuration. This way, you can assign your Jenkins jobs to run in a pod running the container image provided by the image stream. The name and image references of the image stream, or image stream tag, are mapped to the name and image fields in the Kubernetes plugin pod template. You can control the label field of the Kubernetes plugin pod template by setting an annotation on the image stream, or image stream tag object, with the key agent-label . Otherwise, the name is used as the label. Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Dedicated Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. When it finds a config map with the appropriate label, the Jenkins image assumes that any values in the key-value data payload of the config map contain Extensible Markup Language (XML) consistent with the configuration format for Jenkins and the Kubernetes plugin pod templates. One key advantage of config maps over image streams and image stream tags is that you can control all the Kubernetes plugin pod template parameters. Sample config map for jenkins-agent kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> The following example shows two containers that reference image streams in the openshift namespace. One container handles the JNLP contract for launching Pods as Jenkins Agents. 
The other container uses an image with tools for building code in a particular coding language: kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\USD(JENKINS_SECRET) \USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Dedicated Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. After it is installed, the OpenShift Dedicated Sync plugin monitors the API server of OpenShift Dedicated for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin. The following rules apply: Removing the label or annotation from the config map, image stream, or image stream tag deletes any existing PodTemplate from the configuration of the Kubernetes plugin. If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin. If you create appropriately labeled or annotated ConfigMap , ImageStream , or ImageStreamTag objects, or add labels after their initial creation, this results in the creation of a PodTemplate in the Kubernetes-plugin configuration. In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration. The changes also override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map. To use a container image as a Jenkins agent, the image must run the agent as an entry point. 
For more details, see the official Jenkins documentation . Additional resources Important changes to OpenShift Jenkins images 1.7. Jenkins permissions If the <serviceAccount> element of the pod template XML in the config map is the OpenShift Dedicated service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Dedicated master are allowed from the pod. Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes plugin that runs in the OpenShift Dedicated Jenkins image. If you use the example template for Jenkins that is provided by OpenShift Dedicated, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted. The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master. Any pod templates that are automatically discovered by the OpenShift Dedicated sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account. For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template. If you do not specify a value for the service account, the default service account is used. Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Dedicated to manipulate whatever projects you choose to manipulate from within the pod. 1.8. Creating a Jenkins service from a template Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Dedicated provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup. The two available templates both define a deployment configuration and a service. The templates differ in their storage strategy, which affects whether the Jenkins content persists across a pod restart. Note A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment. jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing. jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart. To use a PV store, the cluster administrator must define a PV pool in the OpenShift Dedicated deployment. After you select which template you want, you must instantiate the template to be able to use Jenkins. Procedure Create a new Jenkins application using one of the following methods: A PV: USD oc new-app jenkins-persistent Or an emptyDir type volume where configuration does not persist across pod restarts: USD oc new-app jenkins-ephemeral With both templates, you can run oc describe on them to see all the parameters available for overriding. For example: USD oc describe jenkins-ephemeral
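Any of the parameters shown by oc describe can be overridden at instantiation time with the -p option of oc new-app . As a minimal sketch, the following command creates a persistent Jenkins instance with a larger memory limit; the 2Gi value is only an illustration, and the MEMORY_LIMIT parameter is discussed further in the memory requirements section below:
USD oc new-app jenkins-persistent -p MEMORY_LIMIT=2Gi
1.9.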
Using the Jenkins Kubernetes plugin In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig , openshift-jee-sample-docker to run. The second BuildConfig layers the new WAR file into a container image. Important OpenShift Dedicated 4.11 removed the OpenShift Jenkins Maven and NodeJS Agent images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Dedicated lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. Sample BuildConfig that uses the Jenkins Kubernetes plugin kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node("maven") { sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } triggers: - type: ConfigChange It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the preceding example, which overrides the container memory and specifies an environment variable. Sample BuildConfig that uses the Jenkins Kubernetes plugin, specifying memory limit and environment variable kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: "mypod", 1 cloud: "openshift", 2 inheritFrom: "maven", 3 containers: [ containerTemplate(name: "jnlp", 4 image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5 resourceRequestMemory: "512Mi", 6 resourceLimitMemory: "512Mi", 7 envVars: [ envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8 ]) ]) { node("mypod") { 9 sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } } triggers: - type: ConfigChange 1 A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza. 2 The cloud value must be set to openshift . 3 The new pod template can inherit its configuration from an existing pod template. In this case, inherited from the Maven pod template that is pre-defined by OpenShift Dedicated. 4 This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Dedicated use the Container name jnlp . 5 Specify the container image name again. This is a known issue. 
6 A memory request of 512 Mi is specified. 7 A memory limit of 512 Mi is specified. 8 An environment variable CONTAINER_HEAP_PERCENT , with value 0.25 , is specified. 9 The node stanza references the name of the defined pod template. By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile. Upstream Jenkins has more recently introduced a YAML declarative format for defining a podTemplate pipeline DSL in-line with your pipelines. The following example uses this format with the sample java-builder pod template that is defined in the OpenShift Dedicated Jenkins image: def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml """ apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\USD(JENKINS_SECRET)', '\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true """ } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container("java") { sh "mvn --version" } } } } } Additional resources Important changes to OpenShift Jenkins images 1.10. Jenkins memory requirements When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi . By default, all other processes that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible. If project quotas allow for it, see the recommendations in the Jenkins documentation for how much memory a Jenkins master should have. Those recommendations advise allocating even more memory for the Jenkins master. It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis. You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template. 1.11. Additional resources Important changes to OpenShift Jenkins images
[ "podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>", "oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8", "oc describe serviceaccount jenkins", "Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp", "oc describe secret <secret name from above>", "Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA", "pluginId:pluginVersion", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> 
<resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "oc new-app jenkins-persistent", "oc new-app jenkins-ephemeral", "oc describe jenkins-ephemeral", "kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange", "def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/jenkins/images-other-jenkins
Chapter 6. Creating Windows machine sets
Chapter 6. Creating Windows machine sets 6.1. Creating a Windows machine set on AWS You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. Use one of the following aws commands, as appropriate for your Windows Server release, to query valid AMI images: Example Windows Server 2022 command USD aws ec2 describe-images --region <aws_region_name> --filters "Name=name,Values=Windows_Server-2022*English*Core*Base*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table Example Windows Server 2019 command USD aws ec2 describe-images --region <aws_region_name> --filters "Name=name,Values=Windows_Server-2019*English*Core*Base*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table where: <aws_region_name> Specifies the name of your AWS region. For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 6.1.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. 
Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.1.2. Sample YAML for a Windows MachineSet object on AWS This sample YAML defines a Windows MachineSet object running on Amazon Web Services (AWS) that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api 1 3 5 10 13 14 15 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone. 
7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the AMI ID of a supported Windows image with a container runtime installed. Note For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 11 Specify the AWS zone, like us-east-1a . 12 Specify the AWS region, like us-east-1 . 16 Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 6.1.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.1.4. Additional resources Overview of machine management 6.2. Creating a Windows machine set on Azure You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.2.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. 
You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.2.2. Sample YAML for a Windows MachineSet object on Azure This sample YAML defines a Windows MachineSet object running on Microsoft Azure that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: "" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: "<zone>" 16 1 3 5 11 12 13 15 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the Windows compute machine set name. Windows machine names on Azure cannot be more than 15 characters long. Therefore, the compute machine set name cannot be more than 9 characters long, due to the way machine names are generated from it. 
7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify a WindowsServer image offering that defines the 2019-Datacenter-with-Containers SKU. 10 Specify the Azure region, like centralus . 14 Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 16 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 6.2.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.2.4. Additional resources Overview of machine management 6.3. Creating a Windows machine set on GCP You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.3.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. 
You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.3.2. Sample YAML for a Windows MachineSet object on GCP This sample YAML file defines a Windows MachineSet object running on Google Cloud Platform (GCP) that the Windows Machine Config Operator (WMCO) can use. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone_suffix> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <windows_server_image> 9 sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n1-standard-4 networkInterfaces: - network: <infrastructure_id>-network 10 subnetwork: <infrastructure_id>-worker-subnet projectID: <project_id> 11 region: <region> 12 serviceAccounts: - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: windows-user-data 13 zone: <zone> 14 1 3 5 10 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone suffix (such as a ). 7 Configure the machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the full path to an image of a supported version of Windows Server. 
11 Specify the GCP project that this cluster was created in. 12 Specify the GCP region, such as us-central1 . 13 Created by the WMCO when it configures the first Windows machine. After that, the windows-user-data is available for all subsequent machine sets to consume. 14 Specify the zone within the chosen region, such as us-central1-a . 6.3.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.3.4. 
Additional resources Overview of machine management 6.4. Creating a Windows MachineSet object on Nutanix You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. You added a new DNS entry for the internal API server URL, api-int.<cluster_name>.<base_domain> , that points to the external API server URL, api.<cluster_name>.<base_domain> . This can be a CNAME or an additional A record. 6.4.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. 
In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.4.2. Sample YAML for a Windows MachineSet object on Nutanix This sample YAML defines a Windows MachineSet object running on Nutanix that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 9 categories: null cluster: 10 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 11 image: 12 name: <image_id> type: name kind: NutanixMachineProviderConfig 13 memorySize: 16Gi 14 project: type: "" subnets: 15 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 16 userDataSecret: name: windows-user-data 17 vcpuSockets: 4 18 vcpusPerSocket: 1 19 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone. 7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.15. 10 Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 11 Specifies the secret name for the cluster. Do not change this value. 12 Specifies the image to use. Use an image from an existing default compute machine set for the cluster. 13 Specifies the cloud provider platform type. Do not change this value. 14 Specifies the amount of memory for the cluster in Gi. 15 Specifies a subnet configuration. In this example, the subnet type is uuid , so there is a uuid stanza. 16 Specifies the size of the system disk in Gi. 
17 Specifies the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 18 Specifies the number of vCPU sockets. 19 Specifies the number of vCPUs per socket. 6.4.3. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.4.4. Additional resources Overview of machine management . 
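After a Windows machine set such as the Nutanix example above has been created, routine day-two operations mostly come down to inspecting the resulting Machine resources and adjusting the replica count. A brief sketch, with <machine_set_name> as a placeholder:
# List the Machine resources and their phases in the machine-api namespace.
oc get machines -n openshift-machine-api
# Scale the Windows machine set by changing its replica count; <machine_set_name> is a placeholder.
oc scale machineset <machine_set_name> --replicas=2 -n openshift-machine-api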
6.5. Creating a Windows machine set on vSphere You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image. 6.5.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. 
Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 6.5.2. Preparing your vSphere environment for Windows container workloads You must prepare your vSphere environment for Windows container workloads by creating the vSphere Windows VM golden image and enabling communication with the internal API server for the WMCO. 6.5.2.1. Creating the vSphere Windows VM golden image Create a vSphere Windows virtual machine (VM) golden image. Prerequisites You have created a private/public key pair, which is used to configure key-based authentication in the OpenSSH server. The private key must also be configured in the Windows Machine Config Operator (WMCO) namespace. This is required to allow the WMCO to communicate with the Windows VM. See the "Configuring a secret for the Windows Machine Config Operator" section for more details. Note You must use Microsoft PowerShell commands in several cases when creating your Windows VM. PowerShell commands in this guide are distinguished by the PS C:\> prefix. Procedure Select a compatible Windows Server version. Currently, the Windows Machine Config Operator (WMCO) stable version supports Windows Server 2022 Long-Term Servicing Channel with the OS-level container networking patch KB5012637 . Create a new VM in the vSphere client using the VM golden image with a compatible Windows Server version. For more information about compatible versions, see the "Windows Machine Config Operator prerequisites" section of the "Red Hat OpenShift support for Windows Containers release notes." Important The virtual hardware version for your VM must meet the infrastructure requirements for OpenShift Container Platform. For more information, see the "VMware vSphere infrastructure requirements" section in the OpenShift Container Platform documentation. Also, you can refer to VMware's documentation on virtual machine hardware versions . Install and configure VMware Tools version 11.0.6 or greater on the Windows VM. See the VMware Tools documentation for more information. After installing VMware Tools on the Windows VM, verify the following: The C:\ProgramData\VMware\VMware Tools\tools.conf file exists with the following entry: exclude-nics= If the tools.conf file does not exist, create it with the exclude-nics option uncommented and set as an empty value. This entry ensures the cloned vNIC generated on the Windows VM by the hybrid-overlay is not ignored. The Windows VM has a valid IP address in vCenter: C:\> ipconfig The VMTools Windows service is running: PS C:\> Get-Service -Name VMTools | Select Status, StartType Install and configure the OpenSSH Server on the Windows VM. See Microsoft's documentation on installing OpenSSH for more details. Set up SSH access for an administrative user. See Microsoft's documentation on the Administrative user to do this. Important The public key used in the instructions must correspond to the private key you create later in the WMCO namespace that holds your secret. See the "Configuring a secret for the Windows Machine Config Operator" section for more details. 
You must create a new firewall rule in the Windows VM that allows incoming connections for container logs. Run the following PowerShell command to create the firewall rule on TCP port 10250: PS C:\> New-NetFirewallRule -DisplayName "ContainerLogsPort" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow Clone the Windows VM so it is a reusable image. Follow the VMware documentation on how to clone an existing virtual machine for more details. In the cloned Windows VM, run the Windows Sysprep tool : C:\> C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1 1 Specify the path to your unattend.xml file. Note There is a limit on how many times you can run the sysprep command on a Windows image. Consult Microsoft's documentation for more information. An example unattend.xml is provided, which maintains all the changes needed for the WMCO. You must modify this example; it cannot be used directly. Example 6.1. Example unattend.xml <?xml version="1.0" encoding="UTF-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="specialize"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Security-SPP-UX" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-SQMApi" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass="oobeSystem"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> 
<TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend> 1 Specify the ComputerName , which must follow the Kubernetes' names specification . These specifications also apply to Guest OS customization performed on the resulting template while creating new VMs. 2 Disable the automatic logon to avoid the security issue of leaving an open terminal with Administrator privileges at boot. This is the default value and must not be changed. 3 Replace the MyPassword placeholder with the password for the Administrator account. This prevents the built-in Administrator account from having a blank password by default. Follow Microsoft's best practices for choosing a password . After the Sysprep tool has completed, the Windows VM will power off. You must not use or power on this VM anymore. Convert the Windows VM to a template in vCenter . 6.5.2.1.1. Additional resources Configuring a secret for the Windows Machine Config Operator VMware vSphere infrastructure requirements 6.5.2.2. Enabling communication with the internal API server for the WMCO on vSphere The Windows Machine Config Operator (WMCO) downloads the Ignition config files from the internal API server endpoint. You must enable communication with the internal API server so that your Windows virtual machine (VM) can download the Ignition config files, and the kubelet on the configured VM can only communicate with the internal API server. Prerequisites You have installed a cluster on vSphere. Procedure Add a new DNS entry for api-int.<cluster_name>.<base_domain> that points to the external API server URL api.<cluster_name>.<base_domain> . This can be a CNAME or an additional A record. Note The external API endpoint was already created as part of the initial cluster installation on vSphere. 6.5.3. Sample YAML for a Windows MachineSet object on vSphere This sample YAML defines a Windows MachineSet object running on VMware vSphere that the Windows Machine Config Operator (WMCO) can react upon. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the Windows compute machine set name. The compute machine set name cannot be more than 9 characters long, due to the way machine names are generated in vSphere. 7 Configure the compute machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the size of the vSphere Virtual Machine Disk (VMDK). Note This parameter does not set the size of the Windows partition. You can resize the Windows partition by using the unattend.xml file or by creating the vSphere Windows virtual machine (VM) golden image with the required disk size. 10 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other Linux compute machines reside in the cluster. 11 Specify the full path of the Windows vSphere VM template to use, such as golden-images/windows-server-template . The name must be unique. Important Do not specify the original VM template. The VM template must remain off and must be cloned for new Windows machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. 12 The windows-user-data is created by the WMCO when the first Windows machine is configured. After that, the windows-user-data is available for all subsequent compute machine sets to consume. 13 Specify the vCenter Datacenter to deploy the compute machine set on. 14 Specify the vCenter Datastore to deploy the compute machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Optional: Specify the vSphere resource pool for your Windows VMs. 17 Specify the vCenter server IP or fully qualified domain name. 6.5.4. 
Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.5.5. Additional resources Overview of machine management
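In addition to checking the machine set counts, it can be useful to confirm that the Windows machines actually reach the Running phase and register as nodes. The following sketch relies only on standard resources and the standard kubernetes.io/os node label:
# Watch the phase of the machines owned by the new machine set.
oc get machines -n openshift-machine-api -o wide
# Once configured by the WMCO, Windows nodes carry the standard kubernetes.io/os=windows label.
oc get nodes -l kubernetes.io/os=windows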
[ "aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2022*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2019*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m 
agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: \"\" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: \"<zone>\" 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone_suffix> 2 namespace: openshift-machine-api spec: replicas: 1 
selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <windows_server_image> 9 sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n1-standard-4 networkInterfaces: - network: <infrastructure_id>-network 10 subnetwork: <infrastructure_id>-worker-subnet projectID: <project_id> 11 region: <region> 12 serviceAccounts: - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: windows-user-data 13 zone: <zone> 14", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 9 categories: null cluster: 10 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 11 image: 12 name: <image_id> type: name kind: NutanixMachineProviderConfig 13 memorySize: 16Gi 14 project: type: \"\" subnets: 15 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 16 userDataSecret: name: windows-user-data 17 vcpuSockets: 4 18 vcpusPerSocket: 1 19", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "exclude-nics=", "C:\\> ipconfig", "PS C:\\> Get-Service -Name VMTools | Select Status, StartType", "PS C:\\> New-NetFirewallRule -DisplayName \"ContainerLogsPort\" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow", "C:\\> C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <unattend xmlns=\"urn:schemas-microsoft-com:unattend\"> <settings pass=\"specialize\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-International-Core\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Security-SPP-UX\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" 
versionScope=\"nonSxS\"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-SQMApi\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass=\"oobeSystem\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m 
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m" ]
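The aws ec2 describe-images queries at the start of the command list above print a table of matching Windows Server AMIs. For scripting, a variant of the same query that returns only the newest image ID can be handy; this is a sketch reusing the same region placeholder and filters, not part of the official procedure:
# Print only the most recent matching AMI ID (same filters as above, text output for scripting).
aws ec2 describe-images --region <aws_region_name> \
  --filters "Name=name,Values=Windows_Server-2022*English*Core*Base*" "Name=is-public,Values=true" \
  --query "reverse(sort_by(Images, &CreationDate))[0].ImageId" --output text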
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/windows_container_support_for_openshift/creating-windows-machine-sets
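Before leaving the Windows machine set material, one more hedged sketch: the MachineHealthCheck resource mentioned in the Machine API overviews above can be pointed at a Windows machine set so that unhealthy machines are replaced automatically. The name, timeouts, and maxUnhealthy budget below are illustrative placeholders; check them against your own availability requirements before use.
# Sketch of a MachineHealthCheck scoped to one machine set via its machineset label.
oc apply -f - <<'EOF'
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: windows-machine-health-check                                   # placeholder name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <machine_set_name>  # placeholder
  unhealthyConditions:
  - type: Ready
    status: "Unknown"
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s
  maxUnhealthy: "40%"
EOF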
2.6.4.2. The /etc/xinetd.d/ Directory
2.6.4.2. The /etc/xinetd.d/ Directory The /etc/xinetd.d/ directory contains the configuration files for each service managed by xinetd , and the names of the files correlate to the services they configure. As with xinetd.conf , this directory is only read when the xinetd service is started. For any changes to take effect, the administrator must restart the xinetd service. The files in the /etc/xinetd.d/ directory use the same format and conventions as /etc/xinetd.conf . The primary reason the configuration for each service is stored in a separate file is to make customization easier and less likely to affect other services. To gain an understanding of how these files are structured, consider the /etc/xinetd.d/krb5-telnet file: These lines control various aspects of the telnet service: service - Specifies the service name, usually one of those listed in the /etc/services file. flags - Sets any of a number of attributes for the connection. REUSE instructs xinetd to reuse the socket for a Telnet connection. Note The REUSE flag is deprecated. All services now implicitly use the REUSE flag. socket_type - Sets the network socket type to stream . wait - Specifies whether the service is single-threaded ( yes ) or multi-threaded ( no ). user - Specifies which user ID the process runs under. server - Specifies which binary executable to launch. log_on_failure - Specifies logging parameters for log_on_failure in addition to those already defined in xinetd.conf . disable - Specifies whether the service is disabled ( yes ) or enabled ( no ). Refer to the xinetd.conf man page for more information about these options and their usage.
[ "service telnet { flags = REUSE socket_type = stream wait = no user = root server = /usr/kerberos/sbin/telnetd log_on_failure += USERID disable = yes }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-xinetd_configuration_files-the_etcxinetd.d_directory
1.3. What Is GNOME Classic?
1.3. What Is GNOME Classic? GNOME Classic is a GNOME Shell feature and mode for users who prefer a more traditional desktop experience. While GNOME Classic is based on GNOME 3 technologies, it provides a number of changes to the user interface: The Applications and Places menus. The Applications menu is displayed at the top left of the screen. It gives the user access to applications organized into categories. The user can also open the Activities Overview from that menu. The Places menu is displayed next to the Applications menu on the top bar . It gives the user quick access to important folders, for example Downloads or Pictures . The taskbar. The taskbar is displayed at the bottom of the screen, and features: a window list, a notification icon displayed next to the window list, a short identifier for the current workspace and total number of available workspaces displayed next to the notification icon. Four available workspaces. In GNOME Classic, the number of workspaces available to the user is by default set to 4. Minimize and maximize buttons. Window titlebars in GNOME Classic feature the minimize and maximize buttons that let the user quickly minimize the windows to the window list, or maximize them to take up all of the space on the desktop. A traditional Super + Tab window switcher. In GNOME Classic, windows in the Super + Tab window switcher are not grouped by application. The system menu. The system menu is in the top right corner. You can update some of your settings, find information about your Wi-Fi connection, switch user, log out, and turn off your computer from this menu. Figure 1.2. GNOME Classic with the Calculator application and the Accessories submenu of the Applications menu 1.3.1. The GNOME Classic Extensions GNOME Classic is distributed as a set of GNOME Shell extensions . The GNOME Classic extensions are installed as dependencies of the gnome-classic-session package, which provides components required to run a GNOME Classic session. Because the GNOME Classic extensions are enabled by default on Red Hat Enterprise Linux 7, GNOME Classic is the default Red Hat Enterprise Linux 7 desktop user interface. The GNOME Classic extensions are the following: AlternateTab ( alternate-tab@gnome-shell-extensions.gcampax.github.com ), Applications Menu ( apps-menu@gnome-shell-extensions.gcampax.github.com ), Launch new instance ( launch-new-instance@gnome-shell-extensions.gcampax.github.com ), Places Status Indicator ( places-menu@gnome-shell-extensions.gcampax.github.com ), Window List ( window-list@gnome-shell-extensions.gcampax.github.com ). 1.3.2. Switching from GNOME Classic to GNOME and Back The user can switch from GNOME Classic to GNOME by logging out and clicking on the cogwheel next to Sign In . The cogwheel opens a drop-down menu, which contains GNOME Classic. To switch from GNOME Classic to GNOME from within the user session, run the following command: To switch back to GNOME Classic from within the same user session, run the following command: 1.3.3. Disabling GNOME Classic as the Default Session For all newly created users on Red Hat Enterprise Linux 7, GNOME Classic is set as the default session. To override that setting for a specific user, you need to modify the user's account service in the /var/lib/AccountsService/users/ username file. See Section 14.3.2, "Configuring a User Default Session" for details on how to do that. Getting More Information Users can find more information on using GNOME 3, GNOME Shell, or GNOME Classic in GNOME Help, which is provided by the gnome-user-docs package. To access GNOME Help, press the Super key to enter the Activities Overview , type help , and then press Enter .
[ "gnome-shell --mode=user -r &", "gnome-shell --mode=classic -r &" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/what-is-gnome-classic
Red Hat Cloud Instance Type Policy Guide
Red Hat Cloud Instance Type Policy Guide Red Hat Certified Cloud and Service Provider Certification 2025 For Use with Red Hat Cloud Instance Type Policy Guide Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_policy_guide/index
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provided by IBM Power enables you to create internal cluster resources. This approach internally provisions base services. Then, all applications can access additional storage classes. Note Only internal OpenShift Data Foundation clusters are supported on IBM Power. See Planning your deployment for more information about deployment requirements. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that your resource requirements are met. See requirements for installing OpenShift Data Foundation using local storage devices . On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See enabling key value backend path and policy in vault . Ensure that you are using signed certificates on your Vault servers. After you have addressed the above, follow the below steps in the order given: Install Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Find available storage devices . Create OpenShift Data Foundation cluster on IBM Power . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker nodes in the cluster with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Data Foundation. The devices to be used must be empty, that is, there should be no persistent volumes (PVs), volume groups (VGs), or local volumes (LVs) remaining on the disks. You must have a minimum of three labeled nodes. Each node that has local storage devices to be used by OpenShift Data Foundation must have a specific label to deploy OpenShift Data Foundation pods. To label the nodes, use the following command: For more information, see the Resource requirements section in the Planning guide. 1.2. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully, choose a unique path name as the backend path that follows the naming convention since it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to perform a write or delete operation on the secret using the following commands. Create a token matching the above policy.
[ "oc label nodes <NodeNames> cluster.ocs.openshift.io/openshift-storage=''", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_power/preparing_to_deploy_openshift_data_foundation
Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices
Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Amazon Web Services (AWS) EBS (type, gp2-csi or gp3-csi ) that provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Although, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation, this deployment method is not supported on ROSA. Note Only internal OpenShift Data Foundation clusters are supported on ROSA. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub for ROSA with hosted control planes (HCP). Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the storage namespace: Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Fill in role ARN . For instruction to create a Amazon resource name (ARN), see Creating an AWS role using a script . Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Select a Namespace . Note openshift-storage Namespace is not recommended for ROSA deployments. Use a user defined namespace for this deployment. Avoid using "redhat" or "openshift" prefixes in namespaces. Important This guide uses <storage_namespace> as an example namespace. Replace <storage_namespace> with your defined namespace in later steps. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Manual updates strategy is recommended for ROSA with hosted control planes. Ensure that the Enable option is selected for the Console plugin . 
Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step, and <storage_namespace> is the namespace where ODF operator and StorageSystem were created. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Select <storage-namespace> from the Project drop-down list. Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. 
Select the Use an existing StorageClass option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . 
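The Token requested by the Token authentication method is the one you generated against the odf policy when preparing the KMS earlier. As a minimal sketch (assuming the policy is named odf as in that example, and that the jq utility is available), you can capture the client token from the JSON output and paste it into the Token field:
VAULT_ODF_TOKEN=$(vault token create -policy=odf -format json | jq -r '.auth.client_token')
echo "$VAULT_ODF_TOKEN"   # copy this value into the Token field of the wizard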
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select <storage-namespace> from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
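If you prefer the command line to the web console, a rough equivalent of this check (a sketch only, assuming the user-defined namespace referred to as <storage-namespace> in this guide) is:
oc get pods -n <storage-namespace> -o wide
# Narrow the listing down to the core storage components
oc get pods -n <storage-namespace> | grep -E 'ocs-operator|odf-operator|rook-ceph|noobaa|csi'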
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server- * (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (2 pods on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) Monitoring prometheus-odf-prometheus-* (1 pod on any storage node) prometheus-operator-* (1 pod on any storage node) alertmanager-odf-alertmanager-* (1 pod on any storage node) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
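The exact storage class names depend on how the StorageSystem was configured. One way to confirm them from the command line (a sketch, not an exhaustive check) is to list the cluster's storage classes and filter for the Ceph and NooBaa provisioners:
oc get storageclass
# Typically you should see an RBD (block) class, a CephFS (shared filesystem) class, and a NooBaa object bucket class
oc get storageclass -o name | grep -Ei 'rbd|cephfs|noobaa'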
[ "oc annotate namespace storage-namespace openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n <storage-namespace> create serviceaccount <serviceaccount_name>", "oc -n <storage-namespace> create serviceaccount odf-vault-auth", "oc -n <storage-namespace> create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n <storage-namespace> create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: <storage-namespace> annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n <storage_namespace> get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n <storage_namespace> get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=<storage_namespace> policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=<storage_namespace> policies=odf ttl=1440h" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/deploy-using-dynamic-storage-devices-rosa
Chapter 4. Deleting a subscription manifest
Chapter 4. Deleting a subscription manifest You can remove unused manifests for maintenance purposes. However, deleting an active manifest on a connected Satellite Server is intended for debugging purposes only. Before deleting an active manifest, it is important to note the consequences: All subscriptions attached to running hosts will be deleted. All subscriptions attached to activation keys will be deleted. Red Hat Insights will be disabled due to a lost connection to Red Hat Satellite. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You are connected to a Red Hat Satellite Server. You have Red Hat Satellite 6 or later. You have the Subscriptions administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To delete an unused manifest, complete the following steps: From the Hybrid Cloud Console home page, click Services > Subscriptions and Spend > Manifests . From the Manifests page, click the name of the manifest that you want to delete. Important Deleting a manifest will remove all the subscriptions attached to running hosts and activation keys. Red Hat Insights will be disabled. This operation is permanent and cannot be undone. From the Details panel, click Delete manifest . If you are sure that you want to delete the selected manifest, select the confirmation check box and then click Delete .
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_connected_satellite_server/proc-deleting-manifest-satellite-connected
7.212. tar
7.212. tar 7.212.1. RHBA-2015:1285 - tar bug fix update Updated tar packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The GNU tar program can save multiple files in one archive and restore the files from that archive. Bug Fixes BZ# 923359 Previously, when the "--verify" or "-W" option was used, the tar utility always exited with a status of 2, and false warning messages were printed for each archived file. This behavior was a regression introduced in tar-1.23-11.el6. With this update, tar exits with a status of 2 only if there is a real problem with the archived files. BZ# 1034360 Prior to this update, tar interpreted an argument containing an unescaped "[" character and no corresponding "]" character as a pattern-matching string instead of an archive member name, unless the "--no-wildcard" option was used. Consequently, if a user wanted to extract an existing archive member with a path name containing such an argument, tar failed to match the argument with the corresponding member, printed an error message, and eventually exited with a non-zero exit status. This problem has been fixed, and tar is now able to extract such a file. BZ# 1056672 Previously, tar did not automatically detect archives compressed by the xz program if the user did not specify the "-J" or "--xz" option on the command line. As a consequence, if the processed archive had the ".xz" extension, tar extracted or listed the contents of the archive but printed an error message and eventually exited with a non-zero exit status. If the archive did not have this extension, tar failed. With this update, the automatic recognition mechanism has been improved. As a result, tar no longer prints an error message in this scenario, and it extracts or lists the contents of such archives correctly regardless of the extension. BZ# 1119312 The tar(1) man page does not list all the available options; however, it now mentions that complete information on using tar is available in the tar Info page, which can be displayed by running the "info tar" command. Users of tar are advised to upgrade to these updated packages, which fix these bugs.
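The xz auto-detection fix is easiest to see with a short, self-contained test. The following commands are only an illustrative sketch (the file and directory names are made up for the example); creating an archive still requires an explicit compression option, while listing and extracting now work without one:
mkdir -p demo && echo "hello" > demo/file.txt
tar -cJf demo.tar.xz demo/      # compression must still be requested explicitly when creating
tar -tf demo.tar.xz             # listing works without -J thanks to auto-detection
mv demo.tar.xz demo.archive
tar -xf demo.archive            # extraction also works even without the .xz extension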
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-tar
Chapter 2. Installing API controller
Chapter 2. Installing API controller To install API controller use the community Operator . Prerequisites cluster-admin access to an OpenShift cluster. Procedure In the OpenShift Container Platform web console, log in with cluster-admin privileges. In the left navigation menu, click Operators > OperatorHub . In the Filter by keyword text box, enter Apicurio to find the Apicurio API Controller . Read the information about the Operator, and click Install to display the Operator subscription page. Accept the default subscription settings noting the following: Installation mode : All namespaces on the cluster (default) . Installed namespace : Select the namespace where you want to install the Operator, for example, api-controller . If the namespace does not already exist, click this field and select Create Project to create the namespace. Approval Strategy : Select Automatic or Manual . Click Install , and wait a few moments until the Operator is installed and ready for use. Verify that the Operator is installed. After you have installed the Operator, click Operators > Installed Operators to verify that the Apicurio API Controller is installed in your selected namespace, for example api-controller . Change to the Developer view in the OpenShift Container Platform web console to apply the YAML required for installation. Create a PostgreSQL database using the following YAML in the api-controller namespace: Create a CR named apicurio , and the required Routes using the following YAML in the api-controller namespace: Note Replace mycluster.example.com with your cluster hostname. Verification Navigate to the api-controller-studio-ui Route and click the Location URL. The Apicurio Studio console should be displayed.
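If you prefer to verify from the command line rather than the web console, a minimal sketch (assuming the api-controller namespace and the Route names used in the YAML below) is:
oc get pods -n api-controller     # the PostgreSQL and Apicurio pods should be Running
oc get routes -n api-controller   # the three Routes created above should be listed
echo "https://$(oc get route api-controller-studio-ui -n api-controller -o jsonpath='{.spec.host}')"   # prints the Studio URL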
[ "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-pvc namespace: api-controller spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi # Adjust the storage size as needed --- apiVersion: apps/v1 kind: Deployment metadata: namespace: \"api-controller\" labels: app: postgresql name: postgresql spec: replicas: 1 selector: matchLabels: app: postgresql template: metadata: labels: app: postgresql spec: initContainers: - name: init-data image: busybox command: ['sh', '-c', 'rm -rf /var/lib/postgresql/data/* && mkdir -p /var/lib/postgresql/data/pgdata'] volumeMounts: - mountPath: \"/var/lib/postgresql/data\" name: \"registry-pgdata\" containers: - name: postgresql image: quay.io/debezium/postgres:13-alpine ports: - containerPort: 5432 env: - name: POSTGRES_DB value: registry - name: POSTGRES_USER value: apicurio - name: POSTGRES_PASSWORD value: registry - name: PGDATA value: \"/var/lib/postgresql/data/pgdata\" volumeMounts: - mountPath: \"/var/lib/postgresql/data\" name: \"registry-pgdata\" volumes: - name: registry-pgdata persistentVolumeClaim: claimName: registry-pvc --- apiVersion: v1 kind: Service metadata: namespace: \"api-controller\" labels: app: postgresql name: postgresql-service spec: ports: - name: http port: 5432 protocol: TCP targetPort: 5432 selector: app: postgresql type: ClusterIP", "Replace mycluster.example.com with your cluster hostname Create an API Controller custom resource apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry3 metadata: name: apicurio namespace: api-controller spec: studioUi: enabled: true env: - name: APICURIO_REGISTRY_API_URL value: 'https://api-controller-app.apps.mycluster.example.com/apis/registry/v3' - name: APICURIO_REGISTRY_UI_URL value: 'https://api-controller-ui.apps.mycluster.example.com' ui: env: - name: REGISTRY_API_URL value: 'https://api-controller-app.apps.mycluster.example.com/apis/registry/v3' app: sql: dataSource: username: apicurio password: registry url: 'jdbc:postgresql://postgresql-service:5432/registry' --- Create a route for the Apicurio Registry API apiVersion: route.openshift.io/v1 kind: Route metadata: name: api-controller-registry-api namespace: api-controller spec: host: api-controller-app.apps.mycluster.example.com path: / to: kind: Service name: apicurio-app-service port: targetPort: http tls: termination: edge insecureEdgeTerminationPolicy: Redirect wildcardPolicy: None --- Create a route for the Apicurio Registry UI apiVersion: route.openshift.io/v1 kind: Route metadata: name: api-controller-registry-ui namespace: api-controller spec: host: api-controller-ui.apps.mycluster.example.com path: / to: kind: Service name: apicurio-ui-service port: targetPort: http tls: termination: edge insecureEdgeTerminationPolicy: Redirect wildcardPolicy: None --- Create a route for the Apicurio Studio UI apiVersion: route.openshift.io/v1 kind: Route metadata: name: api-controller-studio-ui namespace: api-controller spec: host: api-controller-studio-ui.apps.mycluster.example.com path: / to: kind: Service name: apicurio-studio-ui-service port: targetPort: http tls: termination: edge insecureEdgeTerminationPolicy: Redirect wildcardPolicy: None" ]
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/getting_started_with_api_controller/proc-installing-api-controller
Tutorials
Tutorials Red Hat OpenShift Service on AWS 4 Red Hat OpenShift Service on AWS tutorials Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/tutorials/index
Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation
Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation A Red Hat OpenShift Data Foundation deployment can be stretched between two different geographical locations to provide the storage infrastructure with disaster recovery capabilities. When faced with a disaster, such as when one of the two locations is partially or completely unavailable, OpenShift Data Foundation deployed on OpenShift Container Platform must be able to survive. This solution is available only for data centers spanning a metropolitan area, with specific latency requirements between the servers of the infrastructure. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes, follow the latency requirements specified for etcd; see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites (Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. The following diagram shows the simplest deployment for a stretched cluster: OpenShift nodes and OpenShift Data Foundation daemons In the diagram, the OpenShift Data Foundation monitor pod deployed in the Arbiter zone has a built-in tolerance for the master nodes. The diagram shows the master nodes in each Data Zone, which are required for a highly available OpenShift Container Platform control plane. Also, it is important that the OpenShift Container Platform nodes in one of the zones have network connectivity with the OpenShift Container Platform nodes in the other two zones. Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift Virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. 5.1. Requirements for enabling stretch cluster Ensure you have addressed the OpenShift Container Platform requirements for deployments spanning multiple sites. For more information, see the knowledgebase article on cluster deployments spanning multiple sites . Ensure that you have at least three OpenShift Container Platform master nodes in three different zones, with one master node in each of the three zones. Ensure that you have at least four OpenShift Container Platform worker nodes evenly distributed across the two Data Zones. For stretch clusters on bare metal, use an SSD drive as the root drive for the OpenShift Container Platform master nodes. Ensure that each node is pre-labeled with its zone label. For more information, see the Applying topology zone labels to OpenShift Container Platform nodes section. The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms between zones. Contact Red Hat Customer Support if you are planning to deploy with higher latencies. Note Flexible scaling and Arbiter cannot both be enabled at the same time because they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. In an Arbiter cluster, however, you need to add at least one node in each of the two data zones. 5.2. Applying topology zone labels to OpenShift Container Platform nodes During a site outage, the zone that has the arbiter function makes use of the arbiter label. These labels are arbitrary and must be unique for the three locations.
For example, you can label the nodes as follows: To apply the labels to the node: <NODENAME> Is the name of the node <LABEL> Is the topology zone label To validate the labels using the example labels for the three zones: <LABEL> Is the topology zone label Alternatively, you can run a single command to see all the nodes with its zone. The stretch cluster topology zone labels are now applied to the appropriate OpenShift Container Platform nodes to define the three locations. step Install the local storage operator from the OpenShift Container Platform web console . 5.3. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.4. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least four worker nodes evenly distributed across two data centers in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see Planning your deployment . Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in command-line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to search for the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you selected Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . 
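While the OpenShift Data Foundation Operator installs, you can optionally follow its progress from the command line. This is only a convenience sketch (the exact ClusterServiceVersion name varies with the release):
oc get csv -n openshift-storage -w
# The installation is complete when the odf-operator ClusterServiceVersion reports PHASE: Succeeded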
Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. steps Create an OpenShift Data Foundation cluster . 5.5. Creating OpenShift Data Foundation cluster Prerequisites Ensure that you have met all the requirements in Requirements for enabling stretch cluster section. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the Create a new StorageClass using the local storage devices option. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on selected nodes. Important If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select SSD or NVMe to build a supported configuration. You can select HDDs for unsupported test installations. Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Select Enable arbiter checkbox if you want to use the stretch clusters. This option is available only when all the prerequisites for arbiter are fulfilled and the selected nodes are populated. For more information, see Arbiter stretch cluster requirements in Requirements for enabling stretch cluster . Select the arbiter zone from the dropdown list. Choose a performance profile for Configure performance . You can also configure the performance profile after the deployment using the Configure performance option from the options menu of the StorageSystems tab. 
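Before you pick a profile, it can help to see what the candidate worker nodes can actually allocate. The following is only a rough sketch and assumes you want to inspect all worker nodes:
oc get nodes -l node-role.kubernetes.io/worker -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory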
Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Network is set to Default (OVN) if you are using a single network. You can switch to Custom (Multus) if you are using multiple network interfaces and then choose any one of the following: Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. 
Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface, and leave the Cluster Network Interface blank. Click . In the Data Protection page, click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. For arbiter mode of deployment: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the arbiter key in the spec section and ensure enable is set to true . To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . 5.6. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 5.6.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 5.1, "Pods corresponding to OpenShift Data Foundation cluster" . Click the Running and Completed tabs to verify that the following pods are in Running and Completed state: Table 5.1. 
Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (5 pods distributed across 3 zones, 2 per data-center zone and 1 in the arbiter zone) MGR rook-ceph-mgr-* (2 pods on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across 2 data-center zones) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (2 pods distributed across 2 data-center zones) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node and 1 pod in the arbiter zone) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 5.6.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 5.6.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway has only a single copy of its database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, the application data residing on the Multicloud Object Gateway can be lost entirely. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, you can then revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 5.6.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 5.7.
Install Zone Aware Sample Application Deploy a zone aware sample application to validate whether an OpenShift Data Foundation, stretch cluster setup is configured correctly. Important With latency between the data zones, you can expect to see performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). The rate of or amount of performance degradation depends on the latency between the zones and on the application behavior using the storage (such as heavy write traffic). Ensure that you test the critical applications with stretch cluster configuration to ensure sufficient application performance for the required service levels. A ReadWriteMany (RWX) Persistent Volume Claim (PVC) is created using the ocs-storagecluster-cephfs storage class. Multiple pods use the newly created RWX PVC at the same time. The application used is called File Uploader. Demonstration on how an application is spread across topology zones so that it is still available in the event of a site outage: Note This demonstration is possible since this application shares the same RWX volume for storing files. It works for persistent data access as well because Red Hat OpenShift Data Foundation is configured as a stretched cluster with zone awareness and high availability. Create a new project. Deploy the example PHP application called file-uploader. Example Output: View the build log and wait until the application is deployed. Example Output: The command prompt returns out of the tail mode after you see Push successful . Note The new-app command deploys the application directly from the git repository and does not use the OpenShift template, hence the OpenShift route resource is not created by default. You need to create the route manually. Scaling the application Scale the application to four replicas and expose its services to make the application zone aware and available. You should have four file-uploader pods in a few minutes. Repeat the above command until there are 4 file-uploader pods in the Running status. Create a PVC and attach it into an application. This command: Creates a PVC. Updates the application deployment to include a volume definition. Updates the application deployment to attach a volume mount into the specified mount-path. Creates a new deployment with the four application pods. Check the result of adding the volume. Example Output: Notice the ACCESS MODE is set to RWX. All the four file-uploader pods are using the same RWX volume. Without this access mode, OpenShift does not attempt to attach multiple pods to the same Persistent Volume (PV) reliably. If you attempt to scale up the deployments that are using ReadWriteOnce (RWO) PV, the pods may get colocated on the same node. 5.7.1. Scaling the application after installation Procedure Scale the application to four replicas and expose its services to make the application zone aware and available. You should have four file-uploader pods in a few minutes. Repeat the above command until there are 4 file-uploader pods in the Running status. Create a PVC and attach it into an application. This command: Creates a PVC. Updates the application deployment to include a volume definition. Updates the application deployment to attach a volume mount into the specified mount-path. Creates a new deployment with the four application pods. Check the result of adding the volume. Example Output: Notice the ACCESS MODE is set to RWX. 
All four file-uploader pods are using the same RWX volume. Without this access mode, OpenShift does not attempt to attach multiple pods to the same Persistent Volume (PV) reliably. If you attempt to scale up deployments that use a ReadWriteOnce (RWO) PV, the pods may get colocated on the same node. 5.7.2. Modify Deployment to be Zone Aware Currently, the file-uploader Deployment is not zone aware and can schedule all the pods in the same zone. In that case, if there is a site outage, the application becomes unavailable. For more information, see Controlling pod placement by using pod topology spread constraints . Add the pod placement rule in the application deployment configuration to make the application zone aware. Run the following command, and review the output: Example Output: Edit the deployment to use the topology zone labels. Add the following new lines between the Start and End markers (shown in the output in the previous step): Example output: Scale down the deployment to zero pods and then back up to four pods. This is needed because the deployment changed in terms of pod placement. Scaling down to zero pods Example output: Scaling up to four pods Example output: Verify that the four pods are spread across the four nodes in the datacenter1 and datacenter2 zones. Example output: Search for the zone labels used. Example output: Use the file-uploader web application in your browser to upload new files. Find the route that is created. Example Output: Point your browser to the web application using the route from the previous step. The web application lists all the uploaded files and offers the ability to upload new ones as well as download the existing data. Right now, there is nothing. Select an arbitrary file from your local machine and upload it to the application. Click Choose file to select an arbitrary file. Click Upload . Figure 5.1. A simple PHP-based file upload tool Click List uploaded files to see the list of all currently uploaded files. Note The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware. 5.8. Recovering OpenShift Data Foundation stretch cluster Given that the purpose of the stretch cluster disaster recovery solution is to provide resiliency in the face of a complete or partial site outage, it is important to understand the different methods of recovery for applications and their storage. How the application is architected determines how soon it becomes available again on the active zone. There are different methods of recovery for applications and their storage depending on the site outage. The recovery time depends on the application architecture. The different methods of recovery are as follows: Recovering zone-aware HA applications with RWX storage . Recovering HA applications with RWX storage . Recovering applications with RWO storage . Recovering StatefulSet pods . 5.8.1. Understanding zone failure For the purpose of this section, zone failure is considered as a failure where all OpenShift Container Platform master and worker nodes in a zone are no longer communicating with the resources in the second data zone (for example, powered down nodes). If communication between the data zones is still partially working (intermittently up or down), the cluster, storage, and network admins should disconnect the communication path between the data zones for recovery to succeed.
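When deciding which zone is affected, it helps to map nodes to zones quickly. A minimal sketch, assuming the example zone labels used earlier in this chapter (datacenter1, datacenter2, and arbiter):
oc get nodes -L topology.kubernetes.io/zone
# Check the state of only the nodes in the suspect zone
oc get nodes -l topology.kubernetes.io/zone=datacenter1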
Important When you install the sample application, power off the OpenShift Container Platform nodes (at least the nodes with OpenShift Data Foundation devices) to test the failure of a data zone in order to validate that your file-uploader application is available, and you can upload new files. 5.8.2. Recovering zone-aware HA applications with RWX storage Applications that are deployed with topologyKey: topology.kubernetes.io/zone have one or more replicas scheduled in each data zone, and are using shared storage, that is, ReadWriteMany (RWX) CephFS volume, terminate themselves in the failed zone after few minutes and new pods are rolled in and stuck in pending state until the zones are recovered. An example of this type of application is detailed in the Install Zone Aware Sample Application section. Important During zone recovery if application pods go into CrashLoopBackOff (CLBO) state with permission denied error while mounting the CephFS volume, then restart the nodes where the pods are scheduled. Wait for some time and then check if the pods are running again. 5.8.3. Recovering HA applications with RWX storage Applications that are using topologyKey: kubernetes.io/hostname or no topology configuration have no protection against all of the application replicas being in the same zone. Note This can happen even with podAntiAffinity and topologyKey: kubernetes.io/hostname in the Pod spec because this anti-affinity rule is host-based and not zone-based. If this happens and all replicas are located in the zone that fails, the application using ReadWriteMany (RWX) storage takes 6-8 minutes to recover on the active zone. This pause is for the OpenShift Container Platform nodes in the failed zone to become NotReady (60 seconds) and then for the default pod eviction timeout to expire (300 seconds). 5.8.4. Recovering applications with RWO storage Applications that use ReadWriteOnce (RWO) storage have a known behavior described in this Kubernetes issue . Because of this issue, if there is a data zone failure, any application pods in that zone mounting RWO volumes (for example, cephrbd based volumes) are stuck with Terminating status after 6-8 minutes and are not re-created on the active zone without manual intervention. Check the OpenShift Container Platform nodes with a status of NotReady . There may be an issue that prevents the nodes from communicating with the OpenShift control plane. However, the nodes may still be performing I/O operations against Persistent Volumes (PVs). If two pods are concurrently writing to the same RWO volume, there is a risk of data corruption. Ensure that processes on the NotReady node are either terminated or blocked until they are terminated. Example solutions: Use an out of band management system to power off a node, with confirmation, to ensure process termination. Withdraw a network route that is used by nodes at a failed site to communicate with storage. Note Before restoring service to the failed zone or nodes, confirm that all the pods with PVs have terminated successfully. To get the Terminating pods to recreate on the active zone, you can either force delete the pod or delete the finalizer on the associated PV. Once one of these two actions are completed, the application pod should recreate on the active zone and successfully mount its RWO storage. Force deleting the pod Force deletions do not wait for confirmation from the kubelet that the pod has been terminated. 
<PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Deleting the finalizer on the associated PV Find the associated PV for the Persistent Volume Claim (PVC) that is mounted by the Terminating pod and delete the finalizer using the oc patch command. <PV_NAME> Is the name of the PV An easy way to find the associated PV is to describe the Terminating pod. If you see a multi-attach warning, it should have the PV names in the warning (for example, pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c ). <PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Example output: 5.8.5. Recovering StatefulSet pods Pods that are part of a StatefulSet have a similar issue as pods mounting ReadWriteOnce (RWO) volumes. More information is referenced in the Kubernetes resource StatefulSet considerations . To get the pods part of a StatefulSet to re-create on the active zone after 6-8 minutes you need to force delete the pod with the same requirements (that is, OpenShift Container Platform node powered off or communication disconnected) as pods with RWO volumes.
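A worked sketch of the RWO recovery flow described above, using hypothetical names (the my-shared-storage project from the sample application and a made-up pod name):
# 1. Find pods stuck in Terminating state in the affected project
oc get pods -n my-shared-storage | grep Terminating
# 2. Describe one of them and look for the Multi-Attach warning, which names the affected PV
oc describe pod file-uploader-abc12 -n my-shared-storage | grep -A2 Multi-Attach
# 3. Either force delete the pod, or delete the finalizer on the PV named in the warning
oc delete pod file-uploader-abc12 --grace-period=0 --force --namespace my-shared-storage
Only run these steps after confirming that the node hosting the pod has been powered off or isolated from the storage, as described above.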
[ "topology.kubernetes.io/zone=arbiter for Master0 topology.kubernetes.io/zone=datacenter1 for Master1, Worker1, Worker2 topology.kubernetes.io/zone=datacenter2 for Master2, Worker3, Worker4", "oc label node <NODENAME> topology.kubernetes.io/zone= <LABEL>", "oc get nodes -l topology.kubernetes.io/zone= <LABEL> -o name", "oc get nodes -L topology.kubernetes.io/zone", "oc annotate namespace openshift-storage openshift.io/node-selector=", "spec: arbiter: enable: true [..] nodeTopologies: arbiterLocation: arbiter #arbiter zone storageDeviceSets: - config: {} count: 1 [..] replica: 4 status: conditions: [..] failureDomain: zone", "oc new-project my-shared-storage", "oc new-app openshift/php:latest~https://github.com/mashetty330/openshift-php-upload-demo --name=file-uploader", "Found image 4f2dcc0 (9 days old) in image stream \"openshift/php\" under tag \"7.2-ubi8\" for \"openshift/php:7.2- ubi8\" Apache 2.4 with PHP 7.2 ----------------------- PHP 7.2 available as container is a base platform for building and running various PHP 7.2 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts. Tags: builder, php, php72, php-72 * A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be cr eated * The resulting image will be pushed to image stream tag \"file-uploader:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources imagestream.image.openshift.io \"file-uploader\" created buildconfig.build.openshift.io \"file-uploader\" created deployment.apps \"file-uploader\" created service \"file-uploader\" created --> Success Build scheduled, use 'oc logs -f buildconfig/file-uploader' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose service/file-uploader' Run 'oc status' to view your app.", "oc logs -f bc/file-uploader -n my-shared-storage", "Cloning \"https://github.com/christianh814/openshift-php-upload-demo\" [...] 
Generating dockerfile with builder image image-registry.openshift-image-regis try.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c 0e05b593844b41d5494ea STEP 1: FROM image-registry.openshift-image-registry.svc:5000/openshift/php@s ha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea STEP 2: LABEL \"io.openshift.build.commit.author\"=\"Christian Hernandez <christ [email protected]>\" \"io.openshift.build.commit.date\"=\"Sun Oct 1 1 7:15:09 2017 -0700\" \"io.openshift.build.commit.id\"=\"288eda3dff43b02f7f7 b6b6b6f93396ffdf34cb2\" \"io.openshift.build.commit.ref\"=\"master\" \" io.openshift.build.commit.message\"=\"trying to modularize\" \"io.openshift .build.source-location\"=\"https://github.com/christianh814/openshift-php-uploa d-demo\" \"io.openshift.build.image\"=\"image-registry.openshift-image-regi stry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610 c0e05b593844b41d5494ea\" STEP 3: ENV OPENSHIFT_BUILD_NAME=\"file-uploader-1\" OPENSHIFT_BUILD_NAMESP ACE=\"my-shared-storage\" OPENSHIFT_BUILD_SOURCE=\"https://github.com/christ ianh814/openshift-php-upload-demo\" OPENSHIFT_BUILD_COMMIT=\"288eda3dff43b0 2f7f7b6b6b6f93396ffdf34cb2\" STEP 4: USER root STEP 5: COPY upload/src /tmp/src STEP 6: RUN chown -R 1001:0 /tmp/src STEP 7: USER 1001 STEP 8: RUN /usr/libexec/s2i/assemble ---> Installing application source => sourcing 20-copy-config.sh ---> 17:24:39 Processing additional arbitrary httpd configuration provide d by s2i => sourcing 00-documentroot.conf => sourcing 50-mpm-tuning.conf => sourcing 40-ssl-certs.sh STEP 9: CMD /usr/libexec/s2i/run STEP 10: COMMIT temp.builder.openshift.io/my-shared-storage/file-uploader-1:3 b83e447 Getting image source signatures [...]", "oc expose svc/file-uploader -n my-shared-storage", "oc scale --replicas=4 deploy/file-uploader -n my-shared-storage", "oc get pods -o wide -n my-shared-storage", "oc set volume deploy/file-uploader --add --name=my-shared-storage -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs --mount-path=/opt/app-root/src/uploaded -n my-shared-storage", "oc get pvc -n my-shared-storage", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-shared-storage Bound pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7 10Gi RWX ocs-storagecluster-cephfs 52s", "oc expose svc/file-uploader -n my-shared-storage", "oc scale --replicas=4 deploy/file-uploader -n my-shared-storage", "oc get pods -o wide -n my-shared-storage", "oc set volume deploy/file-uploader --add --name=my-shared-storage -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs --mount-path=/opt/app-root/src/uploaded -n my-shared-storage", "oc get pvc -n my-shared-storage", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-shared-storage Bound pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7 10Gi RWX ocs-storagecluster-cephfs 52s", "oc get deployment file-uploader -o yaml -n my-shared-storage | less", "[...] 
spec: progressDeadlineSeconds: 600 replicas: 4 revisionHistoryLimit: 10 selector: matchLabels: deployment: file-uploader strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: deployment: file-uploader spec: # <-- Start inserted lines after here containers: # <-- End inserted lines before here - image: image-registry.openshift-image-registry.svc:5000/my-shared-storage/file-uploader@sha256:a458ea62f990e431ad7d5f84c89e2fa27bdebdd5e29c5418c70c56eb81f0a26b imagePullPolicy: IfNotPresent name: file-uploader [...]", "oc edit deployment file-uploader -n my-shared-storage", "[...] spec: topologySpreadConstraints: - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway nodeSelector: node-role.kubernetes.io/worker: \"\" containers: [...]", "deployment.apps/file-uploader edited", "oc scale deployment file-uploader --replicas=0 -n my-shared-storage", "deployment.apps/file-uploader scaled", "oc scale deployment file-uploader --replicas=4 -n my-shared-storage", "deployment.apps/file-uploader scaled", "oc get pods -o wide -n my-shared-storage | egrep '^file-uploader'| grep -v build | awk '{print USD7}' | sort | uniq -c", "1 perf1-mz8bt-worker-d2hdm 1 perf1-mz8bt-worker-k68rv 1 perf1-mz8bt-worker-ntkp8 1 perf1-mz8bt-worker-qpwsr", "oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master", "perf1-mz8bt-worker-d2hdm Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-k68rv Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-ntkp8 Ready worker 35d v1.20.0+5fbfd19 datacenter2 perf1-mz8bt-worker-qpwsr Ready worker 35d v1.20.0+5fbfd19 datacenter2", "oc get route file-uploader -n my-shared-storage -o jsonpath --template=\"http://{.spec.host}{'\\n'}\"", "http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com", "oc delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>", "oc patch -n openshift-storage pv/ <PV_NAME> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge", "oc describe pod <PODNAME> --namespace <NAMESPACE>", "[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m5s default-scheduler Successfully assigned openshift-storage/noobaa-db-pg-0 to perf1-mz8bt-worker-d2hdm Warning FailedAttachVolume 4m5s attachdetach-controller Multi-Attach error for volume \"pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c\" Volume is already exclusively attached to one node and can't be attached to another" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/introduction-to-stretch-cluster-disaster-recovery_stretch-cluster
Chapter 9. PreprovisioningImage [metal3.io/v1alpha1]
Chapter 9. PreprovisioningImage [metal3.io/v1alpha1] Description PreprovisioningImage is the Schema for the preprovisioningimages API Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PreprovisioningImageSpec defines the desired state of PreprovisioningImage status object PreprovisioningImageStatus defines the observed state of PreprovisioningImage 9.1.1. .spec Description PreprovisioningImageSpec defines the desired state of PreprovisioningImage Type object Property Type Description acceptFormats array (string) acceptFormats is a list of acceptable image formats. architecture string architecture is the processor architecture for which to build the image. networkDataName string networkDataName is the name of a Secret in the local namespace that contains network data to build in to the image. 9.1.2. .status Description PreprovisioningImageStatus defines the observed state of PreprovisioningImage Type object Property Type Description architecture string architecture is the processor architecture for which the image is built conditions array conditions describe the state of the built image conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } extraKernelParams string extraKernelParams is a string with extra parameters to pass to the kernel when booting the image over network. Only makes sense for initrd images. format string format is the type of image that is available at the download url: either iso or initrd. imageUrl string imageUrl is the URL from which the built image can be downloaded. kernelUrl string kernelUrl is the URL from which the kernel of the image can be downloaded. Only makes sense for initrd images. networkData object networkData is a reference to the version of the Secret containing the network data used to build the image. 9.1.3. .status.conditions Description conditions describe the state of the built image Type array 9.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 9.1.5. .status.networkData Description networkData is a reference to the version of the Secret containing the network data used to build the image. Type object Property Type Description name string version string 9.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/preprovisioningimages GET : list objects of kind PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages DELETE : delete collection of PreprovisioningImage GET : list objects of kind PreprovisioningImage POST : create a PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name} DELETE : delete a PreprovisioningImage GET : read the specified PreprovisioningImage PATCH : partially update the specified PreprovisioningImage PUT : replace the specified PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name}/status GET : read status of the specified PreprovisioningImage PATCH : partially update status of the specified PreprovisioningImage PUT : replace status of the specified PreprovisioningImage 9.2.1. /apis/metal3.io/v1alpha1/preprovisioningimages Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PreprovisioningImage Table 9.2. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImageList schema 401 - Unauthorized Empty 9.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages Table 9.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PreprovisioningImage Table 9.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PreprovisioningImage Table 9.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.8. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImageList schema 401 - Unauthorized Empty HTTP method POST Description create a PreprovisioningImage Table 9.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.10. Body parameters Parameter Type Description body PreprovisioningImage schema Table 9.11. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 202 - Accepted PreprovisioningImage schema 401 - Unauthorized Empty 9.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name} Table 9.12. Global path parameters Parameter Type Description name string name of the PreprovisioningImage namespace string object name and auth scope, such as for teams and projects Table 9.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PreprovisioningImage Table 9.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.15. Body parameters Parameter Type Description body DeleteOptions schema Table 9.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PreprovisioningImage Table 9.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.18. 
HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PreprovisioningImage Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body Patch schema Table 9.21. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PreprovisioningImage Table 9.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.23. Body parameters Parameter Type Description body PreprovisioningImage schema Table 9.24. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 401 - Unauthorized Empty 9.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name}/status Table 9.25. Global path parameters Parameter Type Description name string name of the PreprovisioningImage namespace string object name and auth scope, such as for teams and projects Table 9.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PreprovisioningImage Table 9.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.28. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PreprovisioningImage Table 9.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.30. Body parameters Parameter Type Description body Patch schema Table 9.31. 
HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PreprovisioningImage Table 9.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.33. Body parameters Parameter Type Description body PreprovisioningImage schema Table 9.34. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 401 - Unauthorized Empty
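Because these endpoints are served through the standard Kubernetes API machinery, the resource can also be read with the usual oc verbs. The commands below are a hedged illustration rather than part of the generated API reference; the openshift-machine-api namespace and the example-host name are assumptions and may differ in your cluster.

# List PreprovisioningImage resources in all namespaces.
oc get preprovisioningimages --all-namespaces

# Read one resource in full, including .status.conditions.
# 'example-host' and the namespace are placeholders for illustration only.
oc get preprovisioningimages example-host -n openshift-machine-api -o yaml

# Print only the download URL of the built image, if .status.imageUrl is set.
oc get preprovisioningimages example-host -n openshift-machine-api -o jsonpath='{.status.imageUrl}{"\n"}'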
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/provisioning_apis/preprovisioningimage-metal3-io-v1alpha1
6.14.4. Logging
6.14.4. Logging You can enable debugging for all daemons in a cluster, or you can enable logging for specific cluster processes. To enable debugging for all daemons, execute the following command. By default, logging is directed to the /var/log/cluster/daemon.log file, where daemon is the name of the cluster process. For example, the following command enables debugging for all daemons. Note that this command resets all other properties that you can set with the --setlogging option to their default values, as described in Section 6.1.5, "Commands that Overwrite Settings". To enable debugging for an individual cluster process, execute the following command. Per-daemon logging configuration overrides the global settings. For example, the following commands enable debugging for the corosync and fenced daemons. To remove the log settings for individual daemons, use the following command. For example, the following command removes the daemon-specific log settings for the fenced daemon. For a list of the logging daemons for which you can enable logging as well as the additional logging options you can configure for both global and per-daemon logging, see the cluster.conf(5) man page. Note that when you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes".
[ "ccs -h host --setlogging [logging options]", "ccs -h node1.example.com --setlogging debug=on", "ccs -h host --addlogging [logging daemon options]", "ccs -h node1.example.com --addlogging name=corosync debug=on ccs -h node1.example.com --addlogging name=fenced debug=on", "ccs -h host --rmlogging name= clusterprocess", "ccs -h host --rmlogging name=fenced" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-logconfig-ccs-CA
Chapter 19. Deploying distributed units at scale in a disconnected environment
Chapter 19. Deploying distributed units at scale in a disconnected environment Use zero touch provisioning (ZTP) to provision distributed units at new edge sites in a disconnected environment. The workflow starts when the site is connected to the network and ends with the CNF workload deployed and running on the site nodes. Important ZTP for RAN deployments is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. 19.1. Provisioning edge sites at scale Telco edge computing presents extraordinary challenges with managing hundreds to tens of thousands of clusters in hundreds of thousands of locations. These challenges require fully automated management solutions with, as closely as possible, zero human interaction. Zero touch provisioning (ZTP) allows you to provision new edge sites with declarative configurations of bare-metal equipment at remote sites. Template or overlay configurations install OpenShift Container Platform features that are required for CNF workloads. End-to-end functional test suites are used to verify CNF-related features. All configurations are declarative in nature. You start the workflow by creating declarative configurations for ISO images that are delivered to the edge nodes to begin the installation process. The images are used to repeatedly provision large numbers of nodes efficiently and quickly, allowing you to keep up with requirements from the field for far edge nodes. Service providers are deploying a more distributed mobile network architecture allowed by the modular functional framework defined for 5G. This allows service providers to move from appliance-based radio access networks (RAN) to open cloud RAN architecture, gaining flexibility and agility in delivering services to end users. The following diagram shows how ZTP works within a far edge framework. 19.2. The GitOps approach ZTP uses the GitOps deployment set of practices for infrastructure deployment that allows developers to perform tasks that would otherwise fall under the purview of IT operations. GitOps achieves these tasks using declarative specifications stored in Git repositories, such as YAML files and other defined patterns, that provide a framework for deploying the infrastructure. The declarative output is leveraged by the Open Cluster Manager for multisite deployment. One of the motivators for a GitOps approach is the requirement for reliability at scale. This is a significant challenge that GitOps helps solve. GitOps addresses the reliability issue by providing traceability, RBAC, and a single source of truth for the desired state of each site. Scale issues are addressed by GitOps providing structure, tooling, and event-driven operations through webhooks. 19.3. About ZTP and distributed units on single nodes You can install a distributed unit (DU) on a single node at scale with Red Hat Advanced Cluster Management (RHACM), also referred to as ACM, using the assisted installer (AI) and the policy generator with core-reduction technology enabled. The DU installation is done using zero touch provisioning (ZTP) in a disconnected environment.
ACM manages clusters in a hub and spoke architecture, where a single hub cluster manages many spoke clusters. ACM applies radio access network (RAN) policies from predefined custom resources (CRs). Hub clusters running ACM provision and deploy the spoke clusters using ZTP and AI. DU installation follows the AI installation of OpenShift Container Platform on a single node. The AI service handles provisioning of OpenShift Container Platform on single nodes running on bare metal. ACM ships with and deploys the assisted installer when the MultiClusterHub custom resource is installed. With ZTP and AI, you can provision OpenShift Container Platform single nodes to run your DUs at scale. A high-level overview of ZTP for distributed units in a disconnected environment is as follows: A hub cluster running ACM manages a disconnected internal registry that mirrors the OpenShift Container Platform release images. The internal registry is used to provision the spoke single nodes. You manage the bare-metal host machines for your DUs in an inventory file that uses YAML for formatting. You store the inventory file in a Git repository. You install the DU bare-metal host machines on site, and make the hosts ready for provisioning. To be ready for provisioning, the following is required for each bare-metal host: Network connectivity - including DNS for your network. Hosts should be reachable through the hub and managed spoke clusters. Ensure there is layer 3 connectivity between the hub and the host where you want to install your spoke cluster. Baseboard Management Controller (BMC) details for each host - ZTP uses the BMC URL and credentials to connect to and access the BMC. Create spoke cluster definition CRs. These define the relevant elements for the managed clusters. Required CRs are as follows: Custom Resource Description Namespace Namespace for the managed single-node cluster. BMCSecret CR Credentials for the host BMC. Image Pull Secret CR Pull secret for the disconnected registry. AgentClusterInstall Specifies the single-node cluster's configuration such as networking, number of supervisor (control plane) nodes, and so on. ClusterDeployment Defines the cluster name, domain, and other details. KlusterletAddonConfig Manages installation and termination of add-ons on the ManagedCluster for ACM. ManagedCluster Describes the managed cluster for ACM. InfraEnv Describes the installation ISO, created by the assisted installer service, that is mounted on the destination node. This is the final step of the manifest creation phase. BareMetalHost Describes the details of the bare-metal host, including the BMC and credential details. When a change is detected in the host inventory repository, a host management event is triggered to provision the new or updated host. The host is provisioned. When the host is provisioned and successfully rebooted, the host agent reports Ready status to the hub cluster. 19.4. Zero touch provisioning building blocks ACM deploys single-node OpenShift, which is OpenShift Container Platform installed on single nodes, leveraging zero touch provisioning (ZTP). The initial site plan is broken down into smaller components, and initial configuration data is stored in a Git repository. Zero touch provisioning uses a declarative GitOps approach to deploy these nodes. The deployment of the nodes includes: Installing the host operating system (RHCOS) on a blank server. Deploying OpenShift Container Platform on single nodes. Creating cluster policies and site subscriptions.
Leveraging a GitOps deployment topology for a develop once, deploy anywhere model. Making the necessary network configurations to the server operating system. Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV. Downloading images needed to run workloads (CNFs). 19.5. Single-node clusters You use zero touch provisioning (ZTP) to deploy single-node OpenShift clusters to run distributed units (DUs) on small hardware footprints at disconnected far edge sites. A single-node cluster runs OpenShift Container Platform on top of one bare-metal host, hence the single node. Edge servers contain a single node with supervisor functions and worker functions on the same host that are deployed at low bandwidth or disconnected edge sites. OpenShift Container Platform is configured on the single node to use workload partitioning. Workload partitioning separates cluster management workloads from user workloads and can run the cluster management workloads on a reserved set of CPUs. Workload partitioning is useful for resource-constrained environments, such as single-node production deployments, where you want to reserve most of the CPU resources for user workloads and configure OpenShift Container Platform to use fewer CPU resources within the host. A single-node cluster hosting a DU application on a node is divided into the following configuration categories: Common - Values are the same for all single-node cluster sites managed by a hub cluster. Pools of sites - Common across a pool of sites where a pool size can be 1 to n . Site specific - Likely specific to a site with no overlap with other sites, for example, a vlan. 19.6. Site planning considerations for distributed unit deployments Site planning for distributed units (DU) deployments is complex. The following is an overview of the tasks that you complete before the DU hosts are brought online in the production environment. Develop a network model. The network model depends on various factors such as the size of the area of coverage, number of hosts, projected traffic load, DNS, and DHCP requirements. Decide how many DU radio nodes are required to provide sufficient coverage and redundancy for your network. Develop mechanical and electrical specifications for the DU host hardware. Develop a construction plan for individual DU site installations. Tune host BIOS settings for production, and deploy the BIOS configuration to the hosts. Install the equipment on-site, connect hosts to the network, and apply power. Configure on-site switches and routers. Perform basic connectivity tests for the host machines. Establish production network connectivity, and verify host connections to the network. Provision and deploy on-site DU hosts at scale. Test and verify on-site operations, performing load and scale testing of the DU hosts before finally bringing the DU infrastructure online in the live production environment. 19.7. Low latency for distributed units (DUs) Low latency is an integral part of the development of 5G networks. Telecommunications networks require as little signal delay as possible to ensure quality of service in a variety of critical use cases. Low latency processing is essential for any communication with timing constraints that affect functionality and security. For example, 5G Telco applications require a guaranteed one millisecond one-way latency to meet Internet of Things (IoT) requirements. 
Low latency is also critical for the future development of autonomous vehicles, smart factories, and online gaming. Networks in these environments require an almost real-time flow of data. Low latency systems are about guarantees with regard to response and processing times. This includes keeping a communication protocol running smoothly, ensuring device security with fast responses to error conditions, or just making sure a system is not lagging behind when receiving a lot of data. Low latency is key for optimal synchronization of radio transmissions. OpenShift Container Platform enables low latency processing for DUs running on COTS hardware by using a number of technologies and specialized hardware devices: Real-time kernel for RHCOS Ensures workloads are handled with a high degree of process determinism. CPU isolation Avoids CPU scheduling delays and ensures CPU capacity is available consistently. NUMA awareness Aligns memory and huge pages with CPU and PCI devices to pin guaranteed container memory and huge pages to the NUMA node. This decreases latency and improves performance of the node. Huge pages memory management Using huge page sizes improves system performance by reducing the amount of system resources required to access page tables. Precision timing synchronization using PTP Allows synchronization between nodes in the network with sub-microsecond accuracy. 19.8. Configuring BIOS for distributed unit bare-metal hosts Distributed unit (DU) hosts require the BIOS to be configured before the host can be provisioned. The BIOS configuration is dependent on the specific hardware that runs your DUs and the particular requirements of your installation. Important In this Developer Preview release, configuration and tuning of BIOS for DU bare-metal host machines is the responsibility of the customer. Automatic setting of BIOS is not handled by the zero touch provisioning workflow. Procedure Set the UEFI/BIOS Boot Mode to UEFI. In the host boot sequence order, set Hard drive first. Apply the specific BIOS configuration for your hardware. The following table describes a representative BIOS configuration for an Intel Xeon Skylake or Intel Cascade Lake server, based on the Intel FlexRAN 4G and 5G baseband PHY reference design. Important The exact BIOS configuration depends on your specific hardware and network requirements. The following sample configuration is for illustrative purposes only. Table 19.1. Sample BIOS configuration for an Intel Xeon Skylake or Cascade Lake server BIOS Setting Configuration CPU Power and Performance Policy Performance Uncore Frequency Scaling Disabled Performance P-limit Disabled Enhanced Intel SpeedStep (R) Tech Enabled Intel Configurable TDP Enabled Configurable TDP Level Level 2 Intel(R) Turbo Boost Technology Enabled Energy Efficient Turbo Disabled Hardware P-States Disabled Package C-State C0/C1 state C1E Disabled Processor C6 Disabled Note Enable global SR-IOV and VT-d settings in the BIOS for the host. These settings are relevant to bare-metal environments. 19.9. Preparing the disconnected environment Before you can provision distributed units (DU) at scale, you must install Red Hat Advanced Cluster Management (RHACM), which handles the provisioning of the DUs. RHACM is deployed as an Operator on the OpenShift Container Platform hub cluster. It controls clusters and applications from a single console with built-in security policies. RHACM provisions and manages your DU hosts.
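For orientation, on a connected hub cluster the RHACM Operator is typically installed through Operator Lifecycle Manager with a Namespace, OperatorGroup, and Subscription along the lines of the sketch below. This is a hedged example, not a step from this procedure: the open-cluster-management namespace, the release-2.4 channel, and the redhat-operators catalog source are common defaults rather than values taken from this document, and in the disconnected flow described next the catalog source must instead point at your mirrored Operator catalog.

# Sketch only: subscribe the hub cluster to the RHACM Operator through OLM.
oc apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: open-cluster-management
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: open-cluster-management
  namespace: open-cluster-management
spec:
  targetNamespaces:
  - open-cluster-management
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: advanced-cluster-management
  namespace: open-cluster-management
spec:
  channel: release-2.4            # assumed channel; match your RHACM release
  name: advanced-cluster-management
  source: redhat-operators        # on a disconnected hub, use your mirrored catalog source
  sourceNamespace: openshift-marketplace
EOF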
To install RHACM in a disconnected environment, you create a mirror registry that mirrors the Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You also use a disconnected mirror host to serve the RHCOS ISO and RootFS disk images that provision the DU bare-metal host operating system. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. You can also use this procedure in unrestricted networks to ensure your clusters only use container images that have satisfied your organizational controls on external content. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place the mirror registry on a mirror host that has access to both your network and the internet. If you do not have access to a mirror host, use the disconnected procedure to copy images to a device that you can move across network boundaries. 19.9.1. Disconnected environment prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat support. Note Red Hat does not test third party registries with OpenShift Container Platform. 19.9.2. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. 
If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional resources For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source . 19.9.3. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 19.9.3.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 19.9.3.2. 
Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Prerequisites You configured a mirror registry to use in your disconnected environment. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager and save it to a .json file. Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Make a copy of your pull secret in JSON format: USD cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. Save the file either as ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json . The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Edit the new file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 19.9.3.3. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates that do not set a Subject Alternative Name, you must precede the oc commands in this procedure with GODEBUG=x509ignoreCN=0 . If you do not set this variable, the oc commands will fail with the following error: x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0 Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. 
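If you prefer to confirm the tag from the command line instead of the Repository Tags page, oc adm release info can resolve the release before you begin mirroring. This is a sketch; the 4.9.0-x86_64 tag and the pull secret path are placeholders.
$ oc adm release info -a <path>/<pull_secret_file_in_json> \
    quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64
The command prints the release name, digest, and component versions. If it fails, the tag does not exist or your pull secret does not grant access to the repository.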
Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your server, such as x86_64 : USD ARCHITECTURE=<server_architecture> Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. 
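For reference, the imageContentSources data generally has the shape shown in the following sketch. The registry host name and repository below are placeholders; always copy the exact section from your own command output. One option is to save it to a scratch file so that you can add it to install-config.yaml later:
$ cat <<'EOF' > imageContentSources.yaml
imageContentSources:
- mirrors:
  - registry.example.com:8443/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:8443/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
EOF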
Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 19.9.3.4. Adding RHCOS ISO and RootFS images to a disconnected mirror host Before you install a cluster on infrastructure that you provision, you must create Red Hat Enterprise Linux CoreOS (RHCOS) machines for it to use. Use a disconnected mirror to host the RHCOS images you require to provision your distributed unit (DU) bare-metal hosts. Prerequisites Deploy and configure an HTTP server to host the RHCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. You require ISO and RootFS images to install RHCOS on the DU hosts. RHCOS qcow2 images are not supported for this installation type. Procedure Log in to the mirror host. 
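If the HTTP server mentioned in the prerequisites is not already in place on the mirror host, one minimal option is to serve /var/www/html with httpd. The following sketch assumes a RHEL 8 host with firewalld; adjust the package manager and firewall commands for your environment.
$ sudo dnf install -y httpd
$ sudo systemctl enable --now httpd
$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --reload
The download commands in the next step write the images to /var/www/html, which is the default httpd document root.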
Obtain the RHCOS ISO and RootFS images from mirror.openshift.com , for example: Export the required image names and OpenShift Container Platform version as environment variables: USD export ISO_IMAGE_NAME=<iso_image_name> 1 USD export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1 USD export OCP_VERSION=<ocp_version> 1 1 ISO image name, for example, rhcos-4.9.0-fc.1-x86_64-live.x86_64.iso 1 RootFS image name, for example, rhcos-4.9.0-fc.1-x86_64-live-rootfs.x86_64.img 1 OpenShift Container Platform version, for example, latest-4.9 Download the required images: USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME} USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME} Verification steps Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example: USD wget http://USD(hostname)/USD{ISO_IMAGE_NAME} Expected output ... Saving to: rhcos-4.9.0-fc.1-x86_64-live.x86_64.iso rhcos-4.9.0-fc.1-x86_64- 11%[====> ] 10.01M 4.71MB/s ... 19.10. Installing Red Hat Advanced Cluster Management in a disconnected environment You use Red Hat Advanced Cluster Management (RHACM) on a hub cluster in the disconnected environment to manage the deployment of distributed unit (DU) profiles on multiple managed spoke clusters. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Configure a disconnected mirror registry for use in the cluster. Note If you want to deploy Operators to the spoke clusters, you must also add them to this registry. See Mirroring an Operator catalog for more information. Procedure Install RHACM on the hub cluster in the disconnected environment. See Installing RHACM in a disconnected environment . 19.11. Enabling assisted installer service on bare metal The Assisted Installer Service (AIS) deploys OpenShift Container Platform clusters. Red Hat Advanced Cluster Management (RHACM) ships with AIS. AIS is deployed when you enable the MultiClusterHub Operator on the RHACM hub cluster. For distributed units (DUs), RHACM supports OpenShift Container Platform deployments that run on a single bare-metal host. The single-node cluster acts as both a control plane and a worker node. Prerequisites Install OpenShift Container Platform 4.9 on a hub cluster. Install RHACM and create the MultiClusterHub resource. Create persistent volume custom resources (CR) for database and file system storage. You have installed the OpenShift CLI ( oc ). Procedure Modify the HiveConfig resource to enable the feature gate for Assisted Installer: USD oc patch hiveconfig hive --type merge -p '{"spec":{"targetNamespace":"hive","logLevel":"debug","featureGates":{"custom":{"enabled":["AlphaAgentInstallStrategy"]},"featureSet":"Custom"}}}' Modify the Provisioning resource to allow the Bare Metal Operator to watch all namespaces: USD oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}' Create the AgentServiceConfig CR. 
Save the following YAML in the agent_service_config.yaml file: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: databaseStorage: accessModes: - ReadWriteOnce resources: requests: storage: <db_volume_size> 1 filesystemStorage: accessModes: - ReadWriteOnce resources: requests: storage: <fs_volume_size> 2 osImages: 3 - openshiftVersion: "<ocp_version>" 4 version: "<ocp_release_version>" 5 url: "<iso_url>" 6 rootFSUrl: "<root_fs_url>" 7 cpuArchitecture: "x86_64" 1 Volume size for the databaseStorage field, for example 10Gi . 2 Volume size for the filesystemStorage field, for example 20Gi . 3 List of OS image details. Example describes a single OpenShift Container Platform OS version. 4 OpenShift Container Platform version to install, for example, 4.8 . 5 Specific install version, for example, 47.83.202103251640-0 . 6 ISO url, for example, https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/4.7.7/rhcos-4.7.7-x86_64-live.x86_64.iso . 7 Root FS image URL, for example, https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/4.7.7/rhcos-live-rootfs.x86_64.img Create the AgentServiceConfig CR by running the following command: USD oc create -f agent_service_config.yaml Example output agentserviceconfig.agent-install.openshift.io/agent created 19.12. ZTP custom resources Zero touch provisioning (ZTP) uses custom resource (CR) objects to extend the Kubernetes API or introduce your own API into a project or a cluster. These CRs contain the site-specific data required to install and configure a cluster for RAN applications. A custom resource definition (CRD) file defines your own object kinds. Deploying a CRD into the managed cluster causes the Kubernetes API server to begin serving the specified CR for the entire lifecycle. For each CR in the <site>.yaml file on the managed cluster, ZTP uses the data to create installation CRs in a directory named for the cluster. ZTP provides two ways for defining and installing CRs on managed clusters: a manual approach when you are provisioning a single cluster and an automated approach when provisioning multiple clusters. Manual CR creation for single clusters Use this method when you are creating CRs for a single cluster. This is a good way to test your CRs before deploying on a larger scale. Automated CR creation for multiple managed clusters Use the automated SiteConfig method when you are installing multiple managed clusters, for example, in batches of up to 100 clusters. SiteConfig uses ArgoCD as the engine for the GitOps method of site deployment. After completing a site plan that contains all of the required parameters for deployment, a policy generator creates the manifests and applies them to the hub cluster. Both methods create the CRs shown in the following table. On the cluster site, an automated Discovery image ISO file creates a directory with the site name and a file with the cluster name. Every cluster has its own namespace, and all of the CRs are under that namespace. The namespace and the CR names match the cluster name. Resource Description Usage BareMetalHost Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. Provides access to the BMC in order to load and boot the Discovery image ISO on the target server by using the Redfish protocol. InfraEnv Contains information for pulling OpenShift Container Platform onto the target bare-metal host. 
Used with ClusterDeployment to generate the Discovery ISO for the managed cluster. AgentClusterInstall Specifies the managed cluster's configuration such as networking and the number of supervisor (control plane) nodes. Shows the kubeconfig and credentials when the installation is complete. Specifies the managed cluster configuration information and provides status during the installation of the cluster. ClusterDeployment References the AgentClusterInstall to use. Used with InfraEnv to generate the Discovery ISO for the managed cluster. NMStateConfig Provides network configuration information such as MAC to IP mapping, DNS server, default route, and other network settings. This is not needed if DHCP is used. Sets up a static IP address for the managed cluster's Kube API server. Agent Contains hardware information about the target bare-metal host. Created automatically on the hub when the target machine's Discovery image ISO boots. ManagedCluster When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. The hub uses this resource to manage and show the status of managed clusters. KlusterletAddonConfig Contains the list of services provided by the hub to be deployed to a ManagedCluster . Tells the hub which addon services to deploy to a ManagedCluster . Namespace Logical space for ManagedCluster resources existing on the hub. Unique per site. Propagates resources to the ManagedCluster . Secret Two custom resources are created: BMC Secret and Image Pull Secret . BMC Secret authenticates into the target bare-metal host using its username and password. Image Pull Secret contains authentication information for the OpenShift Container Platform image installed on the target bare-metal host. ClusterImageSet Contains OpenShift Container Platform image information such as the repository and image name. Passed into resources to provide OpenShift Container Platform images. 19.13. Creating custom resources to install a single managed cluster This procedure tells you how to manually create and deploy a single managed cluster. If you are creating multiple clusters, perhaps hundreds, use the SiteConfig method described in "Creating ZTP custom resources for multiple managed clusters". Prerequisites Enable Assisted Installer Service. Ensure network connectivity: The container within the hub must be able to reach the Baseboard Management Controller (BMC) address of the target bare-metal host. The managed cluster must be able to resolve and reach the hub's API hostname and *.app hostname. Example of the hub's API and *.app hostname: console-openshift-console.apps.hub-cluster.internal.domain.com api.hub-cluster.internal.domain.com The hub must be able to resolve and reach the API and *.app hostname of the managed cluster. Here is an example of the managed cluster's API and *.app hostname: console-openshift-console.apps.sno-managed-cluster-1.internal.domain.com api.sno-managed-cluster-1.internal.domain.com A DNS Server that is IP reachable from the target bare-metal host. A target bare-metal host for the managed cluster with the following hardware minimums: 4 CPU or 8 vCPU 32 GiB RAM 120 GiB Disk for root filesystem When working in a disconnected environment, the release image needs to be mirrored. 
Use this command to mirror the release image: oc adm release mirror -a <pull_secret.json> --from=quay.io/openshift-release-dev/ocp-release:{{ mirror_version_spoke_release }} --to={{ provisioner_cluster_registry }}/ocp4 --to-release-image={{ provisioner_cluster_registry }}/ocp4:{{ mirror_version_spoke_release }} You mirrored the ISO and rootfs used to generate the spoke cluster ISO to an HTTP server and configured the settings to pull images from there. The images must match the version of the ClusterImageSet . To deploy a 4.9.0 version, the rootfs and ISO need to be set at 4.9.0. Procedure Create a ClusterImageSet for each specific cluster version that needs to be deployed. A ClusterImageSet has the following format: apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.9.0-rc.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64 2 1 The descriptive version that you want to deploy. 2 Points to the specific release image to deploy. Create the Namespace definition for the managed cluster: apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2 1 2 The name of the managed cluster to provision. Create the BMC Secret custom resource: apiVersion: v1 data: password: <bmc_password> 1 username: <bmc_username> 2 kind: Secret metadata: name: <cluster_name>-bmc-secret namespace: <cluster_name> type: Opaque 1 The password to the target bare-metal host. Must be base-64 encoded. 2 The username to the target bare-metal host. Must be base-64 encoded. Create the Image Pull Secret custom resource: apiVersion: v1 data: .dockerconfigjson: <pull_secret> 1 kind: Secret metadata: name: assisted-deployment-pull-secret namespace: <cluster_name> type: kubernetes.io/dockerconfigjson 1 The OpenShift Container Platform pull secret. Must be base-64 encoded. Create the AgentClusterInstall custom resource: apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: # Only include the annotation if using OVN, otherwise omit the annotation annotations: agent-install.openshift.io/install-config-overrides: '{"networking":{"networkType":"OVNKubernetes"}}' name: <cluster_name> namespace: <cluster_name> spec: clusterDeploymentRef: name: <cluster_name> imageSetRef: name: <cluster_image_set> 1 networking: clusterNetwork: - cidr: <cluster_network_cidr> 2 hostPrefix: 23 machineNetwork: - cidr: <machine_network_cidr> 3 serviceNetwork: - <service_network_cidr> 4 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <public_key> 5 1 The name of the ClusterImageSet custom resource used to install OpenShift Container Platform on the bare-metal host. 2 A block of IPv4 or IPv6 addresses in CIDR notation used for communication among cluster nodes. 3 A block of IPv4 or IPv6 addresses in CIDR notation used for the target bare-metal host external communication. Also used to determine the API and Ingress VIP addresses when provisioning DU single-node clusters. 4 A block of IPv4 or IPv6 addresses in CIDR notation used for cluster services internal communication. 5 Entered as plain text. You can use the public key to SSH into the node after it has finished installing. Note If you want to configure a static IP for the managed cluster at this point, see the procedure in this document for configuring static IP addresses for managed clusters. 
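As an alternative to hand-encoding the base64 values in the BMC Secret and Image Pull Secret manifests shown earlier in this procedure, you can have the CLI encode them for you. The following sketch uses the same secret names as the manifests; the credential values and the pull secret path are placeholders.
$ oc create secret generic <cluster_name>-bmc-secret -n <cluster_name> \
    --from-literal=username=<bmc_username> --from-literal=password=<bmc_password>
$ oc create secret generic assisted-deployment-pull-secret -n <cluster_name> \
    --type=kubernetes.io/dockerconfigjson \
    --from-file=.dockerconfigjson=<path_to_pull_secret>/pull-secret.json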
Create the ClusterDeployment custom resource: apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: <cluster_name> namespace: <cluster_name> spec: baseDomain: <base_domain> 1 clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: <cluster_name> version: v1beta1 clusterName: <cluster_name> platform: agentBareMetal: agentSelector: matchLabels: cluster-name: <cluster_name> pullSecretRef: name: assisted-deployment-pull-secret 1 The managed cluster's base domain. Create the KlusterletAddonConfig custom resource: apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterName: <cluster_name> clusterNamespace: <cluster_name> clusterLabels: cloud: auto-detect vendor: auto-detect applicationManager: enabled: true certPolicyController: enabled: false iamPolicyController: enabled: false policyController: enabled: true searchCollector: enabled: false 1 1 Set to true to enable KlusterletAddonConfig or false to disable the KlusterletAddonConfig. Keep searchCollector disabled. Create the ManagedCluster custom resource: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> spec: hubAcceptsClient: true Create the InfraEnv custom resource: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterRef: name: <cluster_name> namespace: <cluster_name> sshAuthorizedKey: <public_key> 1 agentLabels: 2 location: "<label-name>" pullSecretRef: name: assisted-deployment-pull-secret 1 Entered as plain text. You can use the public key to SSH into the target bare-metal host when it boots from the ISO. 2 Sets a label to match. The labels apply when the agents boot. Create the BareMetalHost custom resource: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <cluster_name> namespace: <cluster_name> annotations: inspect.metal3.io: disabled labels: infraenvs.agent-install.openshift.io: "<cluster_name>" spec: bootMode: "UEFI" bmc: address: <bmc_address> 1 disableCertificateVerification: true credentialsName: <cluster_name>-bmc-secret bootMACAddress: <mac_address> 2 automatedCleaningMode: disabled online: true 1 The baseboard management console address of the installation ISO on the target bare-metal host. 2 The MAC address of the target bare-metal host. Optionally, you can add bmac.agent-install.openshift.io/hostname: <host-name> as an annotation to set the managed cluster's hostname. If you don't add the annotation, the hostname will default to either a hostname from the DHCP server or local host. After you have created the custom resources, push the entire directory of generated custom resources to the Git repository you created for storing the custom resources. step To provision additional clusters, repeat this procedure for each cluster. 19.13.1. Configuring static IP addresses for managed clusters Optionally, after creating the AgentClusterInstall custom resource, you can configure static IP addresses for the managed clusters. Note You must create this custom resource before creating the ClusterDeployment custom resource. Prerequisites Deploy and configure the AgentClusterInstall custom resource. 
Procedure Create a NMStateConfig custom resource: apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: <cluster_name> namespace: <cluster_name> labels: sno-cluster-<cluster-name>: <cluster_name> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true address: - ip: <ip_address> 1 prefix-length: <public_network_prefix> 2 dhcp: false dns-resolver: config: server: - <dns_resolver> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <gateway> 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: "eth0" 5 macAddress: <mac_address> 6 1 The static IP address of the target bare-metal host. 2 The static IP address's subnet prefix for the target bare-metal host. 3 The DNS server for the target bare-metal host. 4 The gateway for the target bare-metal host. 5 Must match the name specified in the interfaces section. 6 The MAC address of the interface. When creating the BareMetalHost custom resource, ensure that one of its MAC addresses matches a MAC address in the NMStateConfig for the target bare-metal host. When creating the InfraEnv custom resource, reference the label from the NMStateConfig custom resource in the InfraEnv custom resource: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterRef: name: <cluster_name> namespace: <cluster_name> sshAuthorizedKey: <public_key> agentLabels: 1 location: "<label-name>" pullSecretRef: name: assisted-deployment-pull-secret nmStateConfigLabelSelector: matchLabels: sno-cluster-<cluster-name>: <cluster_name> # Match this label 1 Sets a label to match. The labels apply when the agents boot. 19.13.2. Automated Discovery image ISO process for provisioning clusters After you create the custom resources, the following actions happen automatically: A Discovery image ISO file is generated and booted on the target machine. When the ISO file successfully boots on the target machine, it reports the hardware information of the target machine. After all hosts are discovered, OpenShift Container Platform is installed. When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster. The requested add-on services are installed on the target cluster. The Discovery image ISO process finishes when the Agent custom resource is created on the hub for the managed cluster. 19.13.3. Checking the managed cluster status Ensure that cluster provisioning was successful by checking the cluster status. Prerequisites All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster. Procedure Check the status of the managed cluster: USD oc get managedcluster True indicates the managed cluster is ready. Check the agent status: USD oc get agent -n <cluster_name> Use the describe command to provide an in-depth description of the agent's condition. Statuses to be aware of include BackendError , InputError , ValidationsFailing , InstallationFailed , and AgentIsConnected . These statuses are relevant to the Agent and AgentClusterInstall custom resources. 
USD oc describe agent -n <cluster_name> Check the cluster provisioning status: USD oc get agentclusterinstall -n <cluster_name> Use the describe command to provide an in-depth description of the cluster provisioning status: USD oc describe agentclusterinstall -n <cluster_name> Check the status of the managed cluster's add-on services: USD oc get managedclusteraddon -n <cluster_name> Retrieve the authentication information of the kubeconfig file for the managed cluster: USD oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig 19.13.4. Configuring a managed cluster for a disconnected environment After you have completed the preceding procedure, follow these steps to configure the managed cluster for a disconnected environment. Prerequisites A disconnected installation of Red Hat Advanced Cluster Management (RHACM) 2.3. Host the rootfs and iso images on an HTTPD server. Procedure Create a ConfigMap containing the mirror registry config: apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: assisted-installer labels: app: assisted-service data: ca-bundle.crt: <certificate> 1 registries.conf: | 2 unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] location = <mirror_registry_url> 3 insecure = false mirror-by-digest-only = true 1 The mirror registry's certificate used when creating the mirror registry. 2 The configuration for the mirror registry. 3 The URL of the mirror registry. This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below: Example output apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: 'assisted-installer-mirror-config' osImages: - openshiftVersion: <ocp_version> rootfs: <rootfs_url> 1 url: <iso_url> 2 1 2 Must match the URLs of the HTTPD server. For disconnected installations, you must deploy an NTP clock that is reachable through the disconnected network. You can do this by configuring chrony to act as server, editing the /etc/chrony.conf file, and adding the following allowed IPv6 range: # Allow NTP client access from local network. #allow 192.168.0.0/16 local stratum 10 bindcmdaddress :: allow 2620:52:0:1310::/64 19.13.5. Configuring IPv6 addresses for a disconnected environment Optionally, when you are creating the AgentClusterInstall custom resource, you can configure IPv6 addresses for the managed clusters. 
Procedure In the AgentClusterInstall custom resource, modify the IP addresses in clusterNetwork and serviceNetwork for IPv6 addresses: apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: # Only include the annotation if using OVN, otherwise omit the annotation annotations: agent-install.openshift.io/install-config-overrides: '{"networking":{"networkType":"OVNKubernetes"}}' name: <cluster_name> namespace: <cluster_name> spec: clusterDeploymentRef: name: <cluster_name> imageSetRef: name: <cluster_image_set> networking: clusterNetwork: - cidr: "fd01::/48" hostPrefix: 64 machineNetwork: - cidr: <machine_network_cidr> serviceNetwork: - "fd02::/112" provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <public_key> Update the NMStateConfig custom resource with the IPv6 addresses you defined. 19.13.6. Troubleshooting the managed cluster Use this procedure to diagnose any installation issues that might occur with the managed clusters. Procedure Check the status of the managed cluster: USD oc get managedcluster Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h If the status in the AVAILABLE column is True , the managed cluster is being managed by the hub. If the status in the AVAILABLE column is Unknown , the managed cluster is not being managed by the hub. Use the following steps to continue checking to get more information. Check the AgentClusterInstall install status: USD oc get clusterdeployment -n <cluster_name> Example output NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h If the status in the INSTALLED column is false , the installation was unsuccessful. If the installation failed, enter the following command to review the status of the AgentClusterInstall resource: USD oc describe agentclusterinstall -n <cluster_name> <cluster_name> Resolve the errors and reset the cluster: Remove the cluster's managed cluster resource: USD oc delete managedcluster <cluster_name> Remove the cluster's namespace: USD oc delete namespace <cluster_name> This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the ManagedCluster CR deletion to complete before proceeding. Recreate the custom resources for the managed cluster. 19.14. Applying the RAN policies for monitoring cluster activity Zero touch provisioning (ZTP) uses Red Hat Advanced Cluster Management (RHACM) to apply the radio access network (RAN) policies using a policy-based governance approach to automatically monitor cluster activity. The policy generator (PolicyGen) is a Kustomize plugin that facilitates creating ACM policies from predefined custom resources. There are three main items: Policy Categorization, Source CR policy, and PolicyGenTemplate. PolicyGen relies on these to generate the policies and their placement bindings and rules. The following diagram shows how the RAN policy generator interacts with GitOps and ACM. RAN policies are categorized into three main groups: Common A policy that exists in the Common category is applied to all clusters to be represented by the site plan. Groups A policy that exists in the Groups category is applied to a group of clusters. Every group of clusters could have their own policies that exist under the Groups category. For example, Groups/group1 could have its own policies that are applied to the clusters belonging to group1 . 
Sites A policy that exists in the Sites category is applied to a specific cluster. Any cluster could have its own policies that exist in the Sites category. For example, Sites/cluster1 will have its own policies applied to cluster1 . The following diagram shows how policies are generated. 19.14.1. Applying source custom resource policies Source custom resource policies include the following: SR-IOV policies PTP policies Performance Add-on Operator policies MachineConfigPool policies SCTP policies You need to define the source custom resource that generates the ACM policy with consideration of possible overlay to its metadata or spec/data. For example, a common-namespace-policy contains a Namespace definition that exists in all managed clusters. This namespace is placed under the Common category and there are no changes for its spec or data across all clusters. Namespace policy example The following example shows the source custom resource for this namespace: apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator labels: openshift.io/run-level: "1" Example output The generated policy that applies this namespace includes the namespace as it is defined above without any change, as shown in this example: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: common-sriov-sub-ns-policy namespace: common-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: common-sriov-sub-ns-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: labels: openshift.io/run-level: "1" name: openshift-sriov-network-operator SRIOV policy example The following example shows a SriovNetworkNodePolicy definition that exists in different clusters with a different specification for each cluster. The example also shows the source custom resource for the SriovNetworkNodePolicy : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp namespace: openshift-sriov-network-operator spec: # The USD tells the policy generator to overlay/remove the spec.item in the generated policy. deviceType: USDdeviceType isRdma: false nicSelector: pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/worker: "" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName Example output The SriovNetworkNodePolicy name and namespace are the same for all clusters, so both are defined in the source SriovNetworkNodePolicy . However, the generated policy requires the USDdeviceType , USDnumVfs , as input parameters in order to adjust the policy for each cluster. 
The generated policy is shown in this example: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: site-du-sno-1-sriov-nnp-mh-policy namespace: sites-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: site-du-sno-1-sriov-nnp-mh-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - ens7f0 nodeSelector: node-role.kubernetes.io/worker: "" numVfs: 8 resourceName: du_mh Note Defining the required input parameters as USDvalue , for example USDdeviceType , is not mandatory. The USD tells the policy generator to overlay or remove the item from the generated policy. Otherwise, the value does not change. 19.14.2. The PolicyGenTemplate The PolicyGenTemplate.yaml file is a Custom Resource Definition (CRD) that tells PolicyGen where to categorize the generated policies and which items need to be overlaid. The following example shows the PolicyGenTemplate.yaml file: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno" namespace: "group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" sourceFiles: - fileName: ConsoleOperatorDisable.yaml policyName: "console-policy" - fileName: ClusterLogging.yaml policyName: "cluster-log-policy" spec: curation: curator: schedule: "30 3 * * *" collection: logs: type: "fluentd" fluentd: {} The group-du-ranGen.yaml file defines a group of policies under a group named group-du . This file defines a MachineConfigPool worker-du that is used as the node selector for any other policy defined in sourceFiles . An ACM policy is generated for every source file that exists in sourceFiles . And, a single placement binding and placement rule is generated to apply the cluster selection rule for group-du policies. Using the source file PtpConfigSlave.yaml as an example, the PtpConfigSlave has a definition of a PtpConfig custom resource (CR). The generated policy for the PtpConfigSlave example is named group-du-ptp-config-policy . The PtpConfig CR defined in the generated group-du-ptp-config-policy is named du-ptp-slave . The spec defined in PtpConfigSlave.yaml is placed under du-ptp-slave along with the other spec items defined under the source file. 
The following example shows the group-du-ptp-config-policy : apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..... 19.14.3. Considerations when creating custom resource policies The custom resources used to create the ACM policies should be defined with consideration of possible overlay to its metadata and spec/data. For example, if the custom resource metadata.name does not change between clusters then you should set the metadata.name value in the custom resource file. If the custom resource will have multiple instances in the same cluster, then the custom resource metadata.name must be defined in the policy template file. In order to apply the node selector for a specific machine config pool, you have to set the node selector value as USDmcp in order to let the policy generator overlay the USDmcp value with the defined mcp in the policy template. Subscription source files do not change. 19.14.4. Generating RAN policies Prerequisites Install Kustomize Install the Kustomize Policy Generator plug-in Procedure Configure the kustomization.yaml file to reference the policyGenerator.yaml file. The following example shows the PolicyGenerator definition: apiVersion: policyGenerator/v1 kind: PolicyGenerator metadata: name: acm-policy namespace: acm-policy-generator # The arguments should be given and defined as below with same order --policyGenTempPath= --sourcePath= --outPath= --stdout --customResources argsOneLiner: ./ranPolicyGenTempExamples ./sourcePolicies ./out true false Where: policyGenTempPath is the path to the policyGenTemp files. sourcePath : is the path to the source policies. outPath : is the path to save the generated ACM policies. stdout : If true , prints the generated policies to the console. customResources : If true generates the CRs from the sourcePolicies files without ACM policies. 
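The kustomization.yaml file itself is not shown above; a minimal sketch follows. It assumes that the PolicyGenerator definition is saved as policyGenerator.yaml in the same directory and that the plugin is discoverable through XDG_CONFIG_HOME, as in the test commands that follow.
$ cat <<'EOF' > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- policyGenerator.yaml
EOF
With this file in place, the kustomize build command in the next step picks up the generator and produces the policies.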
Test PolicyGen by running the following commands: USD cd cnf-features-deploy/ztp/ztp-policy-generator/ USD XDG_CONFIG_HOME=./ kustomize build --enable-alpha-plugins An out directory is created with the expected policies, as shown in this example: out ├── common │ ├── common-log-sub-ns-policy.yaml │ ├── common-log-sub-oper-policy.yaml │ ├── common-log-sub-policy.yaml │ ├── common-pao-sub-catalog-policy.yaml │ ├── common-pao-sub-ns-policy.yaml │ ├── common-pao-sub-oper-policy.yaml │ ├── common-pao-sub-policy.yaml │ ├── common-policies-placementbinding.yaml │ ├── common-policies-placementrule.yaml │ ├── common-ptp-sub-ns-policy.yaml │ ├── common-ptp-sub-oper-policy.yaml │ ├── common-ptp-sub-policy.yaml │ ├── common-sriov-sub-ns-policy.yaml │ ├── common-sriov-sub-oper-policy.yaml │ └── common-sriov-sub-policy.yaml ├── groups │ ├── group-du │ │ ├── group-du-mc-chronyd-policy.yaml │ │ ├── group-du-mc-mount-ns-policy.yaml │ │ ├── group-du-mcp-du-policy.yaml │ │ ├── group-du-mc-sctp-policy.yaml │ │ ├── group-du-policies-placementbinding.yaml │ │ ├── group-du-policies-placementrule.yaml │ │ ├── group-du-ptp-config-policy.yaml │ │ └── group-du-sriov-operconfig-policy.yaml │ └── group-sno-du │ ├── group-du-sno-policies-placementbinding.yaml │ ├── group-du-sno-policies-placementrule.yaml │ ├── group-sno-du-console-policy.yaml │ ├── group-sno-du-log-forwarder-policy.yaml │ └── group-sno-du-log-policy.yaml └── sites └── site-du-sno-1 ├── site-du-sno-1-policies-placementbinding.yaml ├── site-du-sno-1-policies-placementrule.yaml ├── site-du-sno-1-sriov-nn-fh-policy.yaml ├── site-du-sno-1-sriov-nnp-mh-policy.yaml ├── site-du-sno-1-sriov-nw-fh-policy.yaml ├── site-du-sno-1-sriov-nw-mh-policy.yaml └── site-du-sno-1-.yaml The common policies are flat because they will be applied to all clusters. However, the groups and sites have subdirectories for each group and site as they will be applied to different clusters. 19.15. Cluster provisioning Zero touch provisioning (ZTP) provisions clusters using a layered approach. The base components consist of Red Hat Enterprise Linux CoreOS (RHCOS), the basic operating system for the cluster, and OpenShift Container Platform. After these components are installed, the worker node can join the existing cluster. When the node has joined the existing cluster, the 5G RAN profile Operators are applied. The following diagram illustrates this architecture. The following RAN Operators are deployed on every cluster: Machine Config Precision Time Protocol (PTP) Performance Addon Operator SR-IOV Local Storage Operator Logging Operator 19.15.1. Machine Config Operator The Machine Config Operator enables system definitions and low-level system settings such as workload partitioning, NTP, and SCTP. This Operator is installed with OpenShift Container Platform. A performance profile and its created products are applied to a node according to an associated machine config pool (MCP). The MCP holds valuable information about the progress of applying the machine configurations created by performance addons that encompass kernel args, kube config, huge pages allocation, and deployment of the realtime kernel (rt-kernel). The performance addons controller monitors changes in the MCP and updates the performance profile status accordingly. 19.15.2. Performance Addon Operator The Performance Addon Operator provides the ability to enable advanced node performance tunings on a set of nodes. 
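As an illustration of the tuning that this Operator drives, a PerformanceProfile for a DU node might look like the following sketch. The CPU ranges, huge page count, and worker-du node selector are placeholders rather than recommended values.
$ cat <<'EOF' > performance-profile-du.yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: perf-profile-du
spec:
  cpu:
    isolated: "2-51"      # CPUs dedicated to latency-sensitive workloads (placeholder range)
    reserved: "0-1"       # CPUs reserved for housekeeping and management (placeholder range)
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - size: 1G
      count: 16           # placeholder count
  realTimeKernel:
    enabled: true         # deploy the realtime kernel on the selected nodes
  numa:
    topologyPolicy: restricted
  nodeSelector:
    node-role.kubernetes.io/worker-du: ""
EOF
In the ZTP flow, a CR of this kind is typically delivered through the generated policies rather than applied by hand.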
The Operator implements automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator uses the performance profile configuration to make these changes in an easier and more reliable way. The administrator can specify updating the kernel to rt-kernel , reserving CPUs for management workloads, and dedicating CPUs to running the application workloads. 19.15.3. SR-IOV Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. The SR-IOV Operator allows network interfaces to be virtualized and shared at a device level with networking functions running within the cluster. The SR-IOV Network Operator adds the SriovOperatorConfig.sriovnetwork.openshift.io CustomResourceDefinition resource. The Operator automatically creates a SriovOperatorConfig custom resource named default in the openshift-sriov-network-operator namespace. The default custom resource contains the SR-IOV Network Operator configuration for your cluster. 19.15.4. Precision Time Protocol Operator The Precision Time Protocol (PTP) Operator manages PTP, a protocol used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy. PTP support is divided between the kernel and user space. The clocks synchronized by PTP are organized in a master-worker hierarchy. The workers are synchronized to their masters, which may themselves be workers to their own masters. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. When a clock has only one port, it can be a master or a worker; such a clock is called an ordinary clock (OC). A clock with multiple ports can be a master on one port and a worker on another; such a clock is called a boundary clock (BC). The top-level master is called the grandmaster clock, which can be synchronized by using a Global Positioning System (GPS) time source. By using a GPS-based time source, disparate networks can be synchronized with a high degree of accuracy. 19.16. Creating ZTP custom resources for multiple managed clusters If you are installing multiple managed clusters, zero touch provisioning (ZTP) uses ArgoCD and SiteConfig to manage the processes that create the custom resources (CR) and generate and apply the policies for multiple clusters, in batches of no more than 100, using the GitOps approach. Installing and deploying the clusters is a two-stage process, as shown here: 19.16.1. Prerequisites for deploying the ZTP pipeline OpenShift Container Platform cluster version 4.8 or higher and the Red Hat OpenShift GitOps Operator are installed. Red Hat Advanced Cluster Management (RHACM) version 2.3 or above is installed. For disconnected environments, make sure your source data Git repository and ztp-site-generator container image are accessible from the hub cluster. If you want additional custom content, such as extra install manifests or custom resources (CR) for policies, add them to the /usr/src/hook/ztp/source-crs/extra-manifest/ directory. Similarly, you can add additional configuration CRs, as referenced from a PolicyGenTemplate , to the /usr/src/hook/ztp/source-crs/ directory. 
Create a Containerfile that adds your additional manifests to the Red Hat provided image, for example: FROM <registry fqdn>/ztp-site-generator:latest 1 COPY myInstallManifest.yaml /usr/src/hook/ztp/source-crs/extra-manifest/ COPY mySourceCR.yaml /usr/src/hook/ztp/source-crs/ 1 <registry fqdn> must point to a registry containing the ztp-site-generator container image provided by Red Hat. Build a new container image that includes these additional files: USD> podman build Containerfile.example 19.16.2. Installing the GitOps ZTP pipeline The procedures in this section tell you how to complete the following tasks: Prepare the Git repository you need to host site configuration data. Configure the hub cluster for generating the required installation and policy custom resources (CR). Deploy the managed clusters using zero touch provisioning (ZTP). 19.16.2.1. Preparing the ZTP Git repository Create a Git repository for hosting site configuration data. The zero touch provisioning (ZTP) pipeline requires read access to this repository. Procedure Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate custom resources (CR). Add pre-sync.yaml and post-sync.yaml from resource-hook-example/<policygentemplates>/ to the path for the PolicyGenTemplate CRs. Add pre-sync.yaml and post-sync.yaml from resource-hook-example/<siteconfig>/ to the path for the SiteConfig CRs. Note If your hub cluster operates in a disconnected environment, you must update the image for all four pre and post sync hook CRs. Apply the policygentemplates.ran.openshift.io and siteconfigs.ran.openshift.io CR definitions. 19.16.2.2. Preparing the hub cluster for ZTP You can configure your hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CR) for each site based on a zero touch provisioning (ZTP) GitOps flow. Procedure Install the Red Hat OpenShift GitOps Operator on your hub cluster. Extract the administrator password for ArgoCD: USD oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d Prepare the ArgoCD pipeline configuration: Extract the ArgoCD deployment CRs from the ZTP site generator container using the latest container image version: USD mkdir ztp USD podman run --rm -v `pwd`/ztp:/mnt/ztp:Z registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.9.0-1 /bin/bash -c "cp -ar /usr/src/hook/ztp/* /mnt/ztp/" The remaining steps in this section relate to the ztp/gitops-subscriptions/argocd/ directory. Modify the source values of the two ArgoCD applications, deployment/clusters-app.yaml and deployment/policies-app.yaml with appropriate URL, targetRevision branch, and path values. The path values must match those used in your Git repository. Modify deployment/clusters-app.yaml : apiVersion: v1 kind: Namespace metadata: name: clusters-sub --- apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: clusters namespace: openshift-gitops spec: destination: server: https://kubernetes.default.svc namespace: clusters-sub project: default source: path: ztp/gitops-subscriptions/argocd/resource-hook-example/siteconfig 1 repoURL: https://github.com/openshift-kni/cnf-features-deploy 2 targetRevision: master 3 syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true 1 The ztp/gitops-subscriptions/argocd/ file path that contains the siteconfig CRs for the clusters. 
2 The URL of the Git repository that contains the siteconfig custom resources that define site configuration for installing clusters. 3 The branch on the Git repository that contains the relevant site configuration data. Modify deployment/policies-app.yaml : apiVersion: v1 kind: Namespace metadata: name: policies-sub --- apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: policies namespace: openshift-gitops spec: destination: server: https://kubernetes.default.svc namespace: policies-sub project: default source: directory: recurse: true path: ztp/gitops-subscriptions/argocd/resource-hook-example/policygentemplates 1 repoURL: https://github.com/openshift-kni/cnf-features-deploy 2 targetRevision: master 3 syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true 1 The ztp/gitops-subscriptions/argocd/ file path that contains the policygentemplates CRs for the clusters. 2 The URL of the Git repository that contains the policygentemplates custom resources that specify configuration data for the site. 3 The branch on the Git repository that contains the relevant configuration data. To apply the pipeline configuration to your hub cluster, enter this command: USD oc apply -k ./deployment 19.16.3. Creating the site secrets Add the required secrets for the site to the hub cluster. These resources must be in a namespace with a name that matches the cluster name. Procedure Create a secret for authenticating to the site Baseboard Management Controller (BMC). Ensure the secret name matches the name used in the SiteConfig . In this example, the secret name is test-sno-bmh-secret : apiVersion: v1 kind: Secret metadata: name: test-sno-bmh-secret namespace: test-sno data: password: dGVtcA== username: cm9vdA== type: Opaque Create the pull secret for the site. The pull secret must contain all credentials necessary for installing OpenShift and all add-on Operators. In this example, the secret name is assisted-deployment-pull-secret : apiVersion: v1 kind: Secret metadata: name: assisted-deployment-pull-secret namespace: test-sno type: kubernetes.io/dockerconfigjson data: .dockerconfigjson: <Your pull secret base64 encoded> Note The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace. 19.16.4. Creating the SiteConfig custom resources ArgoCD acts as the engine for the GitOps method of site deployment. After completing a site plan that contains the required custom resources for the site installation, a policy generator creates the manifests and applies them to the hub cluster. Procedure Create one or more SiteConfig custom resources, site-config.yaml files, that contains the site-plan data for the clusters. 
For example: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "test-sno" namespace: "test-sno" spec: baseDomain: "clus2.t5g.lab.eng.bos.redhat.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.9" sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDB3dwhI5X0ZxGBb9VK7wclcPHLc8n7WAyKjTNInFjYNP9J+Zoc/ii+l3YbGUTuqilDwZN5rVIwBux2nUyVXDfaM5kPd9kACmxWtfEWTyVRootbrNWwRfKuC2h6cOd1IlcRBM1q6IzJ4d7+JVoltAxsabqLoCbK3svxaZoKAaK7jdGG030yvJzZaNM4PiTy39VQXXkCiMDmicxEBwZx1UsA8yWQsiOQ5brod9KQRXWAAST779gbvtgXR2L+MnVNROEHf1nEjZJwjwaHxoDQYHYKERxKRHlWFtmy5dNT6BbvOpJ2e5osDFPMEd41d2mUJTfxXiC1nvyjk9Irf8YJYnqJgBIxi0IxEllUKH7mTdKykHiPrDH5D2pRlp+Donl4n+sw6qoDc/3571O93+RQ6kUSAgAsvWiXrEfB/7kGgAa/BD5FeipkFrbSEpKPVu+gue1AQeJcz9BuLqdyPUQj2VUySkSg0FuGbG7fxkKeF1h3Sga7nuDOzRxck4I/8Z7FxMF/e8DmaBpgHAUIfxXnRqAImY9TyAZUEMT5ZPSvBRZNNmLbfex1n3NLcov/GEpQOqEYcjG5y57gJ60/av4oqjcVmgtaSOOAS0kZ3y9YDhjsaOcpmRYYijJn8URAH7NrW8EZsvAoF6GUt6xHq5T258c6xSYUm5L0iKvBqrOW9EjbLw== [email protected]" clusters: - clusterName: "test-sno" clusterType: "sno" clusterProfile: "du" clusterLabels: group-du-sno: "" common: true sites : "test-sno" clusterNetwork: - cidr: 1001:db9::/48 hostPrefix: 64 machineNetwork: - cidr: 2620:52:0:10e7::/64 serviceNetwork: - 1001:db7::/112 additionalNTPSources: - 2620:52:0:1310::1f6 nodes: - hostName: "test-sno.clus2.t5g.lab.eng.bos.redhat.com" bmcAddress: "idrac-virtualmedia+https://[2620:52::10e7:f602:70ff:fee4:f4e2]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "test-sno-bmh-secret" bmcDisableCertificateVerification: true 1 bootMACAddress: "0C:42:A1:8A:74:EC" bootMode: "UEFI" rootDeviceHints: hctl: '0:1:0' cpuset: "0-1,52-53" nodeNetwork: interfaces: - name: eno1 macAddress: "0C:42:A1:8A:74:EC" config: interfaces: - name: eno1 type: ethernet state: up macAddress: "0C:42:A1:8A:74:EC" ipv4: enabled: false ipv6: enabled: true address: - ip: 2620:52::10e7:e42:a1ff:fe8a:900 prefix-length: 64 dns-resolver: config: search: - clus2.t5g.lab.eng.bos.redhat.com server: - 2620:52:0:1310::1f6 routes: config: - destination: ::/0 -hop-interface: eno1 -hop-address: 2620:52:0:10e7::fc table-id: 254 1 If you are using UEFI SecureBoot , add this line to prevent failures due to invalid or local certificates. Save the files and push them to the zero touch provisioning (ZTP) Git repository accessible from the hub cluster and defined as a source repository of the ArgoCD application. ArgoCD detects that the application is out of sync. Upon sync, either automatic or manual, ArgoCD synchronizes the PolicyGenTemplate to the hub cluster and launches the associated resource hooks. These hooks are responsible for generating the policy wrapped configuration CRs that apply to the spoke cluster. The resource hooks convert the site definitions to installation custom resources and applies them to the hub cluster: Namespace - Unique per site AgentClusterInstall BareMetalHost ClusterDeployment InfraEnv NMStateConfig ExtraManifestsConfigMap - Extra manifests. The additional manifests include workload partitioning, chronyd, mountpoint hiding, sctp enablement, and more. ManagedCluster KlusterletAddonConfig Red Hat Advanced Cluster Management (RHACM) (ACM) deploys the hub cluster. 19.16.5. Creating the PolicyGenTemplates Use the following procedure to create the PolicyGenTemplates you will need for generating policies in your Git repository for the hub cluster. 
Procedure Create the PolicyGenTemplates and save them to the zero touch provisioning (ZTP) Git repository accessible from the hub cluster and defined as a source repository of the ArgoCD application. ArgoCD detects that the application is out of sync. Upon sync, either automatic or manual, ArgoCD applies the new PolicyGenTemplate to the hub cluster and launches the associated resource hooks. These hooks are responsible for generating the policy wrapped configuration CRs that apply to the spoke cluster and perform the following actions: Create the Red Hat Advanced Cluster Management (RHACM) (ACM) policies according to the basic distributed unit (DU) profile and required customizations. Apply the generated policies to the hub cluster. The ZTP process creates policies that direct ACM to apply the desired configuration to the cluster nodes. 19.16.6. Checking the installation status The ArgoCD pipeline detects the SiteConfig and PolicyGenTemplate custom resources (CRs) in the Git repository and syncs them to the hub cluster. In the process, it generates installation and policy CRs and applies them to the hub cluster. You can monitor the progress of this synchronization in the ArgoCD dashboard. Procedure Monitor the progress of cluster installation using the following commands: USD export CLUSTER=<cluster_name> USD oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq USD curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]' Use the Red Hat Advanced Cluster Management (RHACM) (ACM) dashboard to monitor the progress of policy reconciliation. 19.16.7. Site cleanup To remove a site and the associated installation and policy custom resources (CRs), remove the SiteConfig and site-specific PolicyGenTemplate CRs from the Git repository. The pipeline hooks remove the generated CRs. Note Before removing a SiteConfig CR you must detach the cluster from ACM. 19.16.7.1. Removing the ArgoCD pipeline Use the following procedure if you want to remove the ArgoCD pipeline and all generated artifacts. Procedure Detach all clusters from ACM. Delete all SiteConfig and PolicyGenTemplate custom resources (CRs) from your Git repository. Delete the following namespaces: All policy namespaces: USD oc get policy -A clusters-sub policies-sub Process the directory using the Kustomize tool: USD oc delete -k cnf-features-deploy/ztp/gitops-subscriptions/argocd/deployment 19.17. Troubleshooting GitOps ZTP As noted, the ArgoCD pipeline synchronizes the SiteConfig and PolicyGenTemplate custom resources (CR) from the Git repository to the hub cluster. During this process, post-sync hooks create the installation and policy CRs that are also applied to the hub cluster. Use the following procedures to troubleshoot issues that might occur in this process. 19.17.1. Validating the generation of installation CRs SiteConfig applies Installation custom resources (CR) to the hub cluster in a namespace with the name matching the site name. To check the status, enter the following command: USD oc get AgentClusterInstall -n <cluster_name> If no object is returned, use the following procedure to troubleshoot the ArgoCD pipeline flow from SiteConfig to the installation CRs. 
Procedure Check the synchronization of the SiteConfig to the hub cluster using either of the following commands: USD oc get siteconfig -A or USD oc get siteconfig -n clusters-sub If the SiteConfig is missing, one of the following situations has occurred: The clusters application failed to synchronize the CR from the Git repository to the hub. Use the following command to verify this: USD oc describe -n openshift-gitops application clusters Check for Status: Synced and that the Revision: is the SHA of the commit you pushed to the subscribed repository. The pre-sync hook failed, possibly due to a failure to pull the container image. Check the ArgoCD dashboard for the status of the pre-sync job in the clusters application. Verify the post hook job ran: USD oc describe job -n clusters-sub siteconfig-post If successful, the returned output indicates succeeded: 1 . If the job fails, ArgoCD retries it. In some cases, the first pass will fail and the second pass will indicate that the job passed. Check for errors in the post hook job: USD oc get pod -n clusters-sub Note the name of the siteconfig-post-xxxxx pod: USD oc logs -n clusters-sub siteconfig-post-xxxxx If the logs indicate errors, correct the conditions and push the corrected SiteConfig or PolicyGenTemplate to the Git repository. 19.17.2. Validating the generation of policy CRs ArgoCD generates the policy custom resources (CRs) in the same namespace as the PolicyGenTemplate from which they were created. The same troubleshooting flow applies to all policy CRs generated from PolicyGenTemplates regardless of whether they are common, group, or site based. To check the status of the policy CRs, enter the following commands: USD export NS=<namespace> USD oc get policy -n USDNS The returned output displays the expected set of policy wrapped CRs. If no object is returned, use the following procedure to troubleshoot the ArgoCD pipeline flow from SiteConfig to the policy CRs. Procedure Check the synchronization of the PolicyGenTemplate to the hub cluster: USD oc get policygentemplate -A or USD oc get policygentemplate -n USDNS If the PolicyGenTemplate is not synchronized, one of the following situations has occurred: The clusters application failed to synchronize the CR from the Git repository to the hub. Use the following command to verify this: USD oc describe -n openshift-gitops application clusters Check for Status: Synced and that the Revision: is the SHA of the commit you pushed to the subscribed repository. The pre-sync hook failed, possibly due to a failure to pull the container image. Check the ArgoCD dashboard for the status of the pre-sync job in the clusters application. Ensure the policies were copied to the cluster namespace. When ACM recognizes that policies apply to a ManagedCluster , ACM applies the policy CR objects to the cluster namespace: USD oc get policy -n <cluster_name> ACM copies all applicable common, group, and site policies here. The policy names are <policyNamespace> and <policyName> . Check the placement rule for any policies not copied to the cluster namespace. The matchSelector in the PlacementRule for those policies should match the labels on the ManagedCluster : USD oc get placementrule -n USDNS Make a note of the PlacementRule name for the missing common, group, or site policy: oc get placementrule -n USDNS <placmentRuleName> -o yaml The status decisions value should include your cluster name. The key value of the matchSelector in the spec should match the labels on your managed cluster. 
Check the labels on ManagedCluster : oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq Example apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: group-test1-policies-placementrules namespace: group-test1-policies spec: clusterSelector: matchExpressions: - key: group-test1 operator: In values: - "" status: decisions: - clusterName: <cluster_name> clusterNamespace: <cluster_name> Ensure all policies are compliant: oc get policy -n USDCLUSTER If the Namespace, OperatorGroup, and Subscription policies are compliant but the Operator configuration policies are not, it is likely that the Operators did not install.
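If the compliance output suggests that the Operators did not install, you can inspect the Operator Lifecycle Manager status directly on the spoke cluster. The following is a minimal sketch; the kubeconfig path is an example, and openshift-ptp is just one of the Operator namespaces used by the DU profile:

# Point oc at the spoke cluster (example path)
export KUBECONFIG=./test-sno-kubeconfig
# Check that each Subscription resolved to a ClusterServiceVersion
oc get subscriptions.operators.coreos.com -A
# Verify that the ClusterServiceVersions reached the Succeeded phase
oc get csv -A
# Review recent events in one of the Operator namespaces
oc get events -n openshift-ptp --sort-by='.lastTimestamp'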
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "export ISO_IMAGE_NAME=<iso_image_name> 1", "export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1", "export OCP_VERSION=<ocp_version> 1", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}", "wget http://USD(hostname)/USD{ISO_IMAGE_NAME}", "Saving to: rhcos-4.9.0-fc.1-x86_64-live.x86_64.iso rhcos-4.9.0-fc.1-x86_64- 11%[====> 
] 10.01M 4.71MB/s", "oc patch hiveconfig hive --type merge -p '{\"spec\":{\"targetNamespace\":\"hive\",\"logLevel\":\"debug\",\"featureGates\":{\"custom\":{\"enabled\":[\"AlphaAgentInstallStrategy\"]},\"featureSet\":\"Custom\"}}}'", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"watchAllNamespaces\": true }}'", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: databaseStorage: accessModes: - ReadWriteOnce resources: requests: storage: <db_volume_size> 1 filesystemStorage: accessModes: - ReadWriteOnce resources: requests: storage: <fs_volume_size> 2 osImages: 3 - openshiftVersion: \"<ocp_version>\" 4 version: \"<ocp_release_version>\" 5 url: \"<iso_url>\" 6 rootFSUrl: \"<root_fs_url>\" 7 cpuArchitecture: \"x86_64\"", "oc create -f agent_service_config.yaml", "agentserviceconfig.agent-install.openshift.io/agent created", "console-openshift-console.apps.hub-cluster.internal.domain.com api.hub-cluster.internal.domain.com", "console-openshift-console.apps.sno-managed-cluster-1.internal.domain.com api.sno-managed-cluster-1.internal.domain.com", "adm release mirror -a <pull_secret.json> --from=quay.io/openshift-release-dev/ocp-release:{{ mirror_version_spoke_release }} --to={{ provisioner_cluster_registry }}/ocp4 --to-release-image={{ provisioner_cluster_registry }}/ocp4:{{ mirror_version_spoke_release }}", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.9.0-rc.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64 2", "apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2", "apiVersion: v1 data: password: <bmc_password> 1 username: <bmc_username> 2 kind: Secret metadata: name: <cluster_name>-bmc-secret namespace: <cluster_name> type: Opaque", "apiVersion: v1 data: .dockerconfigjson: <pull_secret> 1 kind: Secret metadata: name: assisted-deployment-pull-secret namespace: <cluster_name> type: kubernetes.io/dockerconfigjson", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: # Only include the annotation if using OVN, otherwise omit the annotation annotations: agent-install.openshift.io/install-config-overrides: '{\"networking\":{\"networkType\":\"OVNKubernetes\"}}' name: <cluster_name> namespace: <cluster_name> spec: clusterDeploymentRef: name: <cluster_name> imageSetRef: name: <cluster_image_set> 1 networking: clusterNetwork: - cidr: <cluster_network_cidr> 2 hostPrefix: 23 machineNetwork: - cidr: <machine_network_cidr> 3 serviceNetwork: - <service_network_cidr> 4 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <public_key> 5", "apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: <cluster_name> namespace: <cluster_name> spec: baseDomain: <base_domain> 1 clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: <cluster_name> version: v1beta1 clusterName: <cluster_name> platform: agentBareMetal: agentSelector: matchLabels: cluster-name: <cluster_name> pullSecretRef: name: assisted-deployment-pull-secret", "apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterName: <cluster_name> clusterNamespace: <cluster_name> clusterLabels: cloud: auto-detect vendor: auto-detect applicationManager: enabled: true certPolicyController: enabled: false iamPolicyController: enabled: false policyController: enabled: true 
searchCollector: enabled: false 1", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> spec: hubAcceptsClient: true", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterRef: name: <cluster_name> namespace: <cluster_name> sshAuthorizedKey: <public_key> 1 agentLabels: 2 location: \"<label-name>\" pullSecretRef: name: assisted-deployment-pull-secret", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <cluster_name> namespace: <cluster_name> annotations: inspect.metal3.io: disabled labels: infraenvs.agent-install.openshift.io: \"<cluster_name>\" spec: bootMode: \"UEFI\" bmc: address: <bmc_address> 1 disableCertificateVerification: true credentialsName: <cluster_name>-bmc-secret bootMACAddress: <mac_address> 2 automatedCleaningMode: disabled online: true", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: <cluster_name> namespace: <cluster_name> labels: sno-cluster-<cluster-name>: <cluster_name> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true address: - ip: <ip_address> 1 prefix-length: <public_network_prefix> 2 dhcp: false dns-resolver: config: server: - <dns_resolver> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <gateway> 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: \"eth0\" 5 macAddress: <mac_address> 6", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterRef: name: <cluster_name> namespace: <cluster_name> sshAuthorizedKey: <public_key> agentLabels: 1 location: \"<label-name>\" pullSecretRef: name: assisted-deployment-pull-secret nmStateConfigLabelSelector: matchLabels: sno-cluster-<cluster-name>: <cluster_name> # Match this label", "oc get managedcluster", "oc get agent -n <cluster_name>", "oc describe agent -n <cluster_name>", "oc get agentclusterinstall -n <cluster_name>", "oc describe agentclusterinstall -n <cluster_name>", "oc get managedclusteraddon -n <cluster_name>", "oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig", "apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: assisted-installer labels: app: assisted-service data: ca-bundle.crt: <certificate> 1 registries.conf: | 2 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] location = <mirror_registry_url> 3 insecure = false mirror-by-digest-only = true", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: 'assisted-installer-mirror-config' osImages: - openshiftVersion: <ocp_version> rootfs: <rootfs_url> 1 url: <iso_url> 2", "Allow NTP client access from local network. 
#allow 192.168.0.0/16 local stratum 10 bindcmdaddress :: allow 2620:52:0:1310::/64", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: # Only include the annotation if using OVN, otherwise omit the annotation annotations: agent-install.openshift.io/install-config-overrides: '{\"networking\":{\"networkType\":\"OVNKubernetes\"}}' name: <cluster_name> namespace: <cluster_name> spec: clusterDeploymentRef: name: <cluster_name> imageSetRef: name: <cluster_image_set> networking: clusterNetwork: - cidr: \"fd01::/48\" hostPrefix: 64 machineNetwork: - cidr: <machine_network_cidr> serviceNetwork: - \"fd02::/112\" provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <public_key>", "oc get managedcluster", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h", "oc get clusterdeployment -n <cluster_name>", "NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h", "oc describe agentclusterinstall -n <cluster_name> <cluster_name>", "oc delete managedcluster <cluster_name>", "oc delete namespace <cluster_name>", "apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator labels: openshift.io/run-level: \"1\"", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: common-sriov-sub-ns-policy namespace: common-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: common-sriov-sub-ns-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: labels: openshift.io/run-level: \"1\" name: openshift-sriov-network-operator", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp namespace: openshift-sriov-network-operator spec: # The USD tells the policy generator to overlay/remove the spec.item in the generated policy. 
deviceType: USDdeviceType isRdma: false nicSelector: pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/worker: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: site-du-sno-1-sriov-nnp-mh-policy namespace: sites-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: site-du-sno-1-sriov-nnp-mh-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - ens7f0 nodeSelector: node-role.kubernetes.io/worker: \"\" numVfs: 8 resourceName: du_mh", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno\" namespace: \"group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: ConsoleOperatorDisable.yaml policyName: \"console-policy\" - fileName: ClusterLogging.yaml policyName: \"cluster-log-policy\" spec: curation: curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..", "apiVersion: policyGenerator/v1 kind: PolicyGenerator metadata: name: acm-policy namespace: acm-policy-generator The arguments should be given and defined as below with same order --policyGenTempPath= --sourcePath= --outPath= --stdout --customResources argsOneLiner: ./ranPolicyGenTempExamples ./sourcePolicies ./out true false", "cd cnf-features-deploy/ztp/ztp-policy-generator/", "XDG_CONFIG_HOME=./ kustomize build --enable-alpha-plugins", "out ├── common │ ├── common-log-sub-ns-policy.yaml │ ├── common-log-sub-oper-policy.yaml │ ├── common-log-sub-policy.yaml │ ├── common-pao-sub-catalog-policy.yaml │ ├── common-pao-sub-ns-policy.yaml │ ├── common-pao-sub-oper-policy.yaml │ ├── common-pao-sub-policy.yaml │ ├── common-policies-placementbinding.yaml │ ├── 
common-policies-placementrule.yaml │ ├── common-ptp-sub-ns-policy.yaml │ ├── common-ptp-sub-oper-policy.yaml │ ├── common-ptp-sub-policy.yaml │ ├── common-sriov-sub-ns-policy.yaml │ ├── common-sriov-sub-oper-policy.yaml │ └── common-sriov-sub-policy.yaml ├── groups │ ├── group-du │ │ ├── group-du-mc-chronyd-policy.yaml │ │ ├── group-du-mc-mount-ns-policy.yaml │ │ ├── group-du-mcp-du-policy.yaml │ │ ├── group-du-mc-sctp-policy.yaml │ │ ├── group-du-policies-placementbinding.yaml │ │ ├── group-du-policies-placementrule.yaml │ │ ├── group-du-ptp-config-policy.yaml │ │ └── group-du-sriov-operconfig-policy.yaml │ └── group-sno-du │ ├── group-du-sno-policies-placementbinding.yaml │ ├── group-du-sno-policies-placementrule.yaml │ ├── group-sno-du-console-policy.yaml │ ├── group-sno-du-log-forwarder-policy.yaml │ └── group-sno-du-log-policy.yaml └── sites └── site-du-sno-1 ├── site-du-sno-1-policies-placementbinding.yaml ├── site-du-sno-1-policies-placementrule.yaml ├── site-du-sno-1-sriov-nn-fh-policy.yaml ├── site-du-sno-1-sriov-nnp-mh-policy.yaml ├── site-du-sno-1-sriov-nw-fh-policy.yaml ├── site-du-sno-1-sriov-nw-mh-policy.yaml └── site-du-sno-1-.yaml", "FROM <registry fqdn>/ztp-site-generator:latest 1 COPY myInstallManifest.yaml /usr/src/hook/ztp/source-crs/extra-manifest/ COPY mySourceCR.yaml /usr/src/hook/ztp/source-crs/", "USD> podman build Containerfile.example", "oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\\.password}' | base64 -d", "mkdir ztp podman run --rm -v `pwd`/ztp:/mnt/ztp:Z registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.9.0-1 /bin/bash -c \"cp -ar /usr/src/hook/ztp/* /mnt/ztp/\"", "apiVersion: v1 kind: Namespace metadata: name: clusters-sub --- apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: clusters namespace: openshift-gitops spec: destination: server: https://kubernetes.default.svc namespace: clusters-sub project: default source: path: ztp/gitops-subscriptions/argocd/resource-hook-example/siteconfig 1 repoURL: https://github.com/openshift-kni/cnf-features-deploy 2 targetRevision: master 3 syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true", "apiVersion: v1 kind: Namespace metadata: name: policies-sub --- apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: policies namespace: openshift-gitops spec: destination: server: https://kubernetes.default.svc namespace: policies-sub project: default source: directory: recurse: true path: ztp/gitops-subscriptions/argocd/resource-hook-example/policygentemplates 1 repoURL: https://github.com/openshift-kni/cnf-features-deploy 2 targetRevision: master 3 syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true", "oc apply -k ./deployment", "apiVersion: v1 kind: Secret metadata: name: test-sno-bmh-secret namespace: test-sno data: password: dGVtcA== username: cm9vdA== type: Opaque", "apiVersion: v1 kind: Secret metadata: name: assisted-deployment-pull-secret namespace: test-sno type: kubernetes.io/dockerconfigjson data: .dockerconfigjson: <Your pull secret base64 encoded>", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"test-sno\" namespace: \"test-sno\" spec: baseDomain: \"clus2.t5g.lab.eng.bos.redhat.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.9\" sshPublicKey: \"ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQDB3dwhI5X0ZxGBb9VK7wclcPHLc8n7WAyKjTNInFjYNP9J+Zoc/ii+l3YbGUTuqilDwZN5rVIwBux2nUyVXDfaM5kPd9kACmxWtfEWTyVRootbrNWwRfKuC2h6cOd1IlcRBM1q6IzJ4d7+JVoltAxsabqLoCbK3svxaZoKAaK7jdGG030yvJzZaNM4PiTy39VQXXkCiMDmicxEBwZx1UsA8yWQsiOQ5brod9KQRXWAAST779gbvtgXR2L+MnVNROEHf1nEjZJwjwaHxoDQYHYKERxKRHlWFtmy5dNT6BbvOpJ2e5osDFPMEd41d2mUJTfxXiC1nvyjk9Irf8YJYnqJgBIxi0IxEllUKH7mTdKykHiPrDH5D2pRlp+Donl4n+sw6qoDc/3571O93+RQ6kUSAgAsvWiXrEfB/7kGgAa/BD5FeipkFrbSEpKPVu+gue1AQeJcz9BuLqdyPUQj2VUySkSg0FuGbG7fxkKeF1h3Sga7nuDOzRxck4I/8Z7FxMF/e8DmaBpgHAUIfxXnRqAImY9TyAZUEMT5ZPSvBRZNNmLbfex1n3NLcov/GEpQOqEYcjG5y57gJ60/av4oqjcVmgtaSOOAS0kZ3y9YDhjsaOcpmRYYijJn8URAH7NrW8EZsvAoF6GUt6xHq5T258c6xSYUm5L0iKvBqrOW9EjbLw== [email protected]\" clusters: - clusterName: \"test-sno\" clusterType: \"sno\" clusterProfile: \"du\" clusterLabels: group-du-sno: \"\" common: true sites : \"test-sno\" clusterNetwork: - cidr: 1001:db9::/48 hostPrefix: 64 machineNetwork: - cidr: 2620:52:0:10e7::/64 serviceNetwork: - 1001:db7::/112 additionalNTPSources: - 2620:52:0:1310::1f6 nodes: - hostName: \"test-sno.clus2.t5g.lab.eng.bos.redhat.com\" bmcAddress: \"idrac-virtualmedia+https://[2620:52::10e7:f602:70ff:fee4:f4e2]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"test-sno-bmh-secret\" bmcDisableCertificateVerification: true 1 bootMACAddress: \"0C:42:A1:8A:74:EC\" bootMode: \"UEFI\" rootDeviceHints: hctl: '0:1:0' cpuset: \"0-1,52-53\" nodeNetwork: interfaces: - name: eno1 macAddress: \"0C:42:A1:8A:74:EC\" config: interfaces: - name: eno1 type: ethernet state: up macAddress: \"0C:42:A1:8A:74:EC\" ipv4: enabled: false ipv6: enabled: true address: - ip: 2620:52::10e7:e42:a1ff:fe8a:900 prefix-length: 64 dns-resolver: config: search: - clus2.t5g.lab.eng.bos.redhat.com server: - 2620:52:0:1310::1f6 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 2620:52:0:10e7::fc table-id: 254", "export CLUSTER=<cluster_name>", "oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq", "curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'", "oc get policy -A", "oc delete -k cnf-features-deploy/ztp/gitops-subscriptions/argocd/deployment", "oc get AgentClusterInstall -n <cluster_name>", "oc get siteconfig -A", "oc get siteconfig -n clusters-sub", "oc describe -n openshift-gitops application clusters", "oc describe job -n clusters-sub siteconfig-post", "oc get pod -n clusters-sub", "oc logs -n clusters-sub siteconfig-post-xxxxx", "export NS=<namespace>", "oc get policy -n USDNS", "oc get policygentemplate -A", "oc get policygentemplate -n USDNS", "oc describe -n openshift-gitops application clusters", "oc get policy -n <cluster_name>", "oc get placementrule -n USDNS", "get placementrule -n USDNS <placmentRuleName> -o yaml", "get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: group-test1-policies-placementrules namespace: group-test1-policies spec: clusterSelector: matchExpressions: - key: group-test1 operator: In values: - \"\" status: decisions: - clusterName: <cluster_name> clusterNamespace: <cluster_name>", "get policy -n USDCLUSTER" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/ztp-deploying-disconnected
Chapter 15. Azure Storage Blob Sink
Chapter 15. Azure Storage Blob Sink Upload data to Azure Storage Blob. Important The Azure Storage Blob Sink Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The Kamelet expects the following headers to be set: file / ce-file : as the file name to upload If the header won't be set the exchange ID will be used as file name. 15.1. Configuration Options The following table summarizes the configuration options available for the azure-storage-blob-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The Azure Storage Blob access Key. string accountName * Account Name The Azure Storage Blob account name. string containerName * Container Name The Azure Storage Blob container name. string credentialType Credential Type Determines the credential strategy to adopt. Possible values are SHARED_ACCOUNT_KEY, SHARED_KEY_CREDENTIAL and AZURE_IDENTITY string "SHARED_ACCOUNT_KEY" operation Operation Name The operation to perform. string "uploadBlockBlob" Note Fields marked with an asterisk (*) are mandatory. 15.2. Dependencies At runtime, the azure-storage-blob-sink Kamelet relies upon the presence of the following dependencies: camel:azure-storage-blob camel:kamelet 15.3. Usage This section describes how you can use the azure-storage-blob-sink . 15.3.1. Knative Sink You can use the azure-storage-blob-sink Kamelet as a Knative sink by binding it to a Knative object. azure-storage-blob-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: "The Access Key" accountName: "The Account Name" containerName: "The Container Name" 15.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 15.3.1.2. Procedure for using the cluster CLI Save the azure-storage-blob-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f azure-storage-blob-sink-binding.yaml 15.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel azure-storage-blob-sink -p "sink.accessKey=The Access Key" -p "sink.accountName=The Account Name" -p "sink.containerName=The Container Name" This command creates the KameletBinding in the current namespace on the cluster. 15.3.2. Kafka Sink You can use the azure-storage-blob-sink Kamelet as a Kafka sink by binding it to a Kafka topic. 
azure-storage-blob-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: "The Access Key" accountName: "The Account Name" containerName: "The Container Name" 15.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 15.3.2.2. Procedure for using the cluster CLI Save the azure-storage-blob-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f azure-storage-blob-sink-binding.yaml 15.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-blob-sink -p "sink.accessKey=The Access Key" -p "sink.accountName=The Account Name" -p "sink.containerName=The Container Name" This command creates the KameletBinding in the current namespace on the cluster. 15.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/azure-storage-blob-sink.kamelet.yaml
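Because the Kamelet falls back to the exchange ID when the file header is not set, you may want to set the header explicitly in the binding. The following sketch assumes the insert-header-action Kamelet is available in your Kamelet catalog and uses an example file name; it is one possible way to set the header, not a required part of the sink configuration:

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: azure-storage-blob-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  steps:
    # Sets the "file" header so the uploaded blob gets a predictable name
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-header-action
      properties:
        name: "file"
        value: "my-data.txt"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: azure-storage-blob-sink
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      containerName: "The Container Name"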
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" containerName: \"The Container Name\"", "apply -f azure-storage-blob-sink-binding.yaml", "kamel bind channel:mychannel azure-storage-blob-sink -p \"sink.accessKey=The Access Key\" -p \"sink.accountName=The Account Name\" -p \"sink.containerName=The Container Name\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" containerName: \"The Container Name\"", "apply -f azure-storage-blob-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-blob-sink -p \"sink.accessKey=The Access Key\" -p \"sink.accountName=The Account Name\" -p \"sink.containerName=The Container Name\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/azure-storage-blob-sink
Chapter 3. Managing roles on the Ceph dashboard
Chapter 3. Managing roles on the Ceph dashboard As a storage administrator, you can create, edit, clone, and delete roles on the dashboard. By default, there are eight system roles. You can create custom roles and give permissions to those roles. These roles can be assigned to users based on the requirements. This section covers the following administrative tasks: User roles and permissions on the Ceph dashboard . Creating roles on the Ceph dashboard . Editing roles on the Ceph dashboard . Cloning roles on the Ceph dashboard . Deleting roles on the Ceph dashboard . 3.1. User roles and permissions on the Ceph dashboard User accounts are associated with a set of roles that define the specific dashboard functionality which can be accessed. The Red Hat Ceph Storage dashboard functionality or modules are grouped within a security scope. Security scopes are predefined and static. The current available security scopes on the Red Hat Ceph Storage dashboard are: cephfs : Includes all features related to CephFS management. config-opt : Includes all features related to management of Ceph configuration options. dashboard-settings : Allows to edit the dashboard settings. grafana : Include all features related to Grafana proxy. hosts : Includes all features related to the Hosts menu entry. log : Includes all features related to Ceph logs management. manager : Includes all features related to Ceph manager management. monitor : Includes all features related to Ceph monitor management. nfs-ganesha : Includes all features related to NFS-Ganesha management. osd : Includes all features related to OSD management. pool : Includes all features related to pool management. prometheus : Include all features related to Prometheus alert management. rbd-image : Includes all features related to RBD image management. rbd-mirroring : Includes all features related to RBD mirroring management. rgw : Includes all features related to Ceph object gateway (RGW) management. A role specifies a set of mappings between a security scope and a set of permissions. There are four types of permissions : Read Create Update Delete The list of system roles are: administrator : Allows full permissions for all security scopes. block-manager : Allows full permissions for RBD-image and RBD-mirroring scopes. cephfs-manager : Allows full permissions for the Ceph file system scope. cluster-manager : Allows full permissions for the hosts, OSDs, monitor, manager, and config-opt scopes. ganesha-manager : Allows full permissions for the NFS-Ganesha scope. pool-manager : Allows full permissions for the pool scope. read-only : Allows read permission for all security scopes except the dashboard settings and config-opt scopes. rgw-manager : Allows full permissions for the Ceph object gateway scope. For example, you need to provide rgw-manager access to the users for all Ceph object gateway operations. Additional Resources For creating users on the Ceph dashboard, see Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide . For creating roles on the Ceph dashboard, see Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide . 3.2. Creating roles on the Ceph dashboard You can create custom roles on the dashboard and these roles can be assigned to users based on their roles. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . 
On Roles tab, click Create . In the Create Role window, set the Name , Description , and select the Permissions for this role, and then click the Create Role button. In this example, the user assigned with ganesha-manager and rgw-manager roles can manage all NFS-Ganesha gateway and Ceph object gateway operations. You get a notification that the role was created successfully. Click on the Expand/Collapse icon of the row to view the details and permissions given to the roles. Additional Resources See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. See the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 3.3. Editing roles on the Ceph dashboard The dashboard allows you to edit roles on the dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. A role is created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On Roles tab, click the role you want to edit. In the Edit Role window, edit the parameters, and then click Edit Role . You get a notification that the role was updated successfully. Additional Resources See the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 3.4. Cloning roles on the Ceph dashboard When you want to assign additional permissions to existing roles, you can clone the system roles and edit it on the Red Hat Ceph Storage Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. Roles are created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On Roles tab, click the role you want to clone. Select Clone from the Edit drop-down menu. In the Clone Role dialog box, enter the details for the role, and then click Clone Role . Once you clone the role, you can customize the permissions as per the requirements. Additional Resources See the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 3.5. Deleting roles on the Ceph dashboard You can delete the custom roles that you have created on the Red Hat Ceph Storage dashboard. Note You cannot delete the system roles of the Ceph Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. A custom role is created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then select User management . On the Roles tab, click the role you want to delete and select Delete from the action drop-down. In the Delete Role notification, select Yes, I am sure and click Delete Role . Additional Resources See the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details.
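If you prefer to script role management rather than use the dashboard UI, the same scopes and permissions can be managed with the ceph dashboard access-control commands. The following is a minimal sketch with an example role name and user name, run from a host with the Ceph CLI available; it is an alternative to the dashboard procedures above, not part of them:

# Create a custom role covering NFS-Ganesha and Ceph Object Gateway management
ceph dashboard ac-role-create gateway-manager "Manage NFS-Ganesha and RGW"
# Grant full permissions on the nfs-ganesha and rgw security scopes
ceph dashboard ac-role-add-scope-perms gateway-manager nfs-ganesha read create update delete
ceph dashboard ac-role-add-scope-perms gateway-manager rgw read create update delete
# Assign the role to an existing dashboard user
ceph dashboard ac-user-add-roles example-user gateway-manager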
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/dashboard_guide/management-of-roles-on-the-ceph-dashboard
Chapter 19. Clusters at the network far edge
Chapter 19. Clusters at the network far edge 19.1. Challenges of the network far edge Edge computing presents complex challenges when managing many sites in geographically displaced locations. Use GitOps Zero Touch Provisioning (ZTP) to provision and manage sites at the far edge of the network. 19.1.1. Overcoming the challenges of the network far edge Today, service providers want to deploy their infrastructure at the edge of the network. This presents significant challenges: How do you handle deployments of many edge sites in parallel? What happens when you need to deploy sites in disconnected environments? How do you manage the lifecycle of large fleets of clusters? GitOps Zero Touch Provisioning (ZTP) and GitOps meets these challenges by allowing you to provision remote edge sites at scale with declarative site definitions and configurations for bare-metal equipment. Template or overlay configurations install OpenShift Container Platform features that are required for CNF workloads. The full lifecycle of installation and upgrades is handled through the GitOps ZTP pipeline. GitOps ZTP uses GitOps for infrastructure deployments. With GitOps, you use declarative YAML files and other defined patterns stored in Git repositories. Red Hat Advanced Cluster Management (RHACM) uses your Git repositories to drive the deployment of your infrastructure. GitOps provides traceability, role-based access control (RBAC), and a single source of truth for the desired state of each site. Scalability issues are addressed by Git methodologies and event driven operations through webhooks. You start the GitOps ZTP workflow by creating declarative site definition and configuration custom resources (CRs) that the GitOps ZTP pipeline delivers to the edge nodes. The following diagram shows how GitOps ZTP works within the far edge framework. 19.1.2. Using GitOps ZTP to provision clusters at the network far edge Red Hat Advanced Cluster Management (RHACM) manages clusters in a hub-and-spoke architecture, where a single hub cluster manages many spoke clusters. Hub clusters running RHACM provision and deploy the managed clusters by using GitOps Zero Touch Provisioning (ZTP) and the assisted service that is deployed when you install RHACM. The assisted service handles provisioning of OpenShift Container Platform on single node clusters, three-node clusters, or standard clusters running on bare metal. A high-level overview of using GitOps ZTP to provision and maintain bare-metal hosts with OpenShift Container Platform is as follows: A hub cluster running RHACM manages an OpenShift image registry that mirrors the OpenShift Container Platform release images. RHACM uses the OpenShift image registry to provision the managed clusters. You manage the bare-metal hosts in a YAML format inventory file, versioned in a Git repository. You make the hosts ready for provisioning as managed clusters, and use RHACM and the assisted service to install the bare-metal hosts on site. Installing and deploying the clusters is a two-stage process, involving an initial installation phase, and a subsequent configuration phase. The following diagram illustrates this workflow: 19.1.3. Installing managed clusters with SiteConfig resources and RHACM GitOps Zero Touch Provisioning (ZTP) uses SiteConfig custom resources (CRs) in a Git repository to manage the processes that install OpenShift Container Platform clusters. The SiteConfig CR contains cluster-specific parameters required for installation. 
It has options for applying select configuration CRs during installation including user defined extra manifests. The GitOps ZTP plugin processes SiteConfig CRs to generate a collection of CRs on the hub cluster. This triggers the assisted service in Red Hat Advanced Cluster Management (RHACM) to install OpenShift Container Platform on the bare-metal host. You can find installation status and error messages in these CRs on the hub cluster. You can provision single clusters manually or in batches with GitOps ZTP: Provisioning a single cluster Create a single SiteConfig CR and related installation and configuration CRs for the cluster, and apply them in the hub cluster to begin cluster provisioning. This is a good way to test your CRs before deploying on a larger scale. Provisioning many clusters Install managed clusters in batches of up to 400 by defining SiteConfig and related CRs in a Git repository. ArgoCD uses the SiteConfig CRs to deploy the sites. The RHACM policy generator creates the manifests and applies them to the hub cluster. This starts the cluster provisioning process. 19.1.4. Configuring managed clusters with policies and PolicyGenTemplate resources GitOps Zero Touch Provisioning (ZTP) uses Red Hat Advanced Cluster Management (RHACM) to configure clusters by using a policy-based governance approach to applying the configuration. The policy generator or PolicyGen is a plugin for the GitOps Operator that enables the creation of RHACM policies from a concise template. The tool can combine multiple CRs into a single policy, and you can generate multiple policies that apply to various subsets of clusters in your fleet. Note For scalability and to reduce the complexity of managing configurations across the fleet of clusters, use configuration CRs with as much commonality as possible. Where possible, apply configuration CRs using a fleet-wide common policy. The preference is to create logical groupings of clusters to manage as much of the remaining configurations as possible under a group policy. When a configuration is unique to an individual site, use RHACM templating on the hub cluster to inject the site-specific data into a common or group policy. Alternatively, apply an individual site policy for the site. The following diagram shows how the policy generator interacts with GitOps and RHACM in the configuration phase of cluster deployment. For large fleets of clusters, it is typical for there to be a high-level of consistency in the configuration of those clusters. The following recommended structuring of policies combines configuration CRs to meet several goals: Describe common configurations once and apply to the fleet. Minimize the number of maintained and managed policies. Support flexibility in common configurations for cluster variants. Table 19.1. Recommended PolicyGenTemplate policy categories Policy category Description Common A policy that exists in the common category is applied to all clusters in the fleet. Use common PolicyGenTemplate CRs to apply common installation settings across all cluster types. Groups A policy that exists in the groups category is applied to a group of clusters in the fleet. Use group PolicyGenTemplate CRs to manage specific aspects of single-node, three-node, and standard cluster installations. Cluster groups can also follow geographic region, hardware variant, etc. Sites A policy that exists in the sites category is applied to a specific cluster site. Any cluster can have its own specific policies maintained. 
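As a hedged sketch of how the groups category maps onto a PolicyGenTemplate CR, the following example binds to clusters labeled group-du-sno and wraps two source CRs into a single group policy. The namespace, source file names, and policy name are illustrative and follow the pattern of the examples elsewhere in this guide rather than the full reference DU profile:

apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "group-du-sno"
  namespace: "ztp-group"
spec:
  # Every cluster carrying this label receives the generated group policy
  bindingRules:
    group-du-sno: ""
  mcp: "master"
  sourceFiles:
    - fileName: PtpConfigSlave.yaml
      policyName: "config-policy"
    - fileName: SriovOperatorConfig.yaml
      policyName: "config-policy"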
Additional resources For more information about extracting the reference SiteConfig and PolicyGenTemplate CRs from the ztp-site-generate container image, see Preparing the ZTP Git repository . 19.2. Preparing the hub cluster for ZTP To use RHACM in a disconnected environment, create a mirror registry that mirrors the OpenShift Container Platform release images and Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the RHCOS ISO and RootFS disk images that are used to provision the bare-metal hosts. 19.2.1. Telco RAN 4.13 validated solution software versions The Red Hat Telco Radio Access Network (RAN) version 4.13 solution has been validated using the following Red Hat software products. Table 19.2. Telco RAN 4.13 validated solution software Product Software version Hub cluster OpenShift Container Platform version 4.13 GitOps ZTP plugin 4.11, 4.12, or 4.13 Red Hat Advanced Cluster Management (RHACM) 2.7 Red Hat OpenShift GitOps 1.9, 1.10 Topology Aware Lifecycle Manager (TALM) 4.11, 4.12, or 4.13 19.2.2. Recommended hub cluster specifications and managed cluster limits for GitOps ZTP With GitOps Zero Touch Provisioning (ZTP), you can manage thousands of clusters in geographically dispersed regions and networks. The Red Hat Performance and Scale lab successfully created and managed 3500 virtual single-node OpenShift clusters with a reduced DU profile from a single Red Hat Advanced Cluster Management (RHACM) hub cluster in a lab environment. In real-world situations, the scaling limits for the number of clusters that you can manage will vary depending on various factors affecting the hub cluster. For example: Hub cluster resources Available hub cluster host resources (CPU, memory, storage) are an important factor in determining how many clusters the hub cluster can manage. The more resources allocated to the hub cluster, the more managed clusters it can accommodate. Hub cluster storage The hub cluster host storage IOPS rating and whether the hub cluster hosts use NVMe storage can affect hub cluster performance and the number of clusters it can manage. Network bandwidth and latency Slow or high-latency network connections between the hub cluster and managed clusters can impact how the hub cluster manages multiple clusters. Managed cluster size and complexity The size and complexity of the managed clusters also affects the capacity of the hub cluster. Larger managed clusters with more nodes, namespaces, and resources require additional processing and management resources. Similarly, clusters with complex configurations such as the RAN DU profile or diverse workloads can require more resources from the hub cluster. Number of managed policies The number of policies managed by the hub cluster scaled over the number of managed clusters bound to those policies is an important factor that determines how many clusters can be managed. Monitoring and management workloads RHACM continuously monitors and manages the managed clusters. The number and complexity of monitoring and management workloads running on the hub cluster can affect its capacity. Intensive monitoring or frequent reconciliation operations can require additional resources, potentially limiting the number of manageable clusters. RHACM version and configuration Different versions of RHACM can have varying performance characteristics and resource requirements. 
Additionally, the configuration settings of RHACM, such as the number of concurrent reconciliations or the frequency of health checks, can affect the managed cluster capacity of the hub cluster. Use the following representative configuration and network specifications to develop your own Hub cluster and network specifications. Important The following guidelines are based on internal lab benchmark testing only and do not represent a complete real-world host specification. Table 19.3. Representative three-node hub cluster machine specifications Requirement Description Server hardware 3 x Dell PowerEdge R650 rack servers NVMe hard disks 50 GB disk for /var/lib/etcd 2.9 TB disk for /var/lib/containers SSD hard disks 1 SSD split into 15 200GB thin-provisioned logical volumes provisioned as PV CRs 1 SSD serving as an extra large PV resource Number of applied DU profile policies 5 Important The following network specifications are representative of a typical real-world RAN network and were applied to the scale lab environment during testing. Table 19.4. Simulated lab environment network specifications Specification Description Round-trip time (RTT) latency 50 ms Packet loss 0.02% packet loss Network bandwidth limit 20 Mbps Additional resources Creating and managing single-node OpenShift clusters with RHACM 19.2.3. Installing GitOps ZTP in a disconnected environment Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have configured a disconnected mirror registry for use in the cluster. Note The disconnected mirror registry that you create must contain a version of TALM backup and pre-cache images that matches the version of TALM running in the hub cluster. The spoke cluster must be able to resolve these images in the disconnected mirror registry. Procedure Install RHACM in the hub cluster. See Installing RHACM in a disconnected environment . Install GitOps and TALM in the hub cluster. Additional resources Installing OpenShift GitOps Installing TALM Mirroring an Operator catalog 19.2.4. Adding RHCOS ISO and RootFS images to the disconnected mirror host Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host Red Hat Enterprise Linux CoreOS (RHCOS) images for it to use. Use a disconnected mirror to host the RHCOS images. Prerequisites Deploy and configure an HTTP server to host the RHCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. You require ISO and RootFS images to install RHCOS on the hosts. RHCOS QCOW2 images are not supported for this installation type. Procedure Log in to the mirror host. 
Obtain the RHCOS ISO and RootFS images from mirror.openshift.com , for example: Export the required image names and OpenShift Container Platform version as environment variables: USD export ISO_IMAGE_NAME=<iso_image_name> 1 USD export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1 USD export OCP_VERSION=<ocp_version> 1 1 ISO image name, for example, rhcos-4.13.1-x86_64-live.x86_64.iso 1 RootFS image name, for example, rhcos-4.13.1-x86_64-live-rootfs.x86_64.img 1 OpenShift Container Platform version, for example, 4.13.1 Download the required images: USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME} USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME} Verification steps Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example: USD wget http://USD(hostname)/USD{ISO_IMAGE_NAME} Example output Saving to: rhcos-4.13.1-x86_64-live.x86_64.iso rhcos-4.13.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s Additional resources Creating a mirror registry Mirroring images for a disconnected installation 19.2.5. Enabling the assisted service Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OpenShift Container Platform clusters. The assisted service is deployed automatically when you enable the MultiClusterHub Operator on Red Hat Advanced Cluster Management (RHACM). After that, you need to configure the Provisioning resource to watch all namespaces and to update the AgentServiceConfig custom resource (CR) with references to the ISO and RootFS images that are hosted on the mirror registry HTTP server. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have RHACM with MultiClusterHub enabled. Procedure Enable the Provisioning resource to watch all namespaces and configure mirrors for disconnected environments. For more information, see Enabling the Central Infrastructure Management service . Update the AgentServiceConfig CR by running the following command: USD oc edit AgentServiceConfig Add the following entry to the items.spec.osImages field in the CR: - cpuArchitecture: x86_64 openshiftVersion: "4.13" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso where: <host> Is the fully qualified domain name (FQDN) for the target mirror registry HTTP server. <path> Is the path to the image on the target mirror registry. Save and quit the editor to apply the changes. 19.2.6. Configuring the hub cluster to use a disconnected mirror registry You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment. Prerequisites You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.8 installed. You have hosted the rootfs and iso images on an HTTP server. See the Additional resources section for guidance about Mirroring the OpenShift Container Platform image repository . Warning If you enable TLS for the HTTP server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and managed clusters and the HTTP server. 
Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Procedure Create a ConfigMap containing the mirror registry config: apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: | 2 -----BEGIN CERTIFICATE----- <certificate_contents> -----END CERTIFICATE----- registries.conf: | 3 unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "quay.io/example-repository" 4 mirror-by-digest-only = true [[registry.mirror]] location = "mirror1.registry.corp.com:5000/example-repository" 5 1 The ConfigMap namespace must be set to multicluster-engine . 2 The mirror registry's certificate that is used when creating the mirror registry. 3 The configuration file for the mirror registry. The mirror registry configuration adds mirror information to the /etc/containers/registries.conf file in the discovery image. The mirror information is stored in the imageContentSources section of the install-config.yaml file when the information is passed to the installation program. The Assisted Service pod that runs on the hub cluster fetches the container images from the configured mirror registry. 4 The URL of the mirror registry. You must use the URL from the imageContentSources section by running the oc adm release mirror command when you configure the mirror registry. For more information, see the Mirroring the OpenShift Container Platform image repository section. 5 The registries defined in the registries.conf file must be scoped by repository, not by registry. In this example, both the quay.io/example-repository and the mirror1.registry.corp.com:5000/example-repository repositories are scoped by the example-repository repository. This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below: Example output apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> url: <iso_url> 3 1 Set the AgentServiceConfig namespace to multicluster-engine to match the ConfigMap namespace 2 Set mirrorRegistryRef.name to match the definition specified in the related ConfigMap CR 3 Set the URL for the ISO hosted on the httpd server Important A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network. Additional resources Mirroring the OpenShift Container Platform image repository 19.2.7. Configuring the hub cluster to use unauthenticated registries You can configure the hub cluster to use unauthenticated registries. Unauthenticated registries does not require authentication to access and download images. Prerequisites You have installed and configured a hub cluster and installed Red Hat Advanced Cluster Management (RHACM) on the hub cluster. You have installed the OpenShift Container Platform CLI (oc). You have logged in as a user with cluster-admin privileges. 
You have configured an unauthenticated registry for use with the hub cluster. Procedure Update the AgentServiceConfig custom resource (CR) by running the following command: USD oc edit AgentServiceConfig agent Add the unauthenticatedRegistries field in the CR: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com ... Unauthenticated registries are listed under spec.unauthenticatedRegistries in the AgentServiceConfig resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. assisted-service validates the pull secret by making sure it contains the authentication information for every image registry used for installation. Note Mirror registries are automatically added to the ignore list and do not need to be added under spec.unauthenticatedRegistries . Specifying the PUBLIC_CONTAINER_REGISTRIES environment variable in the ConfigMap overrides the default values with the specified value. The PUBLIC_CONTAINER_REGISTRIES defaults are quay.io and registry.svc.ci.openshift.org . Verification Verify that you can access the newly added registry from the hub cluster by running the following commands: Open a debug shell prompt to the hub cluster: USD oc debug node/<node_name> Test access to the unauthenticated registry by running the following command: sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry> where: <unauthenticated_registry> Is the new registry, for example, unauthenticated-image-registry.openshift-image-registry.svc:5000 . Example output Login Succeeded! 19.2.8. Configuring the hub cluster with ArgoCD You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps Zero Touch Provisioning (ZTP). Note Red Hat Advanced Cluster Management (RHACM) uses SiteConfig CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 SiteConfig CRs. Prerequisites You have an OpenShift Container Platform hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed. You have extracted the reference deployment from the GitOps ZTP plugin container as described in the "Preparing the GitOps ZTP site configuration repository" section. Extracting the reference deployment creates the out/argocd/deployment directory referenced in the following procedure. Procedure Prepare the ArgoCD pipeline configuration: Create a Git repository with a directory structure similar to the example directory. For more information, see "Preparing the GitOps ZTP site configuration repository". Configure access to the repository using the ArgoCD UI. Under Settings, configure the following: Repositories - Add the repository connection information, including the credentials. The URL must end in .git , for example, https://repo.example.com/repo.git . Certificates - Add the public certificate for the repository, if needed. Modify the two ArgoCD applications, out/argocd/deployment/clusters-app.yaml and out/argocd/deployment/policies-app.yaml , based on your Git repository: Update the URL to point to the Git repository. The URL ends with .git , for example, https://repo.example.com/repo.git . The targetRevision indicates which Git repository branch to monitor. path specifies the path to the SiteConfig and PolicyGenTemplate CRs, respectively.
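For reference, the fields that you update follow the standard Argo CD Application layout, as in the sketch below. The values shown are placeholders; leave the rest of the file as extracted from the ztp-site-generate container unchanged, and point path in policies-app.yaml at the PolicyGenTemplate directory instead:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: clusters
  namespace: openshift-gitops
spec:
  source:
    # Placeholder values - replace with your own repository details
    repoURL: https://repo.example.com/repo.git
    targetRevision: main
    path: siteconfig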
To install the GitOps ZTP plugin, you must patch the ArgoCD instance in the hub cluster by using the patch file previously extracted into the out/argocd/deployment/ directory. Run the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. To disable this feature, apply the following patch to remove the relevant hub cluster and managed cluster pods that are responsible for this add-on. USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json Apply the pipeline configuration to your hub cluster by using the following command: USD oc apply -k out/argocd/deployment 19.2.9. Preparing the GitOps ZTP site configuration repository Before you can use the GitOps Zero Touch Provisioning (ZTP) pipeline, you need to prepare the Git repository to host the site configuration data. Prerequisites You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs). You have deployed the managed clusters using GitOps ZTP. Procedure Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate CRs. Export the argocd directory from the ztp-site-generate container image using the following commands: USD podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 USD mkdir -p ./out USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 extract /home/ztp --tar | tar x -C ./out Check that the out directory contains the following subdirectories: out/extra-manifest contains the source CR files that SiteConfig uses to generate the extra manifest ConfigMap . out/source-crs contains the source CR files that PolicyGenTemplate uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. out/argocd/deployment contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure. out/argocd/example contains the examples for SiteConfig and PolicyGenTemplate files that represent the recommended configuration. The directory structure under out/argocd/example serves as a reference for the structure and content of your Git repository. The example includes SiteConfig and PolicyGenTemplate reference CRs for single-node, three-node, and standard clusters. Remove references to cluster types that you are not using. The following example describes a set of CRs for a network of single-node clusters: example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml Keep SiteConfig and PolicyGenTemplate CRs in separate directories. Both the SiteConfig and PolicyGenTemplate directories must contain a kustomization.yaml file that explicitly includes the files in that directory. This directory structure and the kustomization.yaml files must be committed and pushed to your Git repository. The initial push to Git should include the kustomization.yaml files.
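As a minimal sketch, each kustomization.yaml lists the CR files in its directory so that the GitOps ZTP plugin processes them. The filenames below are the example names shown above, and the exact layout for your release is given by the reference files under out/argocd/example:
# siteconfig/kustomization.yaml
generators:
- example-sno.yaml
# policygentemplates/kustomization.yaml
generators:
- common-ranGen.yaml
- group-du-sno-ranGen.yaml
- group-du-sno-validator-ranGen.yaml
- example-sno-site.yaml
resources:
- ns.yaml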
The SiteConfig ( example-sno.yaml ) and PolicyGenTemplate ( common-ranGen.yaml , group-du-sno*.yaml , and example-sno-site.yaml ) files can be omitted and pushed at a later time as required when deploying a site. The KlusterletAddonConfigOverride.yaml file is only required if one or more SiteConfig CRs which make reference to it are committed and pushed to Git. See example-sno.yaml for an example of how this is used. 19.3. Installing managed clusters with RHACM and SiteConfig resources You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The GitOps Zero Touch Provisioning (ZTP) pipeline performs the cluster installations. GitOps ZTP can be used in a disconnected environment. 19.3.1. GitOps ZTP and Topology Aware Lifecycle Manager GitOps Zero Touch Provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the GitOps ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM. Inform policies By default, GitOps ZTP creates all policies with a remediation action of inform . These policies cause RHACM to report on compliance status of clusters relevant to the policies but does not apply the desired configuration. During the GitOps ZTP process, after OpenShift installation, the TALM steps through the created inform policies and enforces them on the target managed cluster(s). This applies the configuration to the managed cluster. Outside of the GitOps ZTP phase of the cluster lifecycle, this allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM. Automatic creation of ClusterGroupUpgrade CRs To automate the initial configuration of newly deployed clusters, TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics: The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace. ClusterGroupUpgrade CR has the same name as the ManagedCluster CR. The cluster selector includes only the cluster associated with that ManagedCluster CR. The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade is created. Pre-caching is disabled. Timeout set to 4 hours (240 minutes). The automatic creation of an enabled ClusterGroupUpgrade ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed GitOps ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster. Waves Each policy generated from a PolicyGenTemplate CR includes a ztp-deploy-wave annotation. 
This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR. Note All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation. The default value of this annotation for each CR can be overridden in the PolicyGenTemplate . The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime. The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the next policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account. Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves. To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image: USD grep -r "ztp-deploy-wave" out/source-crs Phase labels The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the GitOps ZTP process. When the GitOps ZTP postinstallation configuration phase commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label. For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the GitOps ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial GitOps ZTP installation of the cluster. Linked CRs The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources. 19.3.2. Overview of deploying managed clusters with GitOps ZTP Red Hat Advanced Cluster Management (RHACM) uses GitOps Zero Touch Provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. GitOps ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters.
The deployment of the clusters includes: Installing the host operating system (RHCOS) on a blank server Deploying OpenShift Container Platform Creating cluster policies and site subscriptions Making the necessary network configurations to the server operating system Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV Overview of the managed site installation process After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically: A Discovery image ISO file is generated and booted on the target host. When the ISO file successfully boots on the target host it reports the host hardware information to RHACM. After all hosts are discovered, OpenShift Container Platform is installed. When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster. The requested add-on services are installed on the target cluster. The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster. Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads . 19.3.3. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 19.3.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.13, you can only add kernel arguments. You can not replace or delete kernel arguments. 
Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments. Save the following YAML in an InfraEnv-example.yaml file: Note The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the SiteConfig CR. The SiteConfig CR automatically populates values for these templates during deployment. Do not edit the templates manually. apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: "1" name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" spec: clusterRef: name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: "{{ .Site.SshPublicKey }}" proxy: "{{ .Cluster.ProxySettings }}" pullSecretRef: name: "{{ .Site.PullSecretRef.Name }}" ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}" nmStateConfigLabelSelector: matchLabels: nmstate-label: "{{ .Cluster.ClusterName }}" additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}" 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Commit the InfraEnv-example.yaml CR to the same location in your Git repository that has the SiteConfig CR and push your changes. The following example shows a sample Git repository structure: ~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml ... Edit the spec.clusters.crTemplates specification in the SiteConfig CR to reference the InfraEnv-example.yaml CR in your Git repository: clusters: crTemplates: InfraEnv: "InfraEnv-example.yaml" When you are ready to deploy your cluster by committing and pushing the SiteConfig CR, the build pipeline uses the custom InfraEnv-example CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 19.3.5. Deploying a managed cluster with SiteConfig and GitOps ZTP Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the GitOps Zero Touch Provisioning (ZTP) cluster deployment. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information. 
Note When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch-file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD". To be ready for provisioning managed clusters, you require the following for each bare-metal host: Network connectivity Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host. Baseboard Management Controller (BMC) details GitOps ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repo. You create individual BMCSecret CRs for each host manually. Procedure Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml , the cluster name and namespace is example-sno . Export the cluster namespace by running the following command: USD export CLUSTERNS=example-sno Create the namespace: USD oc create namespace USDCLUSTERNS Create pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information. Note The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace. Create a SiteConfig CR for your cluster in your local clone of the Git repository: Choose the appropriate example for your CR from the out/argocd/example/siteconfig/ folder. The folder includes example files for single node, three-node, and standard clusters: example-sno.yaml example-3node.yaml example-standard.yaml Change the cluster and host details in the example file to match the type of cluster you want. For example: Example single-node OpenShift SiteConfig CR # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" cpuPartitioningMode: AllNodes pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.10" sshPublicKey: "ssh-rsa AAAA..." 
clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "marketplace", "NodeTuning" ] } } clusterLabels: common: true group-du-sno: "" sites: "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" bootMode: "UEFI" rootDeviceHints: wwn: "0x11111000000asd123" # diskPartition: # - device: /dev/disk/by-id/wwn-0x11111000000asd123 # match rootDeviceHints # partitions: # - mount_point: /var/imageregistry # size: 102500 # start: 344844 ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-id/wwn-0x11111000000asd123", "wipeTable": false, "partitions": [ { "sizeMiB": 16, "label": "httpevent1", "startMiB": 350000 }, { "sizeMiB": 16, "label": "httpevent2", "startMiB": 350016 } ] } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/httpevent1", "format": "xfs", "wipeFilesystem": true }, { "device": "/dev/disk/by-partlabel/httpevent2", "format": "xfs", "wipeFilesystem": true } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Note For more information about BMC addressing, see the "Additional resources" section. The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability. You can inspect the default set of extra-manifest MachineConfig CRs in out/argocd/extra-manifest . These CRs are automatically applied to the cluster when it is installed. Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example, sno-extra-manifest/ , and add your custom manifest CRs to this directory. If your SiteConfig.yaml refers to this directory in the extraManifestPath field, any CRs in this referenced directory are appended to the default set of extra manifests. Enabling the crun OCI container runtime For optimal cluster performance, enable crun for master and worker nodes in single-node OpenShift, single-node OpenShift with additional worker nodes, three-node OpenShift, and standard clusters. Enable crun in a ContainerRuntimeConfig CR as an additional Day 0 install-time manifest to avoid the cluster having to reboot. The enable-crun-master.yaml and enable-crun-worker.yaml CR files are in the out/source-crs/optional-extra-manifest/ folder that you can extract from the ztp-site-generate container. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline". A sketch of such a CR is shown after this step. Add the SiteConfig CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/siteconfig/kustomization.yaml .
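For reference, a ContainerRuntimeConfig that enables crun, such as the enable-crun-master.yaml file mentioned above, typically looks similar to the following sketch. Use the files extracted from the ztp-site-generate container rather than hand-writing them, because the shipped versions match the supported release:
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-master
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  containerRuntimeConfig:
    defaultRuntime: crun   # switch the default OCI runtime from runc to crun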
Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. Additional resources Single-node OpenShift SiteConfig CR installation reference 19.3.5.1. Single-node OpenShift SiteConfig CR installation reference Table 19.5. SiteConfig CR installation options for single-node OpenShift clusters SiteConfig CR field Description spec.cpuPartitioningMode Configure workload partitioning by setting the value for cpuPartitioningMode to AllNodes . To complete the configuration, specify the isolated and reserved CPUs in the PerformanceProfile CR. Note Configuring workload partitioning by using the cpuPartitioningMode field in the SiteConfig CR is a Tech Preview feature in OpenShift Container Platform 4.13. metadata.name Set name to assisted-deployment-pull-secret and create the assisted-deployment-pull-secret CR in the same namespace as the SiteConfig CR. spec.clusterImageSetNameRef Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run oc get clusterimagesets . installConfigOverrides Set the installConfigOverrides field to enable or disable optional components prior to cluster installation. Important Use the reference configuration as specified in the example SiteConfig CR. Adding additional components back into the system might require additional reserved CPU capacity. spec.clusters.clusterImageSetNameRef Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the spec.clusterImageSetNameRef at site level. spec.clusters.clusterLabels Configure cluster labels to correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. For example, policygentemplates/common-ranGen.yaml applies to all clusters with common: true set, and policygentemplates/group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set. spec.clusters.crTemplates.KlusterletAddonConfig Optional. Set KlusterletAddonConfig to KlusterletAddonConfigOverride.yaml to override the default KlusterletAddonConfig that is created for the cluster. spec.clusters.nodes.hostName For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker . spec.clusters.nodes.bmcAddress BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. Note In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. spec.clusters.nodes.bmcCredentialsName Configure the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host. spec.clusters.nodes.bootMode Set the boot mode for the host to UEFI . The default value is UEFI .
Use UEFISecureBoot to enable secure boot on the host. spec.clusters.nodes.rootDeviceHints Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, wwn: <disk_wwn> or deviceName: /dev/disk/by-path/<device_path> . Values using a by-path scheme are preferred. For a detailed list of stable identifiers, see the "About root device hints" section. spec.clusters.nodes.cpuset Configure cpuset to match the value that you set in the cluster PerformanceProfile CR spec.cpu.reserved field for workload partitioning. For example, cpuset: "0-1,40-41" . spec.clusters.nodes.ignitionConfigOverride Optional. Use this field to assign partitions for persistent storage. Adjust disk ID and size to the specific hardware. spec.clusters.nodes.nodeNetwork Configure the network settings for the node. spec.clusters.nodes.nodeNetwork.config.interfaces.ipv6 Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. Additional resources Customizing extra installation manifests in the GitOps ZTP pipeline Preparing the GitOps ZTP site configuration repository Configuring the hub cluster with ArgoCD Signalling ZTP cluster deployment completion with validator inform policies Creating the managed bare-metal host secrets BMC addressing About root device hints 19.3.6. Monitoring managed cluster installation progress The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs them with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure When the synchronization is complete, the installation generally proceeds as follows: The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands: Export the cluster name: USD export CLUSTER=<clusterName> Query the AgentClusterInstall CR for the managed cluster: USD oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq Get the installation events for the cluster: USD curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]' 19.3.7. Troubleshooting GitOps ZTP by validating the installation CRs The ArgoCD pipeline uses the SiteConfig and PolicyGenTemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Check that the installation CRs were created by using the following command: USD oc get AgentClusterInstall -n <cluster_name> If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from SiteConfig files to the installation CRs.
Verify that the ManagedCluster CR was generated using the SiteConfig CR on the hub cluster: USD oc get managedcluster If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster: USD oc describe -n openshift-gitops application clusters Check for the Status.Conditions field to view the error logs for the managed cluster. For example, setting an invalid value for extraManifestPath: in the SiteConfig CR raises the following error: Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/siteconfigs/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not create extra-manifest ranSite1.extra-manifest3 stat extra-manifest3: no such file or directory 2021/11/26 17:21:40 Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-913473579: stat extra-manifest3: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-913473579; exit status 1: exit status 1 Type: ComparisonError Check the Status.Sync field. If there are log errors, the Status.Sync field could indicate an Unknown error: Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown 19.3.8. Troubleshooting GitOps ZTP virtual media booting on Supermicro servers SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Disable TLS in the Provisioning resource by running the following command: USD oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}' Continue the steps to deploy your single-node OpenShift cluster. 19.3.9. Removing a managed cluster site from the GitOps ZTP pipeline You can remove a managed site and the associated installation and configuration policy CRs from the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove a site and the associated CRs by removing the associated SiteConfig and PolicyGenTemplate files from the kustomization.yaml file. When you run the GitOps ZTP pipeline again, the generated CRs are removed. Optional: If you want to permanently remove a site, you should also remove the SiteConfig and site-specific PolicyGenTemplate files from the Git repository. Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the SiteConfig and site-specific PolicyGenTemplate CRs in the Git repository. Additional resources For information about removing a cluster, see Removing a cluster from management . 19.3.10. 
Removing obsolete content from the GitOps ZTP pipeline If a change to the PolicyGenTemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove the affected PolicyGenTemplate files from the Git repository, then commit and push to the remote repository. Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster. Add the updated PolicyGenTemplate files back to the Git repository, and then commit and push to the remote repository. Note Removing GitOps Zero Touch Provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. The policy and the CRs managed by that policy remain in place on the managed cluster. Optional: As an alternative, after making changes to PolicyGenTemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command: USD oc delete policy -n <namespace> <policy_name> 19.3.11. Tearing down the GitOps ZTP pipeline You can remove the ArgoCD pipeline and all generated GitOps Zero Touch Provisioning (ZTP) artifacts. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster. Delete the kustomization.yaml file in the deployment directory using the following command: USD oc delete -k out/argocd/deployment Commit and push your changes to the site repository. 19.4. Configuring managed clusters with policies and PolicyGenTemplate resources Applied policy custom resources (CRs) configure the managed clusters that you provision. You can customize how Red Hat Advanced Cluster Management (RHACM) uses PolicyGenTemplate CRs to generate the applied policy CRs. 19.4.1. About the PolicyGenTemplate CRD The PolicyGenTemplate custom resource definition (CRD) tells the PolicyGen policy generator what custom resources (CRs) to include in the cluster configuration, how to combine the CRs into the generated policies, and what items in those CRs need to be updated with overlay content. The following example shows a PolicyGenTemplate CR ( common-du-ranGen.yaml ) extracted from the ztp-site-generate reference container. The common-du-ranGen.yaml file defines two Red Hat Advanced Cluster Management (RHACM) policies. The policies manage a collection of configuration CRs, one for each unique value of policyName in the CR. common-du-ranGen.yaml creates a single placement binding and a placement rule to bind the policies to clusters based on the labels listed in the bindingRules section.
Example PolicyGenTemplate CR - common-du-ranGen.yaml --- apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "common" namespace: "ztp-common" spec: bindingRules: common: "true" 1 sourceFiles: 2 - fileName: SriovSubscription.yaml policyName: "subscriptions-policy" - fileName: SriovSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: SriovSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: SriovOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: PtpSubscription.yaml policyName: "subscriptions-policy" - fileName: PtpSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: PtpSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: PtpOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: ClusterLogNS.yaml policyName: "subscriptions-policy" - fileName: ClusterLogOperGroup.yaml policyName: "subscriptions-policy" - fileName: ClusterLogSubscription.yaml policyName: "subscriptions-policy" - fileName: ClusterLogOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: StorageNS.yaml policyName: "subscriptions-policy" - fileName: StorageOperGroup.yaml policyName: "subscriptions-policy" - fileName: StorageSubscription.yaml policyName: "subscriptions-policy" - fileName: StorageOperatorStatus.yaml policyName: "subscriptions-policy" - fileName: ReduceMonitoringFootprint.yaml policyName: "config-policy" - fileName: OperatorHub.yaml 3 policyName: "config-policy" - fileName: DefaultCatsrc.yaml 4 policyName: "config-policy" 5 metadata: name: redhat-operators spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: "config-policy" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io 1 common: "true" applies the policies to all clusters with this label. 2 Files listed under sourceFiles create the Operator policies for installed clusters. 3 OperatorHub.yaml configures the OperatorHub for the disconnected registry. 4 DefaultCatsrc.yaml configures the catalog source for the disconnected registry. 5 policyName: "config-policy" configures Operator subscriptions. The OperatorHub CR disables the default and this CR replaces redhat-operators with a CatalogSource CR that points to the disconnected registry. A PolicyGenTemplate CR can be constructed with any number of included CRs. Apply the following example CR in the hub cluster to generate a policy containing a single CR: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno" namespace: "ztp-group" spec: bindingRules: group-du-sno: "" mcp: "master" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f0" ptp4lOpts: "-2 -s --summary_interval -4" phc2sysOpts: "-a -r -n 24" Using the source file PtpConfigSlave.yaml as an example, the file defines a PtpConfig CR. The generated policy for the PtpConfigSlave example is named group-du-sno-config-policy . The PtpConfig CR defined in the generated group-du-sno-config-policy is named du-ptp-slave . The spec defined in PtpConfigSlave.yaml is placed under du-ptp-slave along with the other spec items defined under the source file. 
The following example shows the group-du-sno-config-policy CR: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..... 19.4.2. Recommendations when customizing PolicyGenTemplate CRs Consider the following best practices when customizing site configuration PolicyGenTemplate custom resources (CRs): Use as few policies as are necessary. Using fewer policies requires less resources. Each additional policy creates overhead for the hub cluster and the deployed managed cluster. CRs are combined into policies based on the policyName field in the PolicyGenTemplate CR. CRs in the same PolicyGenTemplate which have the same value for policyName are managed under a single policy. In disconnected environments, use a single catalog source for all Operators by configuring the registry as a single index containing all Operators. Each additional CatalogSource CR on the managed clusters increases CPU usage. MachineConfig CRs should be included as extraManifests in the SiteConfig CR so that they are applied during installation. This can reduce the overall time taken until the cluster is ready to deploy applications. PolicyGenTemplates should override the channel field to explicitly identify the desired version. This ensures that changes in the source CR during upgrades does not update the generated subscription. Additional resources For recommendations about scaling clusters with RHACM, see Performance and scalability . Note When managing large numbers of spoke clusters on the hub cluster, minimize the number of policies to reduce resource consumption. Grouping multiple configuration CRs into a single or limited number of policies is one way to reduce the overall number of policies on the hub cluster. When using the common, group, and site hierarchy of policies for managing site configuration, it is especially important to combine site-specific configuration into a single policy. 19.4.3. PolicyGenTemplate CRs for RAN deployments Use PolicyGenTemplate (PGT) custom resources (CRs) to customize the configuration applied to the cluster by using the GitOps Zero Touch Provisioning (ZTP) pipeline. The PGT CR allows you to generate one or more policies to manage the set of configuration CRs on your fleet of clusters. The PGT identifies the set of managed CRs, bundles them into policies, builds the policy wrapping around those CRs, and associates the policies with clusters by using label binding rules. 
The reference configuration, obtained from the GitOps ZTP container, is designed to provide a set of critical features and node tuning settings that ensure the cluster can support the stringent performance and resource utilization constraints typical of RAN (Radio Access Network) Distributed Unit (DU) applications. Changes or omissions from the baseline configuration can affect feature availability, performance, and resource utilization. Use the reference PolicyGenTemplate CRs as the basis to create a hierarchy of configuration files tailored to your specific site requirements. The baseline PolicyGenTemplate CRs that are defined for RAN DU cluster configuration can be extracted from the GitOps ZTP ztp-site-generate container. See "Preparing the GitOps ZTP site configuration repository" for further details. The PolicyGenTemplate CRs can be found in the ./out/argocd/example/policygentemplates folder. The reference architecture has common, group, and site-specific configuration CRs. Each PolicyGenTemplate CR refers to other CRs that can be found in the ./out/source-crs folder. The PolicyGenTemplate CRs relevant to RAN cluster configuration are described below. Variants are provided for the group PolicyGenTemplate CRs to account for differences in single-node, three-node compact, and standard cluster configurations. Similarly, site-specific configuration variants are provided for single-node clusters and multi-node (compact or standard) clusters. Use the group and site-specific configuration variants that are relevant for your deployment. Table 19.6. PolicyGenTemplate CRs for RAN deployments PolicyGenTemplate CR Description example-multinode-site.yaml Contains a set of CRs that get applied to multi-node clusters. These CRs configure SR-IOV features typical for RAN installations. example-sno-site.yaml Contains a set of CRs that get applied to single-node OpenShift clusters. These CRs configure SR-IOV features typical for RAN installations. common-ranGen.yaml Contains a set of common RAN CRs that get applied to all clusters. These CRs subscribe to a set of operators providing cluster features typical for RAN as well as baseline cluster tuning. group-du-3node-ranGen.yaml Contains the RAN policies for three-node clusters only. group-du-sno-ranGen.yaml Contains the RAN policies for single-node clusters only. group-du-standard-ranGen.yaml Contains the RAN policies for standard three control-plane clusters. group-du-3node-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for three-node clusters. group-du-standard-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for standard clusters. group-du-sno-validator-ranGen.yaml PolicyGenTemplate CR used to generate the various policies required for single-node OpenShift clusters. Additional resources Preparing the GitOps ZTP site configuration repository 19.4.4. Customizing a managed cluster with PolicyGenTemplate CRs Use the following procedure to customize the policies that get applied to the managed cluster that you provision using the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. 
The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a PolicyGenTemplate CR for site-specific configuration CRs. Choose the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, example-sno-site.yaml or example-multinode-site.yaml . Change the bindingRules field in the example file to match the site-specific label included in the SiteConfig CR. In the example SiteConfig file, the site-specific label is sites: example-sno . Note Ensure that the labels defined in your PolicyGenTemplate bindingRules field correspond to the labels that are defined in the related managed clusters SiteConfig CR. Change the content in the example file to match the desired configuration. Optional: Create a PolicyGenTemplate CR for any common configuration CRs that apply to the entire fleet of clusters. Select the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, common-ranGen.yaml . Change the content in the example file to match the desired configuration. Optional: Create a PolicyGenTemplate CR for any group configuration CRs that apply to the certain groups of clusters in the fleet. Ensure that the content of the overlaid spec files matches your desired end state. As a reference, the out/source-crs directory contains the full list of source-crs available to be included and overlaid by your PolicyGenTemplate templates. Note Depending on the specific requirements of your clusters, you might need more than a single group policy per cluster type, especially considering that the example group policies each have a single PerformancePolicy.yaml file that can only be shared across a set of clusters if those clusters consist of identical hardware configurations. Select the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, group-du-sno-ranGen.yaml . Change the content in the example file to match the desired configuration. Optional. Create a validator inform policy PolicyGenTemplate CR to signal when the GitOps ZTP installation and configuration of the deployed cluster is complete. For more information, see "Creating a validator inform policy". Define all the policy namespaces in a YAML file similar to the example out/argocd/example/policygentemplates/ns.yaml file. Important Do not include the Namespace CR in the same file with the PolicyGenTemplate CR. Add the PolicyGenTemplate CRs and Namespace CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/policygentemplates/kustomization.yaml . Commit the PolicyGenTemplate CRs, Namespace CR, and associated kustomization.yaml file in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. You can push the changes to the SiteConfig CR and the PolicyGenTemplate CR simultaneously. Additional resources Signalling ZTP cluster deployment completion with validator inform policies 19.4.5. Monitoring managed cluster policy deployment progress The ArgoCD pipeline uses PolicyGenTemplate CRs in Git to generate the RHACM policies and then sync them to the hub cluster. You can monitor the progress of the managed cluster policy synchronization after the assisted service installs OpenShift Container Platform on the managed cluster. Prerequisites You have installed the OpenShift CLI ( oc ). 
You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure The Topology Aware Lifecycle Manager (TALM) applies the configuration policies that are bound to the cluster. After the cluster installation is complete and the cluster becomes Ready , a ClusterGroupUpgrade CR corresponding to this cluster, with a list of ordered policies defined by the ran.openshift.io/ztp-deploy-wave annotations , is automatically created by the TALM. The cluster's policies are applied in the order listed in the ClusterGroupUpgrade CR. You can monitor the high-level progress of configuration policy reconciliation by using the following commands: USD export CLUSTER=<clusterName> USD oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq Example output { "lastTransitionTime": "2022-11-09T07:28:09Z", "message": "Remediating non-compliant policies", "reason": "InProgress", "status": "True", "type": "Progressing" } You can monitor the detailed cluster policy compliance status by using the RHACM dashboard or the command line. To check policy compliance by using oc , run the following command: USD oc get policies -n USDCLUSTER Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m To check policy status from the RHACM web console, perform the following actions: Click Governance → Find policies . Click on a cluster policy to check its status. When all of the cluster policies become compliant, GitOps ZTP installation and configuration for the cluster is complete. The ztp-done label is added to the cluster. In the reference configuration, the final policy that becomes compliant is the one defined in the *-du-validator-policy policy. This policy, when compliant on a cluster, ensures that all cluster configuration, Operator installation, and Operator configuration is complete. 19.4.6. Validating the generation of configuration policy CRs Policy custom resources (CRs) are generated in the same namespace as the PolicyGenTemplate from which they are created. The same troubleshooting flow applies to all policy CRs generated from a PolicyGenTemplate regardless of whether they are ztp-common , ztp-group , or ztp-site based, as shown using the following commands: USD export NS=<namespace> USD oc get policy -n USDNS The expected set of policy-wrapped CRs should be displayed. If the policies failed synchronization, use the following troubleshooting steps. Procedure To display detailed information about the policies, run the following command: USD oc describe -n openshift-gitops application policies Check for Status: Conditions: to show the error logs.
For example, setting an invalid sourceFile->fileName: generates the error shown below: Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError Check for Status: Sync: . If there are log errors at Status: Conditions: , the Status: Sync: shows Unknown or Error : Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error When Red Hat Advanced Cluster Management (RHACM) recognizes that policies apply to a ManagedCluster object, the policy CR objects are applied to the cluster namespace. Check to see if the policies were copied to the cluster namespace: USD oc get policy -n USDCLUSTER Example output: NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d Ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d RHACM copies all applicable policies into the cluster namespace. The copied policy names have the format: <policyGenTemplate.Namespace>.<policyGenTemplate.Name>-<policyName> . Check the placement rule for any policies not copied to the cluster namespace. The matchSelector in the PlacementRule for those policies should match labels on the ManagedCluster object: USD oc get placementrule -n USDNS Note the PlacementRule name appropriate for the missing policy, common, group, or site, using the following command: USD oc get placementrule -n USDNS <placementRuleName> -o yaml The status-decisions should include your cluster name. The key-value pair of the matchSelector in the spec must match the labels on your managed cluster. Check the labels on the ManagedCluster object using the following command: USD oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq Check to see which policies are compliant using the following command: USD oc get policy -n USDCLUSTER If the Namespace , OperatorGroup , and Subscription policies are compliant but the Operator configuration policies are not, it is likely that the Operators did not install on the managed cluster. This causes the Operator configuration policies to fail to apply because the CRD is not yet applied to the spoke. 19.4.7. Restarting policy reconciliation You can restart policy reconciliation when unexpected compliance issues occur, for example, when the ClusterGroupUpgrade custom resource (CR) has timed out. 
Procedure A ClusterGroupUpgrade CR is generated in the namespace ztp-install by the Topology Aware Lifecycle Manager after the managed cluster becomes Ready : USD export CLUSTER=<clusterName> USD oc get clustergroupupgrades -n ztp-install USDCLUSTER If there are unexpected issues and the policies fail to become compliant within the configured timeout (the default is 4 hours), the status of the ClusterGroupUpgrade CR shows UpgradeTimedOut : USD oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Ready")]}' A ClusterGroupUpgrade CR in the UpgradeTimedOut state automatically restarts its policy reconciliation every hour. If you have changed your policies, you can start a retry immediately by deleting the existing ClusterGroupUpgrade CR. This triggers the automatic creation of a new ClusterGroupUpgrade CR that begins reconciling the policies immediately: USD oc delete clustergroupupgrades -n ztp-install USDCLUSTER Note that when the ClusterGroupUpgrade CR completes with status UpgradeCompleted and the managed cluster has the label ztp-done applied, you can make additional configuration changes using PolicyGenTemplate . Deleting the existing ClusterGroupUpgrade CR will not make the TALM generate a new CR. At this point, GitOps ZTP has completed its interaction with the cluster and any further interactions should be treated as an update and a new ClusterGroupUpgrade CR created for remediation of the policies. Additional resources For information about using Topology Aware Lifecycle Manager (TALM) to construct your own ClusterGroupUpgrade CR, see About the ClusterGroupUpgrade CR . 19.4.8. Changing applied managed cluster CRs using policies You can remove content from a custom resource (CR) that is deployed in a managed cluster through a policy. By default, all Policy CRs created from a PolicyGenTemplate CR have the complianceType field set to musthave . A musthave policy without the removed content is still compliant because the CR on the managed cluster has all the specified content. With this configuration, when you remove content from a CR, TALM removes the content from the policy but the content is not removed from the CR on the managed cluster. With the complianceType field set to mustonlyhave , the policy ensures that the CR on the cluster is an exact match of what is specified in the policy. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have deployed a managed cluster from a hub cluster running RHACM. You have installed Topology Aware Lifecycle Manager on the hub cluster. Procedure Remove the content that you no longer need from the affected CRs. In this example, the disableDrain: false line was removed from the SriovOperatorConfig CR. Example CR apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" disableDrain: true enableInjector: true enableOperatorWebhook: true Change the complianceType of the affected policies to mustonlyhave in the group-du-sno-ranGen.yaml file. Example YAML # ... - fileName: SriovOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave # ...
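Once the change has been pushed and synchronized by Argo CD, you can optionally confirm that the regenerated policy on the hub now carries the stricter compliance type before you proceed. The namespace and policy name below follow the earlier examples in this section and are assumptions if your layout differs: $ oc get policy group-du-sno-config-policy -n ztp-group -o yaml | grep complianceType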
Create a ClusterGroupUpgrade CR and specify the clusters that must receive the CR changes: Example ClusterGroupUpgrade CR apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction: Create the ClusterGroupUpgrade CR by running the following command: USD oc create -f cgu-remove.yaml When you are ready to apply the changes, for example, during an appropriate maintenance window, change the value of the spec.enable field to true by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove \ --patch '{"spec":{"enable":true}}' --type=merge Verification Check the status of the policies by running the following command: USD oc get <kind> <changed_cr_name> Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h When the COMPLIANCE STATE of the policy is Compliant , it means that the CR is updated and the unwanted content is removed. Check that the policies are removed from the targeted clusters by running the following command on the managed clusters: USD oc get <kind> <changed_cr_name> If there are no results, the CR is removed from the managed cluster. 19.4.9. Indication of done for GitOps ZTP installations GitOps Zero Touch Provisioning (ZTP) simplifies the process of checking the GitOps ZTP installation status for a cluster. The GitOps ZTP status moves through three phases: cluster installation, cluster configuration, and GitOps ZTP done. Cluster installation phase The cluster installation phase is shown by the ManagedClusterJoined and ManagedClusterAvailable conditions in the ManagedCluster CR . If the ManagedCluster CR does not have these conditions, or the condition is set to False , the cluster is still in the installation phase. Additional details about installation are available from the AgentClusterInstall and ClusterDeployment CRs. For more information, see "Troubleshooting GitOps ZTP". Cluster configuration phase The cluster configuration phase is shown by a ztp-running label applied to the ManagedCluster CR for the cluster. GitOps ZTP done Cluster installation and configuration is complete in the GitOps ZTP done phase. This is shown by the removal of the ztp-running label and addition of the ztp-done label to the ManagedCluster CR. The ztp-done label shows that the configuration has been applied and the baseline DU configuration has completed cluster tuning. The transition to the GitOps ZTP done state is conditional on the compliant state of a Red Hat Advanced Cluster Management (RHACM) validator inform policy. This policy captures the existing criteria for a completed installation and validates that it moves to a compliant state only when GitOps ZTP provisioning of the managed cluster is complete. The validator inform policy ensures the configuration of the cluster is fully applied and Operators have completed their initialization. The policy validates the following: The target MachineConfigPool contains the expected entries and has finished updating. All nodes are available and not degraded. The SR-IOV Operator has completed initialization as indicated by at least one SriovNetworkNodeState with syncStatus: Succeeded . The PTP Operator daemon set exists. 19.5.
Manually installing a single-node OpenShift cluster with ZTP You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service. Note If you are creating multiple managed clusters, use the SiteConfig method described in Deploying far edge sites with ZTP . Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended cluster configuration for vDU application workloads . 19.5.1. Generating GitOps ZTP installation and configuration CRs manually Use the generator entrypoint for the ztp-site-generate container to generate the site installation and configuration custom resource (CRs) for a cluster based on SiteConfig and PolicyGenTemplate CRs. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create an output folder by running the following command: USD mkdir -p ./out Export the argocd directory from the ztp-site-generate container image: USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 extract /home/ztp --tar | tar x -C ./out The ./out directory has the reference PolicyGenTemplate and SiteConfig CRs in the out/argocd/example/ folder. Example output out └── argocd └── example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml Create an output folder for the site installation CRs: USD mkdir -p ./site-install Modify the example SiteConfig CR for the cluster type that you want to install. Copy example-sno.yaml to site-1-sno.yaml and modify the CR to match the details of the site and bare-metal host that you want to install, for example: # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" cpuPartitioningMode: AllNodes pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.10" sshPublicKey: "ssh-rsa AAAA..." 
clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "marketplace", "NodeTuning" ] } } clusterLabels: common: true group-du-sno: "" sites : "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" bootMode: "UEFI" rootDeviceHints: wwn: "0x11111000000asd123" # diskPartition: # - device: /dev/disk/by-id/wwn-0x11111000000asd123 # match rootDeviceHints # partitions: # - mount_point: /var/imageregistry # size: 102500 # start: 344844 ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-id/wwn-0x11111000000asd123", "wipeTable": false, "partitions": [ { "sizeMiB": 16, "label": "httpevent1", "startMiB": 350000 }, { "sizeMiB": 16, "label": "httpevent2", "startMiB": 350016 } ] } ], "filesystem": [ { "device": "/dev/disk/by-partlabel/httpevent1", "format": "xfs", "wipeFilesystem": true }, { "device": "/dev/disk/by-partlabel/httpevent2", "format": "xfs", "wipeFilesystem": true } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Generate the Day 0 installation CRs by processing the modified SiteConfig CR site-1-sno.yaml by running the following command: USD podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 generator install site-1-sno.yaml /output Example output site-install └── site-1-sno ├── site-1_agentclusterinstall_example-sno.yaml ├── site-1-sno_baremetalhost_example-node1.example.com.yaml ├── site-1-sno_clusterdeployment_example-sno.yaml ├── site-1-sno_configmap_example-sno.yaml ├── site-1-sno_infraenv_example-sno.yaml ├── site-1-sno_klusterletaddonconfig_example-sno.yaml ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml ├── site-1-sno_managedcluster_example-sno.yaml ├── site-1-sno_namespace_example-sno.yaml └── site-1-sno_nmstateconfig_example-node1.example.com.yaml Optional: Generate just the Day 0 MachineConfig installation CRs for a particular cluster type by processing the reference SiteConfig CR with the -E option.
For example, run the following commands: Create an output folder for the MachineConfig CRs: USD mkdir -p ./site-machineconfig Generate the MachineConfig installation CRs: USD podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 generator install -E site-1-sno.yaml /output Example output site-machineconfig └── site-1-sno ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml Generate and export the Day 2 configuration CRs using the reference PolicyGenTemplate CRs from the step. Run the following commands: Create an output folder for the Day 2 CRs: USD mkdir -p ./ref Generate and export the Day 2 configuration CRs: USD podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 generator config -N . /output The command generates example group and site-specific PolicyGenTemplate CRs for single-node OpenShift, three-node clusters, and standard clusters in the ./ref folder. Example output ref └── customResource ├── common ├── example-multinode-site ├── example-sno ├── group-du-3node ├── group-du-3node-validator │ └── Multiple-validatorCRs ├── group-du-sno ├── group-du-sno-validator ├── group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs Use the generated CRs as the basis for the CRs that you use to install the cluster. You apply the installation CRs to the hub cluster as described in "Installing a single managed cluster". The configuration CRs can be applied to the cluster after cluster installation is complete. Additional resources Workload partitioning BMC addressing About root device hints Single-node OpenShift SiteConfig CR installation reference 19.5.2. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 19.5.3. 
Configuring Discovery ISO kernel arguments for manual installations using GitOps ZTP The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.13, you can only add kernel arguments. You can not replace or delete kernel arguments. Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. You have manually generated the installation and configuration custom resources (CRs). Procedure Edit the spec.kernelArguments specification in the InfraEnv CR to configure kernel arguments: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 clusterRef: name: <cluster_name> namespace: <cluster_name> pullSecretRef: name: pull-secret 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Note The SiteConfig CR generates the InfraEnv resource as part of the day-0 installation CRs. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 19.5.4. Installing a single managed cluster You can manually deploy a single managed cluster using the assisted service and Red Hat Advanced Cluster Management (RHACM). Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created the baseboard management controller (BMC) Secret and the image pull-secret Secret custom resources (CRs). See "Creating the managed bare-metal host secrets" for details. Your target bare-metal host meets the networking and hardware requirements for managed clusters. Procedure Create a ClusterImageSet for each specific cluster version to be deployed, for example clusterImageSet-4.13.yaml . A ClusterImageSet has the following format: apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.13.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64 2 1 The descriptive version that you want to deploy. 2 Specifies the releaseImage to deploy and determines the operating system image version. The discovery ISO is based on the image version as set by releaseImage , or the latest version if the exact version is unavailable. 
Apply the clusterImageSet CR: USD oc apply -f clusterImageSet-4.13.yaml Create the Namespace CR in the cluster-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2 1 2 The name of the managed cluster to provision. Apply the Namespace CR by running the following command: USD oc apply -f cluster-namespace.yaml Apply the generated day-0 CRs that you extracted from the ztp-site-generate container and customized to meet your requirements: USD oc apply -R ./site-install/site-sno-1 Additional resources Connectivity prerequisites for managed cluster networks Deploying LVM Storage on single-node OpenShift clusters Configuring LVM Storage using PolicyGenTemplate CRs 19.5.5. Monitoring the managed cluster installation status Ensure that cluster provisioning was successful by checking the cluster status. Prerequisites All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster. Procedure Check the status of the managed cluster: USD oc get managedcluster True indicates the managed cluster is ready. Check the agent status: USD oc get agent -n <cluster_name> Use the describe command to provide an in-depth description of the agent's condition. Statuses to be aware of include BackendError , InputError , ValidationsFailing , InstallationFailed , and AgentIsConnected . These statuses are relevant to the Agent and AgentClusterInstall custom resources. USD oc describe agent -n <cluster_name> Check the cluster provisioning status: USD oc get agentclusterinstall -n <cluster_name> Use the describe command to provide an in-depth description of the cluster provisioning status: USD oc describe agentclusterinstall -n <cluster_name> Check the status of the managed cluster's add-on services: USD oc get managedclusteraddon -n <cluster_name> Retrieve the authentication information of the kubeconfig file for the managed cluster: USD oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig 19.5.6. Troubleshooting the managed cluster Use this procedure to diagnose any installation issues that might occur with the managed cluster. Procedure Check the status of the managed cluster: USD oc get managedcluster Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h If the status in the AVAILABLE column is True , the managed cluster is being managed by the hub. If the status in the AVAILABLE column is Unknown , the managed cluster is not being managed by the hub. Use the following steps to continue checking to get more information. Check the AgentClusterInstall install status: USD oc get clusterdeployment -n <cluster_name> Example output NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h If the status in the INSTALLED column is false , the installation was unsuccessful. If the installation failed, enter the following command to review the status of the AgentClusterInstall resource: USD oc describe agentclusterinstall -n <cluster_name> <cluster_name> Resolve the errors and reset the cluster: Remove the cluster's managed cluster resource: USD oc delete managedcluster <cluster_name> Remove the cluster's namespace: USD oc delete namespace <cluster_name> This deletes all of the namespace-scoped custom resources created for this cluster. 
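The ManagedCluster CR deleted in the earlier step can take a short while to finalize. If you prefer to block until it is fully gone rather than polling manually, one possible approach (the 10 minute timeout is an assumption, not a documented value) is: $ oc wait --for=delete managedcluster/<cluster_name> --timeout=10m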
You must wait for the ManagedCluster CR deletion to complete before proceeding. Recreate the custom resources for the managed cluster. 19.5.7. RHACM generated cluster installation CRs reference Red Hat Advanced Cluster Management (RHACM) supports deploying OpenShift Container Platform on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate using SiteConfig CRs for each site. Note Every managed cluster has its own namespace, and all of the installation CRs except for ManagedCluster and ClusterImageSet are under that namespace. ManagedCluster and ClusterImageSet are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name. The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters using the SiteConfig CRs that you configure. Table 19.7. Cluster installation CRs generated by RHACM CR Description Usage BareMetalHost Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol. InfraEnv Contains information for installing OpenShift Container Platform on the target bare-metal host. Used with ClusterDeployment to generate the discovery ISO for the managed cluster. AgentClusterInstall Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster kubeconfig and credentials when the installation is complete. Specifies the managed cluster configuration information and provides status during the installation of the cluster. ClusterDeployment References the AgentClusterInstall CR to use. Used with InfraEnv to generate the discovery ISO for the managed cluster. NMStateConfig Provides network configuration information such as MAC address to IP mapping, DNS server, default route, and other network settings. Sets up a static IP address for the managed cluster's Kube API server. Agent Contains hardware information about the target bare-metal host. Created automatically on the hub when the target machine's discovery image boots. ManagedCluster When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. The hub uses this resource to manage and show the status of managed clusters. KlusterletAddonConfig Contains the list of services provided by the hub to be deployed to the ManagedCluster resource. Tells the hub which addon services to deploy to the ManagedCluster resource. Namespace Logical space for ManagedCluster resources existing on the hub. Unique per site. Propagates resources to the ManagedCluster . Secret Two CRs are created: BMC Secret and Image Pull Secret . BMC Secret authenticates into the target bare-metal host using its username and password. Image Pull Secret contains authentication information for the OpenShift Container Platform image installed on the target bare-metal host. ClusterImageSet Contains OpenShift Container Platform image information such as the repository and image name. Passed into resources to provide OpenShift Container Platform images. 19.6. 
Recommended single-node OpenShift cluster configuration for vDU application workloads Use the following reference information to understand the single-node OpenShift configurations required to deploy virtual distributed unit (vDU) applications in the cluster. Configurations include cluster optimizations for high performance workloads, enabling workload partitioning, and minimizing the number of reboots required postinstallation. Additional resources To deploy a single cluster by hand, see Manually installing a single-node OpenShift cluster with GitOps ZTP . To deploy a fleet of clusters using GitOps Zero Touch Provisioning (ZTP), see Deploying far edge sites with GitOps ZTP . 19.6.1. Running low latency applications on OpenShift Container Platform OpenShift Container Platform enables low latency processing for applications running on commercial off-the-shelf (COTS) hardware by using several technologies and specialized hardware devices: Real-time kernel for RHCOS Ensures workloads are handled with a high degree of process determinism. CPU isolation Avoids CPU scheduling delays and ensures CPU capacity is available consistently. NUMA-aware topology management Aligns memory and huge pages with CPU and PCI devices to pin guaranteed container memory and huge pages to the non-uniform memory access (NUMA) node. Pod resources for all Quality of Service (QoS) classes stay on the same NUMA node. This decreases latency and improves performance of the node. Huge pages memory management Using huge page sizes improves system performance by reducing the amount of system resources required to access page tables. Precision timing synchronization using PTP Allows synchronization between nodes in the network with sub-microsecond accuracy. 19.6.2. Recommended cluster host requirements for vDU application workloads Running vDU application workloads requires a bare-metal host with sufficient resources to run OpenShift Container Platform services and production workloads. Table 19.8. Minimum resource requirements Profile vCPU Memory Storage Minimum 4 to 8 vCPU 32GB of RAM 120GB Note One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core: (threads per core x cores) x sockets = vCPUs Important The server must have a Baseboard Management Controller (BMC) when booting with virtual media. 19.6.3. Configuring host firmware for low latency and high performance Bare-metal hosts require the firmware to be configured before the host can be provisioned. The firmware configuration is dependent on the specific hardware and the particular requirements of your installation. Procedure Set the UEFI/BIOS Boot Mode to UEFI . In the host boot sequence order, set Hard drive first . Apply the specific firmware configuration for your hardware. The following table describes a representative firmware configuration for an Intel Xeon Skylake or Intel Cascade Lake server, based on the Intel FlexRAN 4G and 5G baseband PHY reference design. Important The exact firmware configuration depends on your specific hardware and network requirements. The following sample configuration is for illustrative purposes only. Table 19.9. 
Sample firmware configuration for an Intel Xeon Skylake or Cascade Lake server Firmware setting Configuration CPU Power and Performance Policy Performance Uncore Frequency Scaling Disabled Performance P-limit Disabled Enhanced Intel SpeedStep (R) Tech Enabled Intel Configurable TDP Enabled Configurable TDP Level Level 2 Intel(R) Turbo Boost Technology Enabled Energy Efficient Turbo Disabled Hardware P-States Disabled Package C-State C0/C1 state C1E Disabled Processor C6 Disabled Note Enable global SR-IOV and VT-d settings in the firmware for the host. These settings are relevant to bare-metal environments. 19.6.4. Connectivity prerequisites for managed cluster networks Before you can install and provision a managed cluster with the GitOps Zero Touch Provisioning (ZTP) pipeline, the managed cluster host must meet the following networking prerequisites: There must be bi-directional connectivity between the GitOps ZTP container in the hub cluster and the Baseboard Management Controller (BMC) of the target bare-metal host. The managed cluster must be able to resolve and reach the API hostname of the hub hostname and *.apps hostname. Here is an example of the API hostname of the hub and *.apps hostname: api.hub-cluster.internal.domain.com console-openshift-console.apps.hub-cluster.internal.domain.com The hub cluster must be able to resolve and reach the API and *.apps hostname of the managed cluster. Here is an example of the API hostname of the managed cluster and *.apps hostname: api.sno-managed-cluster-1.internal.domain.com console-openshift-console.apps.sno-managed-cluster-1.internal.domain.com 19.6.5. Workload partitioning in single-node OpenShift with GitOps ZTP Workload partitioning configures OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved number of host CPUs. To configure workload partitioning with GitOps Zero Touch Provisioning (ZTP), you configure a cpuPartitioningMode field in the SiteConfig custom resource (CR) that you use to install the cluster and you apply a PerformanceProfile CR that configures the isolated and reserved CPUs on the host. Configuring the SiteConfig CR enables workload partitioning at cluster installation time and applying the PerformanceProfile CR configures the specific allocation of CPUs to reserved and isolated sets. Both of these steps happen at different points during cluster provisioning. Note Configuring workload partitioning by using the cpuPartitioningMode field in the SiteConfig CR is a Tech Preview feature in OpenShift Container Platform 4.13. Alternatively, you can specify cluster management CPU resources with the cpuset field of the SiteConfig custom resource (CR) and the reserved field of the group PolicyGenTemplate CR. The GitOps ZTP pipeline uses these values to populate the required fields in the workload partitioning MachineConfig CR ( cpuset ) and the PerformanceProfile CR ( reserved ) that configure the single-node OpenShift cluster. This method is a General Availability feature in OpenShift Container Platform 4.14. The workload partitioning configuration pins the OpenShift Container Platform infrastructure pods to the reserved CPU set. Platform services such as systemd, CRI-O, and kubelet run on the reserved CPU set. The isolated CPU sets are exclusively allocated to your container workloads. Isolating CPUs ensures that the workload has guaranteed access to the specified CPUs without contention from other applications running on the same node. 
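The split between reserved and isolated CPUs is ultimately declared in the PerformanceProfile CR that is applied as part of the group configuration. The following minimal sketch is for illustration only; the CPU ranges are assumptions (they match the 0-1,52-53 reserved set shown in the verification output that follows) and are not a sizing recommendation: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: cpu: reserved: "0-1,52-53" # platform services (systemd, CRI-O, kubelet) are pinned here isolated: "2-51,54-103" # exclusively available to application workloads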
All CPUs that are not isolated should be reserved. Important Ensure that reserved and isolated CPU sets do not overlap with each other. Additional resources For the recommended single-node OpenShift workload partitioning configuration, see Workload partitioning . 19.6.6. Recommended cluster install manifests The ZTP pipeline applies the following custom resources (CRs) during cluster installation. These configuration CRs ensure that the cluster meets the feature and performance requirements necessary for running a vDU application. Note When using the GitOps ZTP plugin and SiteConfig CRs for cluster deployment, the following MachineConfig CRs are included by default. Use the SiteConfig extraManifests filter to alter the CRs that are included by default. For more information, see Advanced managed cluster configuration with SiteConfig CRs . 19.6.6.1. Workload partitioning Single-node OpenShift clusters that run DU workloads require workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU core for application payloads. Note Workload partitioning can be enabled during cluster installation only. You cannot disable workload partitioning postinstallation. You can however change the set of CPUs assigned to the isolated and reserved sets through the PerformanceProfile CR. Changes to CPU settings cause the node to reboot. Upgrading from OpenShift Container Platform 4.12 to 4.13+ When transitioning to using cpuPartitioningMode for enabling workload partitioning, remove the workload partitioning MachineConfig CRs from the /extra-manifest folder that you use to provision the cluster. Recommended SiteConfig CR configuration for workload partitioning apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "<site_name>" namespace: "<site_name>" spec: baseDomain: "example.com" cpuPartitioningMode: AllNodes 1 1 Set the cpuPartitioningMode field to AllNodes to configure workload partitioning for all nodes in the cluster. Verification Check that the applications and cluster system CPU pinning is correct. Run the following commands: Open a remote shell prompt to the managed cluster: USD oc debug node/example-sno-1 Check that the OpenShift infrastructure applications CPU pinning is correct: sh-4.4# pgrep ovn | while read i; do taskset -cp USDi; done Example output pid 8481's current affinity list: 0-1,52-53 pid 8726's current affinity list: 0-1,52-53 pid 9088's current affinity list: 0-1,52-53 pid 9945's current affinity list: 0-1,52-53 pid 10387's current affinity list: 0-1,52-53 pid 12123's current affinity list: 0-1,52-53 pid 13313's current affinity list: 0-1,52-53 Check that the system applications CPU pinning is correct: sh-4.4# pgrep systemd | while read i; do taskset -cp USDi; done Example output pid 1's current affinity list: 0-1,52-53 pid 938's current affinity list: 0-1,52-53 pid 962's current affinity list: 0-1,52-53 pid 1197's current affinity list: 0-1,52-53 19.6.6.2. Reduced platform management footprint To reduce the overall management footprint of the platform, a MachineConfig custom resource (CR) is required that places all Kubernetes-specific mount points in a new namespace separate from the host operating system. The following base64-encoded example MachineConfig CR illustrates this configuration. Recommended container mount namespace configuration (01-container-mount-ns-and-kubelet-conf-master.yaml) # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 
30-kubelet-interval-tuning.conf name: kubelet.service 19.6.6.3. SCTP Stream Control Transmission Protocol (SCTP) is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable this protocol. Recommended SCTP configuration (03-sctp-machine-config-master.yaml) # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 19.6.6.4. Accelerated container startup The following MachineConfig CR configures core OpenShift processes and containers to use all available CPU cores during system startup and shutdown. This accelerates the system recovery during initial boot and reboots. Recommended accelerated container startup configuration (04-accelerated-container-startup-master.yaml) # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 04-accelerated-container-startup-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,#!/bin/bash
#
# Temporarily reset the core system processes's CPU affinity to be unrestricted to accelerate startup and shutdown
#
# The defaults below can be overridden via environment variables
#

# The default set of critical processes whose affinity should be temporarily unbound:
CRITICAL_PROCESSES=${CRITICAL_PROCESSES:-"crio kubelet NetworkManager conmon dbus"}

# Default wait time is 600s = 10m:
MAXIMUM_WAIT_TIME=${MAXIMUM_WAIT_TIME:-600}

# Default steady-state threshold = 2%
# Allowed values:
#  4  - absolute pod count (+/-)
#  4% - percent change (+/-)
#  -1 - disable the steady-state check
STEADY_STATE_THRESHOLD=${STEADY_STATE_THRESHOLD:-2%}

# Default steady-state window = 60s
# If the running pod count stays within the given threshold for this time
# period, return CPU utilization to normal before the maximum wait time has
# expires
STEADY_STATE_WINDOW=${STEADY_STATE_WINDOW:-60}

# Default steady-state allows any pod count to be "steady state"
# Increasing this will skip any steady-state checks until the count rises above
# this number to avoid false positives if there are some periods where the
# count doesn't increase but we know we can't be at steady-state yet.
STEADY_STATE_MINIMUM=${STEADY_STATE_MINIMUM:-0}

#######################################################

KUBELET_CPU_STATE=/var/lib/kubelet/cpu_manager_state
FULL_CPU_STATE=/sys/fs/cgroup/cpuset/cpuset.cpus
KUBELET_CONF=/etc/kubernetes/kubelet.conf
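# unrestrictedCpuset: return the widest CPU set the host can use right now --
# the kubelet's defaultCpuSet merged with reservedSystemCPUs, or every online
# CPU if the kubelet CPU manager state does not exist yet.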
unrestrictedCpuset() {
  local cpus
  if [[ -e $KUBELET_CPU_STATE ]]; then
    cpus=$(jq -r '.defaultCpuSet' <$KUBELET_CPU_STATE)
    if [[ -n "${cpus}" && -e ${KUBELET_CONF} ]]; then
      reserved_cpus=$(jq -r '.reservedSystemCPUs' </etc/kubernetes/kubelet.conf)
      if [[ -n "${reserved_cpus}" ]]; then
        # Use taskset to merge the two cpusets
        cpus=$(taskset -c "${reserved_cpus},${cpus}" grep -i Cpus_allowed_list /proc/self/status | awk '{print $2}')
      fi
    fi
  fi
  if [[ -z $cpus ]]; then
    # fall back to using all cpus if the kubelet state is not configured yet
    [[ -e $FULL_CPU_STATE ]] || return 1
    cpus=$(<$FULL_CPU_STATE)
  fi
  echo $cpus
}

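# restrictedCpuset: return the normally restricted management CPU set, taken
# from the systemd.cpu_affinity= argument on the kernel command line.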
restrictedCpuset() {
  for arg in $(</proc/cmdline); do
    if [[ $arg =~ ^systemd.cpu_affinity= ]]; then
      echo ${arg#*=}
      return 0
    fi
  done
  return 1
}

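# resetAffinity: re-pin every PID of the critical processes to the given cpuset
# with taskset, logging how many PIDs were updated and how many failed.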
resetAffinity() {
  local cpuset="$1"
  local failcount=0
  local successcount=0
  logger "Recovery: Setting CPU affinity for critical processes \"$CRITICAL_PROCESSES\" to $cpuset"
  for proc in $CRITICAL_PROCESSES; do
    local pids="$(pgrep $proc)"
    for pid in $pids; do
      local tasksetOutput
      tasksetOutput="$(taskset -apc "$cpuset" $pid 2>&1)"
      if [[ $? -ne 0 ]]; then
        echo "ERROR: $tasksetOutput"
        ((failcount++))
      else
        ((successcount++))
      fi
    done
  done

  logger "Recovery: Re-affined $successcount pids successfully"
  if [[ $failcount -gt 0 ]]; then
    logger "Recovery: Failed to re-affine $failcount processes"
    return 1
  fi
}

setUnrestricted() {
  logger "Recovery: Setting critical system processes to have unrestricted CPU access"
  resetAffinity "$(unrestrictedCpuset)"
}

setRestricted() {
  logger "Recovery: Resetting critical system processes back to normally restricted access"
  resetAffinity "$(restrictedCpuset)"
}

currentAffinity() {
  local pid="$1"
  taskset -pc $pid | awk -F': ' '{print $2}'
}

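# within: succeeds when the change from last to current is inside the threshold, which may be an absolute count or a percentage (suffix %)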
within() {
  local last=$1 current=$2 threshold=$3
  local delta=0 pchange
  delta=$(( current - last ))
  if [[ $current -eq $last ]]; then
    pchange=0
  elif [[ $last -eq 0 ]]; then
    pchange=1000000
  else
    pchange=$(( ( $delta * 100) / last ))
  fi
  echo -n "last:$last current:$current delta:$delta pchange:${pchange}%: "
  local absolute limit
  case $threshold in
    *%)
      absolute=${pchange##-} # absolute value
      limit=${threshold%%%}
      ;;
    *)
      absolute=${delta##-} # absolute value
      limit=$threshold
      ;;
  esac
  if [[ $absolute -le $limit ]]; then
    echo "within (+/-)$threshold"
    return 0
  else
    echo "outside (+/-)$threshold"
    return 1
  fi
}

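# steadystate: skips the check until the pod count reaches STEADY_STATE_MINIMUM, then compares the change against STEADY_STATE_THRESHOLD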
steadystate() {
  local last=$1 current=$2
  if [[ $last -lt $STEADY_STATE_MINIMUM ]]; then
    echo "last:$last current:$current Waiting to reach $STEADY_STATE_MINIMUM before checking for steady-state"
    return 1
  fi
  within $last $current $STEADY_STATE_THRESHOLD
}

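# waitForReady: keeps the critical processes unrestricted, re-applying the affinity whenever it changes, until the container
# count stays steady for STEADY_STATE_WINDOW seconds or MAXIMUM_WAIT_TIME is reached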
waitForReady() {
  logger "Recovery: Waiting ${MAXIMUM_WAIT_TIME}s for the initialization to complete"
  local lastSystemdCpuset="$(currentAffinity 1)"
  local lastDesiredCpuset="$(unrestrictedCpuset)"
  local t=0 s=10
  local lastCcount=0 ccount=0 steadyStateTime=0
  while [[ $t -lt $MAXIMUM_WAIT_TIME ]]; do
    sleep $s
    ((t += s))
    # Re-check the current affinity of systemd, in case some other process has changed it
    local systemdCpuset="$(currentAffinity 1)"
    # Re-check the unrestricted Cpuset, as the allowed set of unreserved cores may change as pods are assigned to cores
    local desiredCpuset="$(unrestrictedCpuset)"
    if [[ $systemdCpuset != $lastSystemdCpuset || $lastDesiredCpuset != $desiredCpuset ]]; then
      resetAffinity "$desiredCpuset"
      lastSystemdCpuset="$(currentAffinity 1)"
      lastDesiredCpuset="$desiredCpuset"
    fi

    # Detect steady-state pod count
    ccount=$(crictl ps | wc -l)
    if steadystate $lastCcount $ccount; then
      ((steadyStateTime += s))
      echo "Steady-state for ${steadyStateTime}s/${STEADY_STATE_WINDOW}s"
      if [[ $steadyStateTime -ge $STEADY_STATE_WINDOW ]]; then
        logger "Recovery: Steady-state (+/- $STEADY_STATE_THRESHOLD) for ${STEADY_STATE_WINDOW}s: Done"
        return 0
      fi
    else
      if [[ $steadyStateTime -gt 0 ]]; then
        echo "Resetting steady-state timer"
        steadyStateTime=0
      fi
    fi
    lastCcount=$ccount
  done
  logger "Recovery: Recovery Complete Timeout"
}

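# main: runs only when both an unrestricted and a restricted cpuset can be determined; the EXIT trap restores the restricted affinity when the script ends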
main() {
  if ! unrestrictedCpuset >&/dev/null; then
    logger "Recovery: No unrestricted Cpuset could be detected"
    return 1
  fi

  if ! restrictedCpuset >&/dev/null; then
    logger "Recovery: No restricted Cpuset has been configured.  We are already running unrestricted."
    return 0
  fi

  # Ensure we reset the CPU affinity when we exit this script for any reason
  # This way either after the timer expires or after the process is interrupted
  # via ^C or SIGTERM, we return things back to the way they should be.
  trap setRestricted EXIT

  logger "Recovery: Recovery Mode Starting"
  setUnrestricted
  waitForReady
}

if [[ "${BASH_SOURCE[0]}" = "${0}" ]]; then
  main "${@}"
  exit $?
fi
 mode: 493 path: /usr/local/bin/accelerated-container-startup.sh systemd: units: - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container startup [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: accelerated-container-startup.service - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container shutdown DefaultDependencies=no [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=-1 # Steady-state window = 60s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=60 [Install] WantedBy=shutdown.target reboot.target halt.target enabled: true name: accelerated-container-shutdown.service 19.6.6.5. Automatic kernel crash dumps with kdump kdump is a Linux kernel feature that creates a kernel crash dump when the kernel crashes. kdump is enabled with the following MachineConfig CRs. Recommended MachineConfig to remove ice driver (05-kdump-config-master.yaml) # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh Recommended kdump configuration (06-kdump-master.yaml) # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 19.6.6.6. Disable automatic CRI-O cache wipe After an uncontrolled host shutdown or cluster reboot, CRI-O automatically deletes the entire CRI-O cache, causing all images to be pulled from the registry when the node reboots. This can result in unacceptably slow recovery times or recovery failures. To prevent this from happening in single-node OpenShift clusters that you install with GitOps ZTP, disable the CRI-O delete cache feature during cluster installation. Recommended MachineConfig CR to disable CRI-O cache wipe on control plane nodes (99-crio-disable-wipe-master.yaml) # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml Recommended MachineConfig CR to disable CRI-O cache wipe on worker nodes (99-crio-disable-wipe-worker.yaml) # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml 19.6.6.7. Configuring crun as the default container runtime The following ContainerRuntimeConfig custom resources (CRs) configure crun as the default OCI container runtime for control plane and worker nodes. The crun container runtime is fast and lightweight and has a low memory footprint. Important For optimal performance, enable crun for control plane and worker nodes in single-node OpenShift, three-node OpenShift, and standard clusters. To avoid the cluster rebooting when the CR is applied, apply the change as a GitOps ZTP additional Day 0 install-time manifest. Recommended ContainerRuntimeConfig CR for control plane nodes (enable-crun-master.yaml) apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: "" containerRuntimeConfig: defaultRuntime: crun Recommended ContainerRuntimeConfig CR for worker nodes (enable-crun-worker.yaml) apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" containerRuntimeConfig: defaultRuntime: crun 19.6.7. Recommended postinstallation cluster configurations When the cluster installation is complete, the ZTP pipeline applies the following custom resources (CRs) that are required to run DU workloads. Note In GitOps ZTP v4.10 and earlier, you configure UEFI secure boot with a MachineConfig CR. This is no longer required in GitOps ZTP v4.11 and later. In v4.11, you configure UEFI secure boot for single-node OpenShift clusters by updating the spec.clusters.nodes.bootMode field in the SiteConfig CR that you use to install the cluster. For more information, see Deploying a managed cluster with SiteConfig and GitOps ZTP . 19.6.7.1. 
Operator namespaces and Operator groups Single-node OpenShift clusters that run DU workloads require the following OperatorGroup and Namespace custom resources (CRs): Local Storage Operator Logging Operator PTP Operator SR-IOV Network Operator The following CRs are required: Recommended Storage Operator Namespace and OperatorGroup configuration --- apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage Recommended Cluster Logging Operator Namespace and OperatorGroup configuration --- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging Recommended PTP Operator Namespace and OperatorGroup configuration --- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp Recommended SR-IOV Operator Namespace and OperatorGroup configuration --- apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator 19.6.7.2. Operator subscriptions Single-node OpenShift clusters that run DU workloads require the following Subscription CRs. The subscription provides the location to download the following Operators: Local Storage Operator Logging Operator PTP Operator SR-IOV Network Operator For each Operator subscription, specify the channel to get the Operator from. The recommended channel is stable . You can specify Manual or Automatic updates. In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only when they are explicitly approved. Note Use Manual mode for subscriptions. This allows you to control the timing of Operator updates to fit within planned/scheduled maintenance windows. 
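With Manual approval, each new Operator version creates an InstallPlan that remains pending until it is approved. As a minimal sketch of approving a pending install plan with the oc CLI during a maintenance window (the namespace and install plan name are placeholders and will differ in your cluster):
oc get installplan -n <operator_namespace>
oc patch installplan <install_plan_name> -n <operator_namespace> --type merge --patch '{"spec":{"approved":true}}'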
Recommended Local Storage Operator subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: "stable" name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Recommended SR-IOV Operator subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: "stable" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Recommended PTP Operator subscription --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: "stable" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Recommended Cluster Logging Operator subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: "stable" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 19.6.7.3. Cluster logging and log forwarding Single-node OpenShift clusters that run DU workloads require logging and log forwarding for debugging. The following ClusterLogging and ClusterLogForwarder custom resources (CRs) are required. Recommended cluster logging and log forwarding configuration apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: "Managed" curation: type: "curator" curator: schedule: "30 3 * * *" collection: logs: type: "fluentd" fluentd: {} Recommended log forwarding configuration apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - type: "kafka" name: kafka-open url: tcp://10.46.55.190:9092/test inputs: - name: infra-logs infrastructure: {} pipelines: - name: audit-logs inputRefs: - audit outputRefs: - kafka-open - name: infrastructure-logs inputRefs: - infrastructure outputRefs: - kafka-open Set the spec.outputs.url field to the URL of the Kafka server where the logs are forwarded to. 19.6.7.4. Performance profile Single-node OpenShift clusters that run DU workloads require a Node Tuning Operator performance profile to use real-time host capabilities and services. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. The following example PerformanceProfile CR illustrates the required single-node OpenShift cluster configuration. 
Recommended performance profile configuration apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "module_blacklist=irdma" cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G node: 0 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" nodeSelector: node-role.kubernetes.io/master: '' numa: topologyPolicy: "restricted" realTimeKernel: enabled: true workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false Table 19.10. PerformanceProfile CR options for single-node OpenShift clusters PerformanceProfile CR field Description metadata.name Ensure that name matches the following fields set in related GitOps ZTP custom resources (CRs): include=openshift-node-performance-USD{PerformanceProfile.metadata.name} in TunedPerformancePatch.yaml name: 50-performance-USD{PerformanceProfile.metadata.name} in validatorCRs/informDuValidator.yaml spec.additionalKernelArgs "efi=runtime" Configures UEFI secure boot for the cluster host. spec.cpu.isolated Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match. Important The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause an undefined behaviour in the system. spec.cpu.reserved Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved. spec.hugepages.pages Set the number of huge pages ( count ) Set the huge pages size ( size ). Set node to the NUMA node where the hugepages are allocated ( node ) spec.realTimeKernel Set enabled to true to use the realtime kernel. spec.workloadHints Use workloadHints to define the set of top level flags for different type of workloads. The example configuration configures the cluster for low latency and high performance. 19.6.7.5. Configuring cluster time synchronization Run a one-time system time synchronization job for control plane or worker nodes. 
Recommended one time time-sync for control plane nodes ( 99-sync-time-once-master.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service Recommended one time time-sync for worker nodes ( 99-sync-time-once-worker.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 19.6.7.6. PTP Single-node OpenShift clusters use Precision Time Protocol (PTP) for network time synchronization. The following example PtpConfig CR illustrates the required PTP configuration for ordinary clocks. The exact configuration you apply will depend on the node hardware and specific use case. Recommended PTP configuration apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp spec: profile: - name: "ordinary" # The interface name is hardware-specific interface: ens5f0 ptp4lOpts: "-2 -s" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware 
tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "ordinary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" 19.6.7.7. Extended Tuned profile Single-node OpenShift clusters that run DU workloads require additional performance tuning configurations necessary for high-performance workloads. The following example Tuned CR extends the Tuned profile: Recommended extended Tuned profile configuration apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator spec: profile: - name: performance-patch data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "master" priority: 19 profile: performance-patch Table 19.11. Tuned CR options for single-node OpenShift clusters Tuned CR field Description spec.profile.data The include line that you set in spec.profile.data must match the associated PerformanceProfile CR name. For example, include=openshift-node-performance-USD{PerformanceProfile.metadata.name} . When using the non-realtime kernel, remove the timer_migration override line from the [sysctl] section. 19.6.7.8. SR-IOV Single root I/O virtualization (SR-IOV) is commonly used to enable fronthaul and midhaul networks. The following YAML example configures SR-IOV for a single-node OpenShift cluster. Note The configuration of the SriovNetwork CR will vary depending on your specific network and infrastructure requirements. Recommended SriovOperatorConfig configuration apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: "node-role.kubernetes.io/master": "" enableInjector: true enableOperatorWebhook: true Table 19.12. SriovOperatorConfig CR options for single-node OpenShift clusters SriovOperatorConfig CR field Description spec.enableInjector Disable Injector pods to reduce the number of management pods. Start with the Injector pods enabled, and only disable them after verifying the user manifests. If the injector is disabled, containers that use SR-IOV resources must explicitly assign them in the requests and limits section of the container spec. For example: containers: - name: my-sriov-workload-container resources: limits: openshift.io/<resource_name>: "1" requests: openshift.io/<resource_name>: "1" spec.enableOperatorWebhook Disable OperatorWebhook pods to reduce the number of management pods. Start with the OperatorWebhook pods enabled, and only disable them after verifying the user manifests. Recommended SriovNetwork configuration apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "" namespace: openshift-sriov-network-operator spec: resourceName: "du_mh" networkNamespace: openshift-sriov-network-operator vlan: "150" spoofChk: "" ipam: "" linkState: "" maxTxRate: "" minTxRate: "" vlanQoS: "" trust: "" capabilities: "" Table 19.13. 
SriovNetwork CR options for single-node OpenShift clusters SriovNetwork CR field Description spec.vlan Configure vlan with the VLAN for the midhaul network. Recommended SriovNetworkNodePolicy configuration apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: # Attributes for Mellanox/Intel based NICs deviceType: netdevice/vfio-pci isRdma: true/false nicSelector: # The exact physical function name must match the hardware used pfNames: [ens7f0] nodeSelector: node-role.kubernetes.io/master: "" numVfs: 8 priority: 10 resourceName: du_mh Table 19.14. SriovNetworkPolicy CR options for single-node OpenShift clusters SriovNetworkNodePolicy CR field Description spec.deviceType Configure deviceType as vfio-pci or netdevice . spec.nicSelector.pfNames Specifies the interface connected to the fronthaul network. spec.numVfs Specifies the number of VFs for the fronthaul network. 19.6.7.9. Console Operator Use the cluster capabilities feature to prevent the Console Operator from being installed. When the node is centrally managed it is not needed. Removing the Operator provides additional space and capacity for application workloads. To disable the Console Operator during the installation of the managed cluster, set the following in the spec.clusters.0.installConfigOverrides field of the SiteConfig custom resource (CR): installConfigOverrides: "{\"capabilities\":{\"baselineCapabilitySet\": \"None\" }}" 19.6.7.10. Alertmanager Single-node OpenShift clusters that run DU workloads require reduced CPU resources consumed by the OpenShift Container Platform monitoring components. The following ConfigMap custom resource (CR) disables Alertmanager. Recommended cluster monitoring configuration (ReduceMonitoringFootprint.yaml) apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: ran.openshift.io/ztp-deploy-wave: "1" data: config.yaml: | alertmanagerMain: enabled: false prometheusK8s: retention: 24h 19.6.7.11. Operator Lifecycle Manager Single-node OpenShift clusters that run distributed unit workloads require consistent access to CPU resources. Operator Lifecycle Manager (OLM) collects performance data from Operators at regular intervals, resulting in an increase in CPU utilisation. The following ConfigMap custom resource (CR) disables the collection of Operator performance data by OLM. Recommended cluster OLM configuration ( ReduceOLMFootprint.yaml ) apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager data: pprof-config.yaml: | disabled: True 19.6.7.12. LVM Storage You can dynamically provision local storage on single-node OpenShift clusters with Logical Volume Manager (LVM) Storage. Note The recommended storage solution for single-node OpenShift is the Local Storage Operator. Alternatively, you can use LVM Storage but it requires additional CPU resources to be allocated. The following YAML example configures the storage of the node to be available to OpenShift Container Platform applications. Recommended LVMCluster configuration (StorageLVMCluster.yaml) apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: odf-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 deviceSelector: paths: - /usr/disk/by-path/pci-0000:11:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 overprovisionRatio: 10 sizePercent: 90 Table 19.15. 
LVMCluster CR options for single-node OpenShift clusters LVMCluster CR field Description deviceSelector.paths Configure the disks used for LVM storage. If no disks are specified, the LVM Storage uses all the unused disks in the specified thin pool. 19.6.7.13. Network diagnostics Single-node OpenShift clusters that run DU workloads require less inter-pod network connectivity checks to reduce the additional load created by these pods. The following custom resource (CR) disables these checks. Recommended network diagnostics configuration (DisableSnoNetworkDiag.yaml) apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: disableNetworkDiagnostics: true Additional resources Deploying far edge sites using ZTP 19.7. Validating single-node OpenShift cluster tuning for vDU application workloads Before you can deploy virtual distributed unit (vDU) applications, you need to tune and configure the cluster host firmware and various other cluster configuration settings. Use the following information to validate the cluster configuration to support vDU workloads. Additional resources For more information about single-node OpenShift clusters tuned for vDU application deployments, see Reference configuration for deploying vDUs on single-node OpenShift . 19.7.1. Recommended firmware configuration for vDU cluster hosts Use the following table as the basis to configure the cluster host firmware for vDU applications running on OpenShift Container Platform 4.13. Note The following table is a general recommendation for vDU cluster host firmware configuration. Exact firmware settings will depend on your requirements and specific hardware platform. Automatic setting of firmware is not handled by the zero touch provisioning pipeline. Table 19.16. Recommended cluster host firmware settings Firmware setting Configuration Description HyperTransport (HT) Enabled HyperTransport (HT) bus is a bus technology developed by AMD. HT provides a high-speed link between the components in the host memory and other system peripherals. UEFI Enabled Enable booting from UEFI for the vDU host. CPU Power and Performance Policy Performance Set CPU Power and Performance Policy to optimize the system for performance over energy efficiency. Uncore Frequency Scaling Disabled Disable Uncore Frequency Scaling to prevent the voltage and frequency of non-core parts of the CPU from being set independently. Uncore Frequency Maximum Sets the non-core parts of the CPU such as cache and memory controller to their maximum possible frequency of operation. Performance P-limit Disabled Disable Performance P-limit to prevent the Uncore frequency coordination of processors. Enhanced Intel(R) SpeedStep Tech Enabled Enable Enhanced Intel SpeedStep to allow the system to dynamically adjust processor voltage and core frequency that decreases power consumption and heat production in the host. Intel(R) Turbo Boost Technology Enabled Enable Turbo Boost Technology for Intel-based CPUs to automatically allow processor cores to run faster than the rated operating frequency if they are operating below power, current, and temperature specification limits. Intel Configurable TDP Enabled Enables Thermal Design Power (TDP) for the CPU. Configurable TDP Level Level 2 TDP level sets the CPU power consumption required for a particular performance rating. TDP level 2 sets the CPU to the most stable performance level at the cost of power consumption. 
Energy Efficient Turbo Disabled Disable Energy Efficient Turbo to prevent the processor from using an energy-efficiency based policy. Hardware P-States Enabled or Disabled Enable OS-controlled P-States to allow power saving configurations. Disable P-states (performance states) to optimize the operating system and CPU for performance over power consumption. Package C-State C0/C1 state Use C0 or C1 states to set the processor to a fully active state (C0) or to stop CPU internal clocks running in software (C1). C1E Disabled CPU Enhanced Halt (C1E) is a power saving feature in Intel chips. Disabling C1E prevents the operating system from sending a halt command to the CPU when inactive. Processor C6 Disabled C6 power-saving is a CPU feature that automatically disables idle CPU cores and cache. Disabling C6 improves system performance. Sub-NUMA Clustering Disabled Sub-NUMA clustering divides the processor cores, cache, and memory into multiple NUMA domains. Disabling this option can increase performance for latency-sensitive workloads. Note Enable global SR-IOV and VT-d settings in the firmware for the host. These settings are relevant to bare-metal environments. Note Enable both C-states and OS-controlled P-States to allow per pod power management. 19.7.2. Recommended cluster configurations to run vDU applications Clusters running virtualized distributed unit (vDU) applications require a highly tuned and optimized configuration. The following information describes the various elements that you require to support vDU workloads in OpenShift Container Platform 4.13 clusters. 19.7.2.1. Recommended cluster MachineConfig CRs for single-node OpenShift clusters Check that the MachineConfig custom resources (CRs) that you extract from the ztp-site-generate container are applied in the cluster. The CRs can be found in the extracted out/source-crs/extra-manifest/ folder. The following MachineConfig CRs from the ztp-site-generate container configure the cluster host: Table 19.17. Recommended GitOps ZTP MachineConfig CRs MachineConfig CR Description 01-container-mount-ns-and-kubelet-conf-master.yaml 01-container-mount-ns-and-kubelet-conf-worker.yaml Configures the container mount namespace and kubelet configuration. 02-workload-partitioning.yaml Configures workload partitioning for the cluster. Apply this MachineConfig CR when you install the cluster. Note If you use the cpuPartitioningMode field in the SiteConfig CR to configure workload partitioning, you do not need to use the 02-workload-partitioning.yaml CR. Using the cpuPartitioningMode field is a Technology Preview feature in OpenShift Container Platform 4.13. For more information, see "Workload partitioning in single-node OpenShift with GitOps ZTP". 03-sctp-machine-config-master.yaml 03-sctp-machine-config-worker.yaml Loads the SCTP kernel module. These MachineConfig CRs are optional and can be omitted if you do not require this kernel module. 04-accelerated-container-startup-master.yaml 04-accelerated-container-startup-worker.yaml Configures accelerated startup for the cluster. 05-kdump-config-master.yaml 05-kdump-config-worker.yaml 06-kdump-master.yaml 06-kdump-worker.yaml Configures kdump crash reporting for the cluster. 99-crio-disable-wipe-master.yaml 99-crio-disable-wipe-worker.yaml Disables the automatic CRI-O cache wipe following cluster reboot. Additional resources Extracting source CRs from the ztp-site-generate container 19.7.2.2. 
Recommended cluster Operators The following Operators are required for clusters running virtualized distributed unit (vDU) applications and are a part of the baseline reference configuration: Node Tuning Operator (NTO). NTO packages functionality that was previously delivered with the Performance Addon Operator, which is now a part of NTO. PTP Operator SR-IOV Network Operator Red Hat OpenShift Logging Operator Local Storage Operator 19.7.2.3. Recommended cluster kernel configuration Always use the latest supported real-time kernel version in your cluster. Ensure that you apply the following configurations in the cluster: Ensure that the following additionalKernelArgs are set in the cluster performance profile: spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "module_blacklist=irdma" Ensure that the performance-patch profile in the Tuned CR configures the correct CPU isolation set that matches the isolated CPU set in the related PerformanceProfile CR, for example: spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name, for example: # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from the [sysctl] section data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable 19.7.2.4. Checking the realtime kernel version Always use the latest version of the realtime kernel in your OpenShift Container Platform clusters. If you are unsure about the kernel version that is in use in the cluster, you can compare the current realtime kernel version to the release version with the following procedure. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. You have installed podman . Procedure Run the following command to get the cluster version: USD OCP_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}') Get the release image SHA number: USD DTK_IMAGE=USD(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:USDOCP_VERSION-x86_64) Run the release image container and extract the kernel version that is packaged with cluster's current release: USD podman run --rm USDDTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##' Example output 4.18.0-305.49.1.rt7.121.el8_4.x86_64 This is the default realtime kernel version that ships with the release. Note The realtime kernel is denoted by the string .rt in the kernel version. Verification Check that the kernel version listed for the cluster's current release matches actual realtime kernel that is running in the cluster. Run the following commands to check the running realtime kernel version: Open a remote shell connection to the cluster node: USD oc debug node/<node_name> Check the realtime kernel version: sh-4.4# uname -r Example output 4.18.0-305.49.1.rt7.121.el8_4.x86_64 19.7.3. Checking that the recommended cluster configurations are applied You can check that clusters are running the correct configuration. 
The following procedure describes how to check the various configurations that you require to deploy a DU application in OpenShift Container Platform 4.13 clusters. Prerequisites You have deployed a cluster and tuned it for vDU workloads. You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Check that the default OperatorHub sources are disabled. Run the following command: USD oc get operatorhub cluster -o yaml Example output spec: disableAllDefaultSources: true Check that all required CatalogSource resources are annotated for workload partitioning ( PreferredDuringScheduling ) by running the following command: USD oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{" -- "}{.metadata.annotations.target\.workload\.openshift\.io/management}{"\n"}{end}' Example output certified-operators -- {"effect": "PreferredDuringScheduling"} community-operators -- {"effect": "PreferredDuringScheduling"} ran-operators 1 redhat-marketplace -- {"effect": "PreferredDuringScheduling"} redhat-operators -- {"effect": "PreferredDuringScheduling"} 1 CatalogSource resources that are not annotated are also returned. In this example, the ran-operators CatalogSource resource is not annotated and does not have the PreferredDuringScheduling annotation. Note In a properly configured vDU cluster, only a single annotated catalog source is listed. Check that all applicable OpenShift Container Platform Operator namespaces are annotated for workload partitioning. This includes all Operators installed with core OpenShift Container Platform and the set of additional Operators included in the reference DU tuning configuration. Run the following command: USD oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{" -- "}{.metadata.annotations.workload\.openshift\.io/allowed}{"\n"}{end}' Example output default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- management Important Additional Operators must not be annotated for workload partitioning. In the output from the command, additional Operators should be listed without any value on the right side of the -- separator. Check that the ClusterLogging configuration is correct. Run the following commands: Validate that the appropriate input and output logs are configured: USD oc get -n openshift-logging ClusterLogForwarder instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: "2022-07-19T21:51:41Z" generation: 1 name: instance namespace: openshift-logging resourceVersion: "1030342" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open ... 
Check that the curation schedule is appropriate for your application: USD oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: "2022-07-07T18:22:56Z" generation: 1 name: instance namespace: openshift-logging resourceVersion: "235796" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed ... Check that the web console is disabled ( managementState: Removed ) by running the following command: USD oc get consoles.operator.openshift.io cluster -o jsonpath="{ .spec.managementState }" Example output Removed Check that chronyd is disabled on the cluster node by running the following commands: USD oc debug node/<node_name> Check the status of chronyd on the node: sh-4.4# chroot /host sh-4.4# systemctl status chronyd Example output ● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5) Check that the PTP interface is successfully synchronized to the primary clock using a remote shell connection to the linuxptp-daemon container and the PTP Management Client ( pmc ) tool: Set the USDPTP_POD_NAME variable with the name of the linuxptp-daemon pod by running the following command: USD PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name) Run the following command to check the sync status of the PTP device: USD oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET' Example output sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 Run the following pmc command to check the PTP clock status: USD oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP' Example output sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00 1 master_offset should be between -100 and 100 ns. 2 Indicates that the PTP clock is synchronized to a master, and the local clock is not the grandmaster clock. 
Check that the expected master offset value corresponding to the value in /var/run/ptp4l.0.config is found in the linuxptp-daemon-container log: USD oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container Example output phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533 Check that the SR-IOV configuration is correct by running the following commands: Check that the disableDrain value in the SriovOperatorConfig resource is set to true : USD oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath="{.spec.disableDrain}{'\n'}" Example output true Check that the SriovNetworkNodeState sync status is Succeeded by running the following command: USD oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath="{.items[*].status.syncStatus}{'\n'}" Example output Succeeded Verify that the expected number and configuration of virtual functions ( Vfs ) under each interface configured for SR-IOV is present and correct in the .status.interfaces field. For example: USD oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml Example output apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState ... status: interfaces: ... - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: "8086" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: "8086" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: "8086" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: "8086" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: "8086" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: "8086" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: "8086" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: "8086" vfID: 7 Check that the cluster performance profile is correct. The cpu and hugepages sections will vary depending on your hardware configuration. 
Run the following command: USD oc get PerformanceProfile openshift-node-performance-profile -o yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: "2022-07-19T21:51:31Z" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: "33558" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "True" type: Available - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "True" type: Upgradeable - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "False" type: Progressing - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "False" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile Note CPU settings are dependent on the number of cores available on the server and should align with workload partitioning settings. hugepages configuration is server and application dependent. Check that the PerformanceProfile was successfully applied to the cluster by running the following command: USD oc get performanceprofile openshift-node-performance-profile -o jsonpath="{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\n'}{end}" Example output Available -- True Upgradeable -- True Progressing -- False Degraded -- False Check the Tuned performance patch settings by running the following command: USD oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml Example output apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: "2022-07-18T10:33:52Z" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: "34024" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch 1 The cpu list in cmdline=nohz_full= will vary based on your hardware configuration. Check that cluster networking diagnostics are disabled by running the following command: USD oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}' Example output true Check that the Kubelet housekeeping interval is tuned to slower rate. This is set in the containerMountNS machine config. 
Run the following command: USD oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION Example output Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Check that Grafana and alertManagerMain are disabled and that the Prometheus retention period is set to 24h by running the following command: USD oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath="{ .data.config\.yaml }" Example output grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h Use the following commands to verify that Grafana and alertManagerMain routes are not found in the cluster: USD oc get route -n openshift-monitoring alertmanager-main USD oc get route -n openshift-monitoring grafana Both queries should return Error from server (NotFound) messages. Check that there is a minimum of 4 CPUs allocated as reserved for each of the PerformanceProfile , Tuned performance-patch, workload partitioning, and kernel command line arguments by running the following command: USD oc get performanceprofile -o jsonpath="{ .items[0].spec.cpu.reserved }" Example output 0-3 Note Depending on your workload requirements, you might require additional reserved CPUs to be allocated. 19.8. Advanced managed cluster configuration with SiteConfig resources You can use SiteConfig custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time. 19.8.1. Customizing extra installation manifests in the GitOps ZTP pipeline You can define a set of extra manifests for inclusion in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. These manifests are linked to the SiteConfig custom resources (CRs) and are applied to the cluster during installation. Including MachineConfig CRs at install time makes the installation process more efficient. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a set of extra manifest CRs that the GitOps ZTP pipeline uses to customize the cluster installs. In your custom /siteconfig directory, create an /extra-manifest folder for your extra manifests. The following example illustrates a sample /siteconfig with /extra-manifest folder: siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml └── extra-manifest └── 01-example-machine-config.yaml Add your custom extra manifest CRs to the siteconfig/extra-manifest directory. In your SiteConfig CR, enter the directory name in the extraManifestPath field, for example: clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" extraManifestPath: extra-manifest Save the SiteConfig CRs and /extra-manifest CRs and push them to the site configuration repo. The GitOps ZTP pipeline appends the CRs in the /extra-manifest directory to the default set of extra manifests during cluster provisioning. 19.8.2. Filtering custom resources using SiteConfig filters By using filters, you can easily customize SiteConfig custom resources (CRs) to include or exclude other CRs for use in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. You can specify an inclusionDefault value of include or exclude for the SiteConfig CR, along with a list of the specific extraManifest RAN CRs that you want to include or exclude. 
Setting inclusionDefault to include makes the GitOps ZTP pipeline apply all the files in /source-crs/extra-manifest during installation. Setting inclusionDefault to exclude does the opposite. You can exclude individual CRs from the /source-crs/extra-manifest folder that are otherwise included by default. The following example configures a custom single-node OpenShift SiteConfig CR to exclude the /source-crs/extra-manifest/03-sctp-machine-config-worker.yaml CR at installation time. Some additional optional filtering scenarios are also described. Prerequisites You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure To prevent the GitOps ZTP pipeline from applying the 03-sctp-machine-config-worker.yaml CR file, apply the following YAML in the SiteConfig CR: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "site1-sno-du" namespace: "site1-sno-du" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.13" sshPublicKey: "<ssh_public_key>" clusters: - clusterName: "site1-sno-du" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml The GitOps ZTP pipeline skips the 03-sctp-machine-config-worker.yaml CR during installation. All other CRs in /source-crs/extra-manifest are applied. Save the SiteConfig CR and push the changes to the site configuration repository. The GitOps ZTP pipeline monitors and adjusts what CRs it applies based on the SiteConfig filter instructions. Optional: To prevent the GitOps ZTP pipeline from applying all the /source-crs/extra-manifest CRs during cluster installation, apply the following YAML in the SiteConfig CR: - clusterName: "site1-sno-du" extraManifests: filter: inclusionDefault: exclude Optional: To exclude all the /source-crs/extra-manifest RAN CRs and instead include a custom CR file during installation, edit the custom SiteConfig CR to set the custom manifests folder and the include file, for example: clusters: - clusterName: "site1-sno-du" extraManifestPath: "<custom_manifest_folder>" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml 1 Replace <custom_manifest_folder> with the name of the folder that contains the custom installation CRs, for example, user-custom-manifest/ . 2 Set inclusionDefault to exclude to prevent the GitOps ZTP pipeline from applying the files in /source-crs/extra-manifest during installation. The following example illustrates the custom folder structure: siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml 19.9. Advanced managed cluster configuration with PolicyGenTemplate resources You can use PolicyGenTemplate CRs to deploy custom functionality in your managed clusters. 19.9.1. Deploying additional changes to clusters If you require cluster configuration changes outside of the base GitOps Zero Touch Provisioning (ZTP) pipeline configuration, there are three options: Apply the additional configuration after the GitOps ZTP pipeline is complete When the GitOps ZTP pipeline deployment is complete, the deployed cluster is ready for application workloads. At this point, you can install additional Operators and apply configurations specific to your requirements. 
Ensure that additional configurations do not negatively affect the performance of the platform or allocated CPU budget. Add content to the GitOps ZTP library The base source custom resources (CRs) that you deploy with the GitOps ZTP pipeline can be augmented with custom content as required. Create extra manifests for the cluster installation Extra manifests are applied during installation and make the installation process more efficient. Important Providing additional source CRs or modifying existing source CRs can significantly impact the performance or CPU profile of OpenShift Container Platform. Additional resources Customizing extra installation manifests in the GitOps ZTP pipeline 19.9.2. Using PolicyGenTemplate CRs to override source CRs content PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the ztp-site-generate container. You can think of PolicyGenTemplate CRs as a logical merge or patch to the base CR. Use PolicyGenTemplate CRs to update a single field of the base CR, or overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR. The following example procedure describes how to update fields in the generated PerformanceProfile CR for the reference configuration based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml file. Use the procedure as a basis for modifying other parts of the PolicyGenTemplate based on your requirements. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD. Procedure Review the baseline source CR for existing content. You can review the source CRs listed in the reference PolicyGenTemplate CRs by extracting them from the GitOps Zero Touch Provisioning (ZTP) container. Create an /out folder: USD mkdir -p ./out Extract the source CRs: USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13.1 extract /home/ztp --tar | tar x -C ./out Review the baseline PerformanceProfile CR in ./out/source-crs/PerformanceProfile.yaml : apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: additionalKernelArgs: - "idle=poll" - "rcupdate.rcu_normal_after_boot=0" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" realTimeKernel: enabled: true Note Any fields in the source CR which contain USD... are removed from the generated CR if they are not provided in the PolicyGenTemplate CR. Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file. The following example PolicyGenTemplate CR stanza supplies appropriate CPU specifications, sets the hugepages configuration, and adds a new field that sets globallyDisableIrqLoadBalancing to false. 
- fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: "2-19,22-39" reserved: "0-1,20-21" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP argo CD application. Example output The GitOps ZTP application generates an RHACM policy that contains the generated PerformanceProfile CR. The contents of that CR are derived by merging the metadata and spec contents from the PerformanceProfile entry in the PolicyGenTemplate onto the source CR. The resulting CR has the following content: --- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true Note In the /source-crs folder that you extract from the ztp-site-generate container, the USD syntax is not used for template substitution as implied by the syntax. Rather, if the policyGen tool sees the USD prefix for a string and you do not specify a value for that field in the related PolicyGenTemplate CR, the field is omitted from the output CR entirely. An exception to this is the USDmcp variable in /source-crs YAML files that is substituted with the specified value for mcp from the PolicyGenTemplate CR. For example, in example/policygentemplates/group-du-standard-ranGen.yaml , the value for mcp is worker : spec: bindingRules: group-du-standard: "" mcp: "worker" The policyGen tool replace instances of USDmcp with worker in the output CRs. 19.9.3. Adding custom content to the GitOps ZTP pipeline Perform the following procedure to add new content to the GitOps ZTP pipeline. Procedure Create a subdirectory named source-crs in the directory containing the kustomization.yaml file for the PolicyGenTemplate custom resource (CR). Add your custom CRs to the source-crs subdirectory, as shown in the following example: example └── policygentemplates ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs 1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml 1 The source-crs subdirectory must be in the same directory as the kustomization.yaml file. Important To use your own resources, ensure that the custom CR names differ from the default source CRs provided in the ZTP container. 
Update the required PolicyGenTemplate CRs to include references to the content you added in the source-crs/custom-crs directory, as shown in the following example: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-dev" namespace: "ztp-clusters" spec: bindingRules: dev: "true" mcp: "master" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: "group-dev-cluster-log-ns" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: "group-dev-cluster-log-operator-group" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: "group-dev-cluster-log-sub" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: "group-dev-lso-ns" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: "group-dev-lso-operator-group" - fileName: StorageSubscription.yaml remediationAction: inform policyName: "group-dev-lso-sub" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: "group-dev-pao-ns" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: "group-dev-pao-cat-source" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: "group-dev-pao-sub" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: "group-dev-elasticsearch-ns" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: "group-dev-elasticsearch-operator-group" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: "group-dev-apiserver-config" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: "group-dev-disable-nic-lldp" 1 2 Set fileName to include the custom CR subdirectory from the /source-crs parent, such as <subdirectory>/<filename> . Commit the PolicyGenTemplate change in Git, and then push to the Git repository that is monitored by the GitOps ZTP Argo CD policies application. Update the ClusterGroupUpgrade CR to include the changed PolicyGenTemplate and save it as cgu-test.yaml , as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated ClusterGroupUpgrade CR by running the following command: USD oc apply -f cgu-test.yaml Verification Check that the updates have succeeded by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies 19.9.4. Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs Use Red Hat Advanced Cluster Management (RHACM) installed on a hub cluster to monitor and report on whether your managed clusters are compliant with applied policies. RHACM uses policy templates to apply predefined policy controllers and policies. Policy controllers are Kubernetes custom resource definition (CRD) instances. 
You can override the default policy evaluation intervals with PolicyGenTemplate custom resources (CRs). You configure duration settings that define how long a ConfigurationPolicy CR can be in a state of policy compliance or non-compliance before RHACM re-evaluates the applied cluster policies. The GitOps Zero Touch Provisioning (ZTP) policy generator generates ConfigurationPolicy CR policies with pre-defined policy evaluation intervals. The default value for the noncompliant state is 10 seconds. The default value for the compliant state is 10 minutes. To disable the evaluation interval, set the value to never . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure To configure the evaluation interval for all policies in a PolicyGenTemplate CR, add evaluationInterval to the spec field, and then set the appropriate compliant and noncompliant values. For example: spec: evaluationInterval: compliant: 30m noncompliant: 20s To configure the evaluation interval for the spec.sourceFiles object in a PolicyGenTemplate CR, add evaluationInterval to the sourceFiles field, for example: spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: "sriov-sub-policy" evaluationInterval: compliant: never noncompliant: 10s Commit the PolicyGenTemplate CRs files in the Git repository and push your changes. Verification Check that the managed spoke cluster policies are monitored at the expected intervals. Log in as a user with cluster-admin privileges on the managed cluster. Get the pods that are running in the open-cluster-management-agent-addon namespace. Run the following command: USD oc get pods -n open-cluster-management-agent-addon Example output NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d Check the applied policies are being evaluated at the expected interval in the logs for the config-policy-controller pod: USD oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb Example output 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-config-policy-config"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-common-compute-1-catalog-policy-config"} 19.9.5. Signalling GitOps ZTP cluster deployment completion with validator inform policies Create a validator inform policy that signals when the GitOps Zero Touch Provisioning (ZTP) installation and configuration of the deployed cluster is complete. This policy can be used for deployments of single-node OpenShift clusters, three-node clusters, and standard clusters. Procedure Create a standalone PolicyGenTemplate custom resource (CR) that contains the source file validatorCRs/informDuValidator.yaml . You only need one standalone PolicyGenTemplate CR for each cluster type. 
For example, this CR applies a validator inform policy for single-node OpenShift clusters: Example single-node cluster validator inform policy CR (group-du-sno-validator-ranGen.yaml) apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno-validator" 1 namespace: "ztp-group" 2 spec: bindingRules: group-du-sno: "" 3 bindingExcludedRules: ztp-done: "" 4 mcp: "master" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: "du-policy" 7 1 The name of PolicyGenTemplates object. This name is also used as part of the names for the placementBinding , placementRule , and policy that are created in the requested namespace . 2 This value should match the namespace used in the group PolicyGenTemplates . 3 The group-du-* label defined in bindingRules must exist in the SiteConfig files. 4 The label defined in bindingExcludedRules must be`ztp-done:`. The ztp-done label is used in coordination with the Topology Aware Lifecycle Manager. 5 mcp defines the MachineConfigPool object that is used in the source file validatorCRs/informDuValidator.yaml . It should be master for single node and three-node cluster deployments and worker for standard cluster deployments. 6 Optional. The default value is inform . 7 This value is used as part of the name for the generated RHACM policy. The generated validator policy for the single node example is group-du-sno-validator-du-policy . Commit the PolicyGenTemplate CR file in your Git repository and push the changes. Additional resources Upgrading GitOps ZTP 19.9.6. Configuring power states using PolicyGenTemplates CRs For low latency and high-performance edge deployments, it is necessary to disable or limit C-states and P-states. With this configuration, the CPU runs at a constant frequency, which is typically the maximum turbo frequency. This ensures that the CPU is always running at its maximum speed, which results in high performance and low latency. This leads to the best latency for workloads. However, this also leads to the highest power consumption, which might not be necessary for all workloads. Workloads can be classified as critical or non-critical, with critical workloads requiring disabled C-state and P-state settings for high performance and low latency, while non-critical workloads use C-state and P-state settings for power savings at the expense of some latency and performance. You can configure the following three power states using GitOps Zero Touch Provisioning (ZTP): High-performance mode provides ultra low latency at the highest power consumption. Performance mode provides low latency at a relatively high power consumption. Power saving balances reduced power consumption with increased latency. The default configuration is for a low latency, performance mode. PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details onto the base source CRs provided with the GitOps plugin in the ztp-site-generate container. Configure the power states by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . The following common prerequisites apply to configuring all three power states. Prerequisites You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD. 
You have followed the procedure described in "Preparing the GitOps ZTP site configuration repository". Additional resources Understanding workload hints Configuring workload hints manually 19.9.6.1. Configuring performance mode using PolicyGenTemplate CRs Follow this example to set performance mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . Performance mode provides low latency at a relatively high power consumption. Prerequisites You have configured the BIOS with performance related settings by following the guidance in "Configuring host firmware for low latency and high performance". Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to set performance mode. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 19.9.6.2. Configuring high-performance mode using PolicyGenTemplate CRs Follow this example to set high performance mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . High performance mode provides ultra low latency at the highest power consumption. Prerequisites You have configured the BIOS with performance related settings by following the guidance in "Configuring host firmware for low latency and high performance". Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to set high-performance mode. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 19.9.6.3. Configuring power saving mode using PolicyGenTemplate CRs Follow this example to set power saving mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml . The power saving mode balances reduced power consumption with increased latency. Prerequisites You enabled C-states and OS-controlled P-states in the BIOS. Procedure Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to configure power saving mode. It is recommended to configure the CPU governor for the power saving mode through the additional kernel arguments object. - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true [...] additionalKernelArgs: - [...] - "cpufreq.default_governor=schedutil" 1 1 The schedutil governor is recommended, however, other governors that can be used include ondemand and powersave . 
Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. Verification Select a worker node in your deployed cluster from the list of nodes identified by using the following command: USD oc get nodes Log in to the node by using the following command: USD oc debug node/<node-name> Replace <node-name> with the name of the node you want to verify the power state on. Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths as shown in the following example: # chroot /host Run the following command to verify the applied power state: # cat /proc/cmdline Expected output For power saving mode, the kernel command line includes intel_pstate=passive . Additional resources Enabling critical workloads for power saving configurations Configuring host firmware for low latency and high performance Preparing the GitOps ZTP site configuration repository 19.9.6.4. Maximizing power savings Limiting the maximum CPU frequency is recommended to achieve maximum power savings. Enabling C-states on the non-critical workload CPUs without restricting the maximum CPU frequency negates much of the power savings by boosting the frequency of the critical CPUs. Maximize power savings by updating the sysfs plugin fields, setting an appropriate value for max_perf_pct in the TunedPerformancePatch CR for the reference configuration. This example, based on the group-du-sno-ranGen.yaml file, describes the procedure to restrict the maximum CPU frequency. Prerequisites You have configured power savings mode as described in "Using PolicyGenTemplate CRs to configure power savings mode". Procedure Update the PolicyGenTemplate entry for TunedPerformancePatch in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates . To maximize power savings, add max_perf_pct as shown in the following example: - fileName: TunedPerformancePatch.yaml policyName: "config-policy" spec: profile: - name: performance-patch data: | [...] [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1 1 The max_perf_pct controls the maximum frequency the cpufreq driver is allowed to set as a percentage of the maximum supported CPU frequency. This value applies to all CPUs. You can check the maximum supported frequency in /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq . As a starting point, you can use a percentage that caps all CPUs at the All Cores Turbo frequency. The All Cores Turbo frequency is the frequency at which all cores run when they are all fully occupied. Note To maximize power savings, set a lower value. Setting a lower value for max_perf_pct limits the maximum CPU frequency, thereby reducing power consumption, but also potentially impacting performance. Experiment with different values and monitor the system's performance and power consumption to find the optimal setting for your use case. Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application. 19.9.7. Configuring LVM Storage using PolicyGenTemplate CRs You can configure Logical Volume Manager (LVM) Storage for managed clusters that you deploy with GitOps Zero Touch Provisioning (ZTP). Note You use LVM Storage to persist event subscriptions when you use PTP events or bare-metal hardware events with HTTP transport.
Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Create a Git repository where you manage your custom site configuration data. Procedure To configure LVM Storage for new managed clusters, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file: - fileName: StorageLVMOSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMOSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMOSubscription.yaml spec: name: lvms-operator channel: stable-4.13 policyName: subscription-policies Add the LVMCluster CR to spec.sourceFiles in your specific group or individual site configuration file. For example, in the group-du-sno-ranGen.yaml file, add the following: - fileName: StorageLVMCluster.yaml policyName: "lvmo-config" 1 spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10 1 This example configuration creates a volume group ( vg1 ) with all the available devices, except the disk where OpenShift Container Platform is installed. A thin-pool logical volume is also created. Merge any other required changes and files with your custom site repository. Commit the PolicyGenTemplate changes in Git, and then push the changes to your site configuration repository to deploy LVM Storage to new sites using GitOps ZTP. 19.9.8. Configuring PTP events with PolicyGenTemplate CRs You can use the GitOps ZTP pipeline to configure PTP events that use HTTP or AMQP transport. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 19.9.8.1. Configuring PTP events that use HTTP transport You can configure PTP events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml , group-du-sno-ranGen.yaml , or group-du-standard-ranGen.yaml files according to your requirements: In .sourceFiles , add the PtpOperatorConfig CR file that configures the transport host: - fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043 Note In OpenShift Container Platform 4.13 or later, you do not need to set the transportHost field in the PtpOperatorConfig resource when you use HTTP transport with PTP events. Configure the linuxptp and phc2sys for the PTP clock type and interface. 
For example, add the following stanza into .sourceFiles : - fileName: PtpConfigSlave.yaml 1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1" 2 ptp4lOpts: "-2 -s --summary_interval -4" 3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs 1 Can be one of PtpConfigMaster.yaml , PtpConfigSlave.yaml , or PtpConfigSlaveCvl.yaml depending on your requirements. PtpConfigSlaveCvl.yaml configures linuxptp services for an Intel E810 Columbiaville NIC. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml , use PtpConfigSlave.yaml . 2 Device specific interface name. 3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. 4 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP. Additional resources Using PolicyGenTemplate CRs to override source CRs content 19.9.8.2. Configuring PTP events that use AMQP transport You can configure PTP events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. 
Procedure Add the following YAML into .spec.sourceFiles in the common-ranGen.yaml file to configure the AMQP Operator: #AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml , group-du-sno-ranGen.yaml , or group-du-standard-ranGen.yaml files according to your requirements: In .sourceFiles , add the PtpOperatorConfig CR file that configures the AMQ transport host to the config-policy : - fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: "amqp://amq-router.amq-router.svc.cluster.local" Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles : - fileName: PtpConfigSlave.yaml 1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1" 2 ptp4lOpts: "-2 -s --summary_interval -4" 3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs 1 Can be one PtpConfigMaster.yaml , PtpConfigSlave.yaml , or PtpConfigSlaveCvl.yaml depending on your requirements. PtpConfigSlaveCvl.yaml configures linuxptp services for an Intel E810 Columbiaville NIC. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml , use PtpConfigSlave.yaml . 2 Device specific interface name. 3 You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. 4 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Apply the following PolicyGenTemplate changes to your specific site YAML files, for example, example-sno-site.yaml : In .sourceFiles , add the Interconnect CR file that configures the AMQ router to the config-policy : - fileName: AmqInstance.yaml policyName: "config-policy" Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP. Additional resources Installing the AMQ messaging bus For more information about container image registries, see OpenShift image registry overview . 19.9.9. Configuring bare-metal events with PolicyGenTemplate CRs You can use the GitOps ZTP pipeline to configure bare-metal events that use HTTP or AMQP transport. 
Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 19.9.9.1. Configuring bare-metal events that use HTTP transport You can configure bare-metal events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. Procedure Configure the Bare Metal Event Relay Operator by adding the following YAML to spec.sourceFiles in the common-ranGen.yaml file: # Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy" Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file: - fileName: HardwareEvent.yaml 1 policyName: "config-policy" spec: nodeSelector: {} transportHost: "http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043" logLevel: "info" 1 Each baseboard management controller (BMC) requires a single HardwareEvent CR only. Note In OpenShift Container Platform 4.13 or later, you do not need to set the transportHost field in the HardwareEvent custom resource (CR) when you use HTTP transport with bare-metal events. Merge any other required changes and files with your custom site repository. Push the changes to your site configuration repository to deploy bare-metal events to new sites with GitOps ZTP. Create the Redfish Secret by running the following command: USD oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>" Additional resources Installing the Bare Metal Event Relay using the CLI Creating the bare-metal event and Secret CRs 19.9.9.2. Configuring bare-metal events that use AMQP transport You can configure bare-metal events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. 
Procedure To configure the AMQ Interconnect Operator and the Bare Metal Event Relay Operator, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file: # AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" # Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy" Add the Interconnect CR to .spec.sourceFiles in the site configuration file, for example, the example-sno-site.yaml file: - fileName: AmqInstance.yaml policyName: "config-policy" Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file: - fileName: HardwareEvent.yaml policyName: "config-policy" spec: nodeSelector: {} transportHost: "amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local" 1 logLevel: "info" 1 The transportHost URL is composed of the existing AMQ Interconnect CR name and namespace . For example, in transportHost: "amqp://amq-router.amq-router.svc.cluster.local" , the AMQ Interconnect name and namespace are both set to amq-router . Note Each baseboard management controller (BMC) requires a single HardwareEvent resource only. Commit the PolicyGenTemplate change in Git, and then push the changes to your site configuration repository to deploy bare-metal events monitoring to new sites using GitOps ZTP. Create the Redfish Secret by running the following command: USD oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>" 19.9.10. Configuring the Image Registry Operator for local caching of images OpenShift Container Platform manages image caching using a local registry. In edge computing use cases, clusters are often subject to bandwidth restrictions when communicating with centralized image registries, which might result in long image download times. Long download times are unavoidable during initial deployment. Over time, there is a risk that CRI-O will erase the /var/lib/containers/storage directory in the case of an unexpected shutdown. To address long image download times, you can create a local image registry on remote managed clusters using GitOps Zero Touch Provisioning (ZTP). This is useful in edge computing scenarios where clusters are deployed at the far edge of the network. Before you can set up the local image registry with GitOps ZTP, you need to configure disk partitioning in the SiteConfig CR that you use to install the remote managed cluster. After installation, you configure the local image registry using a PolicyGenTemplate CR. Then, the GitOps ZTP pipeline creates Persistent Volume (PV) and Persistent Volume Claim (PVC) CRs and patches the imageregistry configuration. Note The local image registry can only be used for user application images and cannot be used for the OpenShift Container Platform or Operator Lifecycle Manager operator images. Additional resources OpenShift Container Platform registry overview . 19.9.10.1.
Configuring disk partitioning with SiteConfig Configure disk partitioning for a managed cluster using a SiteConfig CR and GitOps Zero Touch Provisioning (ZTP). The disk partition details in the SiteConfig CR must match the underlying disk. Important You must complete this procedure at installation time. Prerequisites Install Butane. Procedure Create the storage.bu file: variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation fails. 3 Specify the size of the partition. If the value is too small, the deployments fails. Convert the storage.bu file to an Ignition file by running the following command: USD butane storage.bu Example output {"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}} Use a tool such as JSON Pretty Print to convert the output into JSON format. Copy the output into the .spec.clusters.nodes.ignitionConfigOverride field in the SiteConfig CR: [...] spec: clusters: - nodes: - ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } [...] Note If the .spec.clusters.nodes.ignitionConfigOverride field does not exist, create it. 
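As an alternative to a web-based JSON pretty-printer for the conversion step in the procedure above, you can pipe the Butane output through jq. This is a minimal sketch and assumes that jq is installed on your workstation; storage.bu is the file created earlier in this procedure: USD butane storage.bu | jq . The pretty-printed JSON is easier to review before you copy it into the .spec.clusters.nodes.ignitionConfigOverride field of the SiteConfig CR, where it is supplied as a single string value.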
Verification During or after installation, verify on the hub cluster that the BareMetalHost object shows the annotation by running the following command: USD oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"] Example output "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}" After installation, check the single-node OpenShift disk status: Enter into a debug session on the single-node OpenShift node by running the following command. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-sno-node Set /host as the root directory within the debug shell by running the following command. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host List information about all available block devices by running the following command: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers Display information about the file system disk space usage by running the following command: # df -h Example output Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000 19.9.10.2. Configuring the image registry using PolicyGenTemplate CRs Use PolicyGenTemplate (PGT) CRs to apply the CRs required to configure the image registry and patch the imageregistry configuration. Prerequisites You have configured a disk partition in the managed cluster. You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data for use with GitOps Zero Touch Provisioning (ZTP). Procedure Configure the storage class, persistent volume claim, persistent volume, and image registry configuration in the appropriate PolicyGenTemplate CR. 
For example, to configure an individual site, add the following YAML to the file example-sno-site.yaml : sourceFiles: # storage class - fileName: StorageClass.yaml policyName: "sc-for-image-registry" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: "100" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: "pvc-for-image-registry" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: "pv-for-image-registry" metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" - fileName: ImageRegistryConfig.yaml policyName: "config-for-image-registry" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: storage: pvc: claim: "image-registry-pvc" 1 Set the appropriate value for ztp-deploy-wave depending on whether you are configuring image registries at the site, common, or group level. ztp-deploy-wave: "100" is suitable for development or testing because it allows you to group the referenced source files together. 2 In ImageRegistryPV.yaml , ensure that the spec.local.path field is set to /var/imageregistry to match the value set for the mount_point field in the SiteConfig CR. Important Do not set complianceType: mustonlyhave for the - fileName: ImageRegistryConfig.yaml configuration. This can cause the registry pod deployment to fail. Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. Verification Use the following steps to troubleshoot errors with the local image registry on the managed clusters: Verify successful login to the registry while logged in to the managed cluster. Run the following commands: Export the managed cluster name: USD cluster=<managed_cluster_name> Get the managed cluster kubeconfig details: USD oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster Download and export the cluster kubeconfig : USD oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster Verify access to the image registry from the managed cluster. See "Accessing the registry". Check that the Config CRD in the imageregistry.operator.openshift.io group instance is not reporting errors. Run the following command while logged in to the managed cluster: USD oc get image.config.openshift.io cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2021-10-08T19:02:39Z" generation: 5 name: cluster resourceVersion: "688678648" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice Check that the PersistentVolumeClaim on the managed cluster is populated with data. Run the following command while logged in to the managed cluster: USD oc get pv image-registry-sc Check that the registry* pod is running and is located under the openshift-image-registry namespace. 
USD oc get pods -n openshift-image-registry | grep registry* Example output cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d Check that the disk partition on the managed cluster is correct: Open a debug shell to the managed cluster: USD oc debug node/sno-1.example.com Run lsblk to check the host disk partitions: sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom 1 /var/imageregistry indicates that the disk is correctly partitioned. Additional resources Accessing the registry 19.9.11. Using hub templates in PolicyGenTemplate CRs Topology Aware Lifecycle Manager supports partial Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps Zero Touch Provisioning (ZTP). Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similar configurations but with different values. Important Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means that you must create the objects referenced in the hub template in the same namespace where the policy is created. The following supported hub template functions are available for use in GitOps ZTP with TALM: fromConfigmap returns the value of the provided data key in the named ConfigMap resource. Note There is a 1 MiB size limit for ConfigMap CRs. The effective size for ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap : argocd.argoproj.io/sync-options: Replace=true base64enc returns the base64-encoded value of the input string base64dec returns the decoded value of the base64-encoded input string indent returns the input string with added indent spaces autoindent returns the input string with added indent spaces based on the spacing used in the parent template toInt casts and returns the integer value of the input value toBool converts the input string into a boolean value, and returns the boolean Various open source community functions are also available for use with GitOps ZTP. Additional resources RHACM support for hub cluster templates in configuration policies 19.9.11.1. Example hub templates The following code examples are valid hub templates. Each of these templates returns values from the ConfigMap CR with the name test-config in the default namespace.
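For illustration only, a test-config ConfigMap of the following shape would satisfy these examples. The key names and values shown here are hypothetical and are not part of the reference configuration; the example assumes a managed cluster named example-sno:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
data:
  common-key: "common-value"   # returned by the first example below
  example-sno-name: "123"      # a numeric string suits the toInt example; use "true" or "false" for the toBool example
With such a ConfigMap in place, the template examples that follow resolve against its data keys.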
Returns the value with the key common-key : {{hub fromConfigMap "default" "test-config" "common-key" hub}} Returns a string by using the concatenated value of the .ManagedClusterName field and the string -name : {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) hub}} Casts and returns a boolean value from the concatenated value of the .ManagedClusterName field and the string -name : {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) | toBool hub}} Casts and returns an integer value from the concatenated value of the .ManagedClusterName field and the string -name : {{hub (printf "%s-name" .ManagedClusterName) | fromConfigMap "default" "test-config" | toInt hub}} 19.9.11.2. Specifying host NICs in site PolicyGenTemplate CRs with hub cluster templates You can manage host NICs in a single ConfigMap CR and use hub cluster templates to populate the custom NIC values in the generated policies that get applied to the cluster hosts. Using hub cluster templates in site PolicyGenTemplate (PGT) CRs means that you do not need to create multiple single site PGT CRs for each site. The following example shows you how to use a single ConfigMap CR to manage cluster host NICs and apply them to the cluster as policies by using a single PolicyGenTemplate site CR. Note When you use the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with name and namespace fields. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application. Procedure Create a ConfigMap resource that describes the NICs for a group of hosts. For example: apiVersion: v1 kind: ConfigMap metadata: name: sriovdata namespace: ztp-site annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: example-sno-du_fh-numVfs: "8" example-sno-du_fh-pf: ens1f0 example-sno-du_fh-priority: "10" example-sno-du_fh-vlan: "140" example-sno-du_mh-numVfs: "8" example-sno-du_mh-pf: ens3f0 example-sno-du_mh-priority: "10" example-sno-du_mh-vlan: "150" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size. Note The ConfigMap must be in the same namespace as the policy that has the hub template substitution. Commit the ConfigMap CR in Git, and then push to the Git repository being monitored by the Argo CD application. Create a site PGT CR that uses templates to pull the required data from the ConfigMap object.
For example: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "site" namespace: "ztp-site" spec: remediationAction: inform bindingRules: group-du-sno: "" mcp: "master" sourceFiles: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-fh" spec: resourceName: du_fh vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-vlan" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-fh" spec: deviceType: netdevice isRdma: true nicSelector: pfNames: - '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-pf" .ManagedClusterName) | autoindent hub}}' numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-numVfs" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-priority" .ManagedClusterName) | toInt hub}}' resourceName: du_fh - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" spec: resourceName: du_mh vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-vlan" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-mh" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-pf" .ManagedClusterName) hub}}' numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-numVfs" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-priority" .ManagedClusterName) | toInt hub}}' resourceName: du_mh Commit the site PolicyGenTemplate CR in Git and push to the Git repository that is monitored by the ArgoCD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs". 19.9.11.3. Specifying VLAN IDs in group PolicyGenTemplate CRs with hub cluster templates You can manage VLAN IDs for managed clusters in a single ConfigMap CR and use hub cluster templates to populate the VLAN IDs in the generated polices that get applied to the clusters. The following example shows how you how manage VLAN IDs in single ConfigMap CR and apply them in individual cluster polices by using a single PolicyGenTemplate group CR. Note When using the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with name and namespace fields. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a ConfigMap CR that describes the VLAN IDs for a group of cluster hosts. For example: apiVersion: v1 kind: ConfigMap metadata: name: site-data namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: site-1-vlan: "101" site-2-vlan: "234" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size. 
Note The ConfigMap must be in the same namespace as the policy that has the hub template substitution. Commit the ConfigMap CR in Git, and then push to the Git repository being monitored by the Argo CD application. Create a group PGT CR that uses a hub template to pull the required VLAN IDs from the ConfigMap object. For example, add the following YAML snippet to the group PGT CR: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data" (printf "%s-vlan" .ManagedClusterName) | toInt hub}}' Commit the group PolicyGenTemplate CR in Git, and then push to the Git repository being monitored by the Argo CD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs". 19.9.11.4. Syncing new ConfigMap changes to existing PolicyGenTemplate CRs Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a PolicyGenTemplate CR that pulls information from a ConfigMap CR using hub cluster templates. Procedure Update the contents of your ConfigMap CR, and apply the changes in the hub cluster. To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following: Option 1: Delete the existing policy. ArgoCD uses the PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command: USD oc delete policy <policy_name> -n <policy_namespace> Option 2: Apply a special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value every time you update the ConfigMap . For example: USD oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1" Note You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing . Optional: If it exists, delete the ClusterGroupUpgrade CR that contains the policy. For example: USD oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace> Create a new ClusterGroupUpgrade CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml : apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated policy: USD oc apply -f cgr-example.yaml 19.10. Updating managed clusters with the Topology Aware Lifecycle Manager You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of multiple clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters. 19.10.1. About the Topology Aware Lifecycle Manager configuration The Topology Aware Lifecycle Manager (TALM) manages the deployment of Red Hat Advanced Cluster Management (RHACM) policies for one or more OpenShift Container Platform clusters. Using TALM in a large network of clusters allows the phased rollout of policies to the clusters in limited batches.
This helps to minimize possible service disruptions when updating. With TALM, you can control the following actions: The timing of the update The number of RHACM-managed clusters The subset of managed clusters to apply the policies to The update order of the clusters The set of policies remediated to the cluster The order of policies remediated to the cluster The assignment of a canary cluster For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) offers the following features: Create a backup of a deployment before an upgrade Pre-caching images for clusters with limited bandwidth TALM supports the orchestration of the OpenShift Container Platform y-stream and z-stream updates, and day-two operations on y-streams and z-streams. 19.10.2. About managed policies used with Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) uses RHACM policies for cluster updates. TALM can be used to manage the rollout of any policy CR where the remediationAction field is set to inform . Supported use cases include the following: Manual user creation of policy CRs Automatically generated policies from the PolicyGenTemplate custom resource definition (CRD) For policies that update an Operator subscription with manual approval, TALM provides additional functionality that approves the installation of the updated Operator. For more information about managed policies, see Policy Overview in the RHACM documentation. For more information about the PolicyGenTemplate CRD, see the "About the PolicyGenTemplate CRD" section in "Configuring managed clusters with policies and PolicyGenTemplate resources". 19.10.3. Installing the Topology Aware Lifecycle Manager by using the web console You can use the OpenShift Container Platform web console to install the Topology Aware Lifecycle Manager. Prerequisites Install the latest version of the RHACM Operator. Set up a hub cluster with disconnected registry. Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Topology Aware Lifecycle Manager from the list of available Operators, and then click Install . Keep the default selection of Installation mode ["All namespaces on the cluster (default)"] and Installed Namespace ("openshift-operators") to ensure that the Operator is installed properly. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the All Namespaces namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any containers in the cluster-group-upgrades-controller-manager pod that are reporting issues. 19.10.4. Installing the Topology Aware Lifecycle Manager by using the CLI You can use the OpenShift CLI ( oc ) to install the Topology Aware Lifecycle Manager (TALM). Prerequisites Install the OpenShift CLI ( oc ). Install the latest version of the RHACM Operator. Set up a hub cluster with disconnected registry. Log in as a user with cluster-admin privileges.
Procedure Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, talm-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: "stable" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f talm-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-operators Example output NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.13.x Topology Aware Lifecycle Manager 4.13.x Succeeded Verify that the TALM is up and running: USD oc get deploy -n openshift-operators Example output NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s 19.10.5. About the ClusterGroupUpgrade CR The Topology Aware Lifecycle Manager (TALM) builds the remediation plan from the ClusterGroupUpgrade CR for a group of clusters. You can define the following specifications in a ClusterGroupUpgrade CR: Clusters in the group Blocking ClusterGroupUpgrade CRs Applicable list of managed policies Number of concurrent updates Applicable canary updates Actions to perform before and after the update Update timing You can control the start time of an update using the enable field in the ClusterGroupUpgrade CR. For example, if you have a scheduled maintenance window of four hours, you can prepare a ClusterGroupUpgrade CR with the enable field set to false . You can set the timeout by configuring the spec.remediationStrategy.timeout setting as follows: spec: remediationStrategy: maxConcurrency: 1 timeout: 240 You can use the batchTimeoutAction to determine what happens if an update fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or abort to stop policy remediation for all clusters. Once the timeout elapses, TALM removes all enforce policies to ensure that no further updates are made to clusters. To apply the changes, you set the enable field to true . For more information see the "Applying update policies to managed clusters" section. As TALM works through remediation of the policies to the specified clusters, the ClusterGroupUpgrade CR can report true or false statuses for a number of conditions. Note After TALM completes a cluster update, the cluster does not update again under the control of the same ClusterGroupUpgrade CR. You must create a new ClusterGroupUpgrade CR in the following cases: When you need to update the cluster again When the cluster changes to non-compliant with the inform policy after being updated 19.10.5.1. Selecting clusters TALM builds a remediation plan and selects clusters based on the following fields: The clusterLabelSelector field specifies the labels of the clusters that you want to update. This consists of a list of the standard label selectors from k8s.io/apimachinery/pkg/apis/meta/v1 . Each selector in the list uses either label value pairs or label expressions. Matches from each selector are added to the final list of clusters along with the matches from the clusterSelector field and the cluster field. The clusters field specifies a list of clusters to update. The canaries field specifies the clusters for canary updates.
The maxConcurrency field specifies the number of clusters to update in a batch. The actions field specifies beforeEnable actions that TALM takes as it begins the update process, and afterCompletion actions that TALM takes as it completes policy remediation for each cluster. You can use the clusters , clusterLabelSelector , and clusterSelector fields together to create a combined list of clusters. The remediation plan starts with the clusters listed in the canaries field. Each canary cluster forms a single-cluster batch. Sample ClusterGroupUpgrade CR with the enable field set to false apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: 1 addClusterLabels: upgrade-done: "" deleteClusterLabels: upgrade-running: "" deleteObjects: true beforeEnable: 2 addClusterLabels: upgrade-running: "" backup: false clusters: 3 - spoke1 enable: false 4 managedPolicies: 5 - talm-policy preCaching: false remediationStrategy: 6 canaries: 7 - spoke1 maxConcurrency: 2 8 timeout: 240 clusterLabelSelectors: 9 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 10 status: 11 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 12 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 13 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: 1 Specifies the action that TALM takes when it completes policy remediation for each cluster. 2 Specifies the action that TALM takes as it begins the update process. 3 Defines the list of clusters to update. 4 The enable field is set to false . 5 Lists the user-defined set of policies to remediate. 6 Defines the specifics of the cluster updates. 7 Defines the clusters for canary updates. 8 Defines the maximum number of concurrent updates in a batch. The number of remediation batches is the number of canary clusters, plus the number of clusters, except the canary clusters, divided by the maxConcurrency value. The clusters that are already compliant with all the managed policies are excluded from the remediation plan. 9 Displays the parameters for selecting clusters. 10 Controls what happens if a batch times out. Possible values are abort or continue . If unspecified, the default is continue . 11 Displays information about the status of the updates. 12 The ClustersSelected condition shows that all selected clusters are valid. 13 The Validated condition shows that all selected clusters have been validated. Note Any failures during the update of a canary cluster stop the update process. When the remediation plan is successfully created, you can set the enable field to true and TALM starts to update the non-compliant clusters with the specified managed policies. Note You can only make changes to the spec fields if the enable field of the ClusterGroupUpgrade CR is set to false . 19.10.5.2.
Validating TALM checks that all specified managed policies are available and correct, and uses the Validated condition to report the status and reasons as follows: true Validation is completed. false Policies are missing or invalid, or an invalid platform image has been specified. 19.10.5.3. Pre-caching Clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed. On single-node OpenShift clusters, you can use pre-caching to avoid this. The container image pre-caching starts when you create a ClusterGroupUpgrade CR with the preCaching field set to true . TALM compares the available disk space with the estimated OpenShift Container Platform image size to ensure that there is enough space. If a cluster has insufficient space, TALM cancels pre-caching for that cluster and does not remediate policies on it. TALM uses the PrecacheSpecValid condition to report status information as follows: true The pre-caching spec is valid and consistent. false The pre-caching spec is incomplete. TALM uses the PrecachingSucceeded condition to report status information as follows: true TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters. false Pre-caching is still in progress for one or more clusters or has failed for all clusters. For more information see the "Using the container image pre-cache feature" section. 19.10.5.4. Creating a backup For single-node OpenShift, TALM can create a backup of a deployment before an update. If the update fails, you can recover the version and restore a cluster to a working state without requiring a reprovision of applications. To use the backup feature you first create a ClusterGroupUpgrade CR with the backup field set to true . To ensure that the contents of the backup are up to date, the backup is not taken until you set the enable field in the ClusterGroupUpgrade CR to true . TALM uses the BackupSucceeded condition to report the status and reasons as follows: true Backup is completed for all clusters or the backup run has completed but failed for one or more clusters. If backup fails for any cluster, the update fails for that cluster but proceeds for all other clusters. false Backup is still in progress for one or more clusters or has failed for all clusters. For more information, see the "Creating a backup of cluster resources before upgrade" section. 19.10.5.5. Updating clusters TALM enforces the policies following the remediation plan. Enforcing the policies for subsequent batches starts immediately after all the clusters of the current batch are compliant with all the managed policies. If the batch times out, TALM moves on to the next batch. The timeout value of a batch is the spec.timeout field divided by the number of batches in the remediation plan. TALM uses the Progressing condition to report the status and reasons as follows: true TALM is remediating non-compliant policies. false The update is not in progress. Possible reasons for this are: All clusters are compliant with all the managed policies. The update has timed out as policy remediation took too long. Blocking CRs are missing from the system or have not yet completed. The ClusterGroupUpgrade CR is not enabled. Backup is still in progress. Note The managed policies apply in the order that they are listed in the managedPolicies field in the ClusterGroupUpgrade CR.
One managed policy is applied to the specified clusters at a time. When a cluster complies with the current policy, the next managed policy is applied to it. Sample ClusterGroupUpgrade CR in the Progressing state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z' 1 The Progressing fields show that TALM is in the process of remediating policies. 19.10.5.6. Update status TALM uses the Succeeded condition to report the status and reasons as follows: true All clusters are compliant with the specified managed policies. false Policy remediation failed as there were no clusters available for remediation, or because policy remediation took too long for one of the following reasons: The current batch contains canary updates and the cluster in the batch does not comply with all the managed policies within the batch timeout. Clusters did not comply with the managed policies within the timeout value specified in the remediationStrategy field.
Sample ClusterGroupUpgrade CR in the Succeeded state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: "True" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: "True" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: "False" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: "True" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z' 2 In the Progressing fields, the status is false as the update has completed; clusters are compliant with all the managed policies. 3 The Succeeded fields show that the validations completed successfully. 1 The status field includes a list of clusters and their respective statuses. The status of a cluster can be complete or timedout . Sample ClusterGroupUpgrade CR in the timedout state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z' 1 If a cluster's state is timedout , the currentPolicy field shows the name of the policy and the policy status. 2 The status for succeeded is false and the message indicates that policy remediation took too long. 19.10.5.7. Blocking ClusterGroupUpgrade CRs You can create multiple ClusterGroupUpgrade CRs and control their order of application. 
For example, if you create ClusterGroupUpgrade CR C that blocks the start of ClusterGroupUpgrade CR A, then ClusterGroupUpgrade CR A cannot start until the status of ClusterGroupUpgrade CR C becomes UpgradeComplete . One ClusterGroupUpgrade CR can have multiple blocking CRs. In this case, all the blocking CRs must complete before the upgrade for the current CR can start. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the content of the ClusterGroupUpgrade CRs in the cgu-a.yaml , cgu-b.yaml , and cgu-c.yaml files. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 1 Defines the blocking CRs. The cgu-a update cannot start until cgu-c is complete. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 The cgu-b update cannot start until cgu-a is complete. 
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {} 1 The cgu-c update does not have any blocking CRs. TALM starts the cgu-c update when the enable field is set to true . Create the ClusterGroupUpgrade CRs by running the following command for each relevant CR: USD oc apply -f <name>.yaml Start the update process by running the following command for each relevant CR: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> \ --type merge -p '{"spec":{"enable":true}}' The following examples show ClusterGroupUpgrade CRs where the enable field is set to true : Example for cgu-a with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {} 1 Shows the list of blocking CRs. 
Example for cgu-b with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 Shows the list of blocking CRs. Example for cgu-c with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0 1 The cgu-c update does not have any blocking CRs. 19.10.6. Update policies on managed clusters The Topology Aware Lifecycle Manager (TALM) remediates a set of inform policies for the clusters specified in the ClusterGroupUpgrade CR. TALM remediates inform policies by making enforce copies of the managed RHACM policies. Each copied policy has its own corresponding RHACM placement rule and RHACM placement binding. One by one, TALM adds each cluster from the current batch to the placement rule that corresponds with the applicable managed policy. If a cluster is already compliant with a policy, TALM skips applying that policy on the compliant cluster. TALM then moves on to applying the policy to the non-compliant cluster. 
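For illustration only, assuming a ClusterGroupUpgrade CR created in the default namespace (such as the cgu-a example above or the cgu-1 example later in this section), you can observe the enforce copies and their placement resources that TALM creates on the hub while a batch is being remediated:
$ oc get policies,placementbindings,placementrules -n default | grep cgu-
The copied policies are prefixed with the ClusterGroupUpgrade CR name, for example cgu-a-policy1-common-cluster-version-policy, and clusters are added to and removed from the corresponding placement rules as batches progress.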
After TALM completes the updates in a batch, all clusters are removed from the placement rules associated with the copied policies. Then, the update of the next batch starts. If a spoke cluster does not report any compliant state to RHACM, the managed policies on the hub cluster can be missing status information that TALM needs. TALM handles these cases in the following ways: If a policy's status.compliant field is missing, TALM ignores the policy and adds a log entry. Then, TALM continues looking at the policy's status.status field. If a policy's status.status is missing, TALM produces an error. If a cluster's compliance status is missing in the policy's status.status field, TALM considers that cluster to be non-compliant with that policy. The ClusterGroupUpgrade CR's batchTimeoutAction determines what happens if an upgrade fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or specify abort to stop the policy remediation for all clusters. Once the timeout elapses, TALM removes all enforce policies to ensure that no further updates are made to clusters. Example upgrade policy apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.4.13.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.13 desiredUpdate: version: 4.4.13.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.4.13.4 remediationAction: inform severity: low remediationAction: inform For more information about RHACM policies, see Policy overview . Additional resources For more information about the PolicyGenTemplate CRD, see About the PolicyGenTemplate CRD . 19.10.6.1. Configuring Operator subscriptions for managed clusters that you install with TALM Topology Aware Lifecycle Manager (TALM) can only approve the install plan for an Operator if the Subscription custom resource (CR) of the Operator contains the status.state.AtLatestKnown field. Procedure Add the status.state.AtLatestKnown field to the Subscription CR of the Operator: Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: "stable" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1 1 The status.state: AtLatestKnown field is used for the latest Operator version available from the Operator catalog. Note When a new version of the Operator is available in the registry, the associated policy becomes non-compliant. Apply the changed Subscription policy to your managed clusters with a ClusterGroupUpgrade CR. 19.10.6.2. Applying update policies to managed clusters You can update your managed clusters by applying your policies. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the contents of the ClusterGroupUpgrade CR in the cgu-1.yaml file.
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5 1 The name of the policies to apply. 2 The list of clusters to update. 3 The maxConcurrency field signifies the number of clusters updated at the same time. 4 The update timeout in minutes. 5 Controls what happens if a batch times out. Possible values are abort or continue . If unspecified, the default is continue . Create the ClusterGroupUpgrade CR by running the following command: USD oc create -f cgu-1.yaml Check if the ClusterGroupUpgrade CR was created in the hub cluster by running the following command: USD oc get cgu --all-namespaces Example output NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled Check the status of the update by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ { "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Not enabled", 1 "reason": "NotEnabled", "status": "False", "type": "Progressing" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": {} } 1 The spec.enable field in the ClusterGroupUpgrade CR is set to false . 
Check the status of the policies by running the following command: USD oc get policies -A Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m 1 The spec.remediationAction field of policies currently applied on the clusters is set to enforce . The managed policies in inform mode from the ClusterGroupUpgrade CR remain in inform mode during the update. Change the value of the spec.enable field to true by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 \ --patch '{"spec":{"enable":true}}' --type=merge Verification Check the status of the update again by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ 1 { "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "All selected clusters are valid", "reason": "ClusterSelectionCompleted", "status": "True", "type": "ClustersSelected", "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "Completed validation", "reason": "ValidationCompleted", "status": "True", "type": "Validated", "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Remediating non-compliant policies", "reason": "InProgress", "status": "True", "type": "Progressing" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": { "currentBatch": 1, "currentBatchStartedAt": 
"2022-02-25T15:54:16Z", "remediationPlanForBatch": { "spoke1": 0, "spoke2": 1 }, "startedAt": "2022-02-25T15:54:16Z" } } 1 Reflects the update progress of the current batch. Run this command again to receive updated information about the progress. If the policies include Operator subscriptions, you can check the installation progress directly on the single-node cluster. Export the KUBECONFIG file of the single-node cluster you want to check the installation progress for by running the following command: USD export KUBECONFIG=<cluster_kubeconfig_absolute_path> Check all the subscriptions present on the single-node cluster and look for the one in the policy you are trying to install through the ClusterGroupUpgrade CR by running the following command: USD oc get subs -A | grep -i <subscription_name> Example output for cluster-logging policy NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable If one of the managed policies includes a ClusterVersion CR, check the status of platform updates in the current batch by running the following command against the spoke cluster: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.13.5 True True 43s Working towards 4.4.13.7: 71 of 735 done (9% complete) Check the Operator subscription by running the following command: USD oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath="{.status}" Check the install plans present on the single-node cluster that is associated with the desired subscription by running the following command: USD oc get installplan -n <subscription_namespace> Example output for cluster-logging Operator NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1 1 The install plans have their Approval field set to Manual and their Approved field changes from false to true after TALM approves the install plan. Note When TALM is remediating a policy containing a subscription, it automatically approves any install plans attached to that subscription. Where multiple install plans are needed to get the operator to the latest known version, TALM might approve multiple install plans, upgrading through one or more intermediate versions to get to the final version. Check if the cluster service version for the Operator of the policy that the ClusterGroupUpgrade is installing reached the Succeeded phase by running the following command: USD oc get csv -n <operator_namespace> Example output for OpenShift Logging Operator NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded 19.10.7. Creating a backup of cluster resources before upgrade For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) can create a backup of a deployment before an upgrade. If the upgrade fails, you can recover the version and restore a cluster to a working state without requiring a reprovision of applications. To use the backup feature you first create a ClusterGroupUpgrade CR with the backup field set to true . To ensure that the contents of the backup are up to date, the backup is not taken until you set the enable field in the ClusterGroupUpgrade CR to true . TALM uses the BackupSucceeded condition to report the status and reasons as follows: true Backup is completed for all clusters or the backup run has completed but failed for one or more clusters. If backup fails for any cluster, the update does not proceed for that cluster. 
false Backup is still in progress for one or more clusters or has failed for all clusters. The backup process running in the spoke clusters can have the following statuses: PreparingToStart The first reconciliation pass is in progress. The TALM deletes any spoke backup namespace and hub view resources that have been created in a failed upgrade attempt. Starting The backup prerequisites and backup job are being created. Active The backup is in progress. Succeeded The backup succeeded. BackupTimeout Artifact backup is partially done. UnrecoverableError The backup has ended with a non-zero exit code. Note If the backup of a cluster fails and enters the BackupTimeout or UnrecoverableError state, the cluster update does not proceed for that cluster. Updates to other clusters are not affected and continue. 19.10.7.1. Creating a ClusterGroupUpgrade CR with backup You can create a backup of a deployment before an upgrade on single-node OpenShift clusters. If the upgrade fails you can use the upgrade-recovery.sh script generated by Topology Aware Lifecycle Manager (TALM) to return the system to its preupgrade state. The backup consists of the following items: Cluster backup A snapshot of etcd and static pod manifests. Content backup Backups of folders, for example, /etc , /usr/local , /var/lib/kubelet . Changed files backup Any files managed by machine-config that have been changed. Deployment A pinned ostree deployment. Images (Optional) Any container images that are in use. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Install Red Hat Advanced Cluster Management (RHACM). Note It is highly recommended that you create a recovery partition. The following is an example SiteConfig custom resource (CR) for a recovery partition of 50 GB: nodes: - hostName: "node-1.example.com" role: "master" rootDeviceHints: hctl: "0:2:0:0" deviceName: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 ... #Disk /dev/disk/by-id/scsi-3600508b400105e210000900000490000: #893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - mount_point: /var/recovery size: 51200 start: 800000 Procedure Save the contents of the ClusterGroupUpgrade CR with the backup and enable fields set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: true managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 To start the update, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check the status of the upgrade in the hub cluster by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "backup": { "clusters": [ "cnfdb2", "cnfdb1" ], "status": { "cnfdb1": "Succeeded", "cnfdb2": "Failed" 1 } }, "computedMaxConcurrency": 1, "conditions": [ { "lastTransitionTime": "2022-04-05T10:37:19Z", "message": "Backup failed for 1 cluster", 2 "reason": "PartiallyDone", 3 "status": "True", 4 "type": "Succeeded" } ], "precaching": { "spec": {} }, "status": {} 1 Backup has failed for one cluster. 2 The message confirms that the backup failed for one cluster. 
3 The backup was partially successful. 4 The backup process has finished. 19.10.7.2. Recovering a cluster after a failed upgrade If an upgrade of a cluster fails, you can manually log in to the cluster and use the backup to return the cluster to its preupgrade state. There are two stages: Rollback If the attempted upgrade included a change to the platform OS deployment, you must roll back to the version before running the recovery script. Important A rollback is only applicable to upgrades from TALM and single-node OpenShift. This process does not apply to rollbacks from any other upgrade type. Recovery The recovery shuts down containers and uses files from the backup partition to relaunch containers and restore clusters. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Install Red Hat Advanced Cluster Management (RHACM). Log in as a user with cluster-admin privileges. Run an upgrade that is configured for backup. Procedure Delete the previously created ClusterGroupUpgrade custom resource (CR) by running the following command: USD oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno Log in to the cluster that you want to recover. Check the status of the platform OS deployment by running the following command: USD ostree admin status Example outputs [root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9 1 The current deployment is pinned. A platform OS deployment rollback is not necessary. [root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca 1 This platform OS deployment is marked for rollback. 2 The deployment is pinned and can be rolled back. To trigger a rollback of the platform OS deployment, run the following command: USD rpm-ostree rollback -r The first phase of the recovery shuts down containers and restores files from the backup partition to the targeted directories. To begin the recovery, run the following command: USD /var/recovery/upgrade-recovery.sh When prompted, reboot the cluster by running the following command: USD systemctl reboot After the reboot, restart the recovery by running the following command: USD /var/recovery/upgrade-recovery.sh --resume Note If the recovery utility fails, you can retry with the --restart option: USD /var/recovery/upgrade-recovery.sh --restart Verification To check the status of the recovery run the following command: USD oc get clusterversion,nodes,clusteroperator Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.4.13.23 True False 86d Cluster version is 4.4.13.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.4.13.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.4.13.23 True False False 86d .............. 
1 The cluster version is available and has the correct version. 2 The node status is Ready . 3 The ClusterOperator object's availability is True . 19.10.8. Using the container image pre-cache feature Single-node OpenShift clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed. Note The time of the update is not set by TALM. You can apply the ClusterGroupUpgrade CR at the beginning of the update by manual application or by external automation. The container image pre-caching starts when the preCaching field is set to true in the ClusterGroupUpgrade CR. TALM uses the PrecacheSpecValid condition to report status information as follows: true The pre-caching spec is valid and consistent. false The pre-caching spec is incomplete. TALM uses the PrecachingSucceeded condition to report status information as follows: true TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters. false Pre-caching is still in progress for one or more clusters or has failed for all clusters. After a successful pre-caching process, you can start remediating policies. The remediation actions start when the enable field is set to true . If there is a pre-caching failure on a cluster, the upgrade fails for that cluster. The upgrade process continues for all other clusters that have a successful pre-cache. The pre-caching process can be in the following statuses: NotStarted This is the initial state all clusters are automatically assigned to on the first reconciliation pass of the ClusterGroupUpgrade CR. In this state, TALM deletes any pre-caching namespace and hub view resources of spoke clusters that remain from incomplete updates. TALM then creates a new ManagedClusterView resource for the spoke pre-caching namespace to verify its deletion in the PrecachePreparing state. PreparingToStart Cleaning up any remaining resources from incomplete updates is in progress. Starting Pre-caching job prerequisites and the job are created. Active The job is in "Active" state. Succeeded The pre-cache job succeeded. PrecacheTimeout The artifact pre-caching is partially done. UnrecoverableError The job ends with a non-zero exit code. 19.10.8.1. Using the container image pre-cache filter The pre-cache feature typically downloads more images than a cluster needs for an update. You can control which pre-cache images are downloaded to a cluster. This decreases download time, and saves bandwidth and storage. You can see a list of all images to be downloaded using the following command: USD oc adm release info <ocp-version> The following ConfigMap example shows how you can exclude images using the excludePrecachePatterns field. apiVersion: v1 kind: ConfigMap metadata: name: cluster-group-upgrade-overrides data: excludePrecachePatterns: | azure 1 aws vsphere alibaba 1 TALM excludes all images with names that include any of the patterns listed here. 19.10.8.2. Creating a ClusterGroupUpgrade CR with pre-caching For single-node OpenShift, the pre-cache feature allows the required container images to be present on the spoke cluster before the update starts. Note For pre-caching, TALM uses the spec.remediationStrategy.timeout value from the ClusterGroupUpgrade CR. You must set a timeout value that allows sufficient time for the pre-caching job to complete. 
When you enable the ClusterGroupUpgrade CR after pre-caching has completed, you can change the timeout value to a duration that is appropriate for the update. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Procedure Save the contents of the ClusterGroupUpgrade CR with the preCaching field set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 1 The preCaching field is set to true , which enables TALM to pull the container images before starting the update. When you want to start pre-caching, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check if the ClusterGroupUpgrade CR exists in the hub cluster by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1 1 The CR is created. Check the status of the pre-caching task by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "conditions": [ { "lastTransitionTime": "2022-01-27T19:07:24Z", "message": "Precaching is required and not done", "reason": "InProgress", "status": "False", "type": "PrecachingSucceeded" }, { "lastTransitionTime": "2022-01-27T19:07:34Z", "message": "Pre-caching spec is valid and consistent", "reason": "PrecacheSpecIsWellFormed", "status": "True", "type": "PrecacheSpecValid" } ], "precaching": { "clusters": [ "cnfdb1" 1 "cnfdb2" ], "spec": { "platformImage": "image.example.io"}, "status": { "cnfdb1": "Active" "cnfdb2": "Succeeded"} } } 1 Displays the list of identified clusters. Check the status of the pre-caching job by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache Example output NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s Check the status of the ClusterGroupUpgrade CR by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output "conditions": [ { "lastTransitionTime": "2022-01-27T19:30:41Z", "message": "The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies", "reason": "UpgradeCompleted", "status": "True", "type": "Ready" }, { "lastTransitionTime": "2022-01-27T19:28:57Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingSucceeded" 1 } 1 The pre-cache tasks are done. 19.10.9. Troubleshooting the Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) is an OpenShift Container Platform Operator that remediates RHACM policies. When issues occur, use the oc adm must-gather command to gather details and logs and to take steps in debugging the issues. For more information about related topics, see the following documentation: Red Hat Advanced Cluster Management for Kubernetes 2.4 Support Matrix Red Hat Advanced Cluster Management Troubleshooting The "Troubleshooting Operator issues" section 19.10.9.1. 
General troubleshooting You can determine the cause of the problem by reviewing the following questions: Is the configuration that you are applying supported? Are the RHACM and the OpenShift Container Platform versions compatible? Are the TALM and RHACM versions compatible? Which of the following components is causing the problem? Section 19.10.9.3, "Managed policies" Section 19.10.9.4, "Clusters" Section 19.10.9.5, "Remediation Strategy" Section 19.10.9.6, "Topology Aware Lifecycle Manager" To ensure that the ClusterGroupUpgrade configuration is functional, you can do the following: Create the ClusterGroupUpgrade CR with the spec.enable field set to false . Wait for the status to be updated and go through the troubleshooting questions. If everything looks as expected, set the spec.enable field to true in the ClusterGroupUpgrade CR. Warning After you set the spec.enable field to true in the ClusterUpgradeGroup CR, the update procedure starts and you cannot edit the CR's spec fields anymore. 19.10.9.2. Cannot modify the ClusterUpgradeGroup CR Issue You cannot edit the ClusterUpgradeGroup CR after enabling the update. Resolution Restart the procedure by performing the following steps: Remove the old ClusterGroupUpgrade CR by running the following command: USD oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name> Check and fix the existing issues with the managed clusters and policies. Ensure that all the clusters are managed clusters and available. Ensure that all the policies exist and have the spec.remediationAction field set to inform . Create a new ClusterGroupUpgrade CR with the correct configurations. USD oc apply -f <ClusterGroupUpgradeCR_YAML> 19.10.9.3. Managed policies Checking managed policies on the system Issue You want to check if you have the correct managed policies on the system. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}' Example output ["group-du-sno-validator-du-validator-policy", "policy2-common-nto-sub-policy", "policy3-common-ptp-sub-policy"] Checking remediationAction mode Issue You want to check if the remediationAction field is set to inform in the spec of the managed policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h Checking policy compliance state Issue You want to check the compliance state of policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h 19.10.9.4. Clusters Checking if managed clusters are present Issue You want to check if the clusters in the ClusterGroupUpgrade CR are managed clusters. 
Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h Alternatively, check the TALM manager logs: Get the name of the TALM manager by running the following command: USD oc get pod -n openshift-operators Example output NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m Check the TALM manager logs by running the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 The error message shows that the cluster is not a managed cluster. Checking if managed clusters are available Issue You want to check if the managed clusters specified in the ClusterGroupUpgrade CR are available. Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2 1 2 The value of the AVAILABLE field is True for the managed clusters. Checking clusterLabelSelector Issue You want to check if the clusterLabelSelector field specified in the ClusterGroupUpgrade CR matches at least one of the managed clusters. Resolution Run the following command: USD oc get managedcluster --selector=upgrade=true 1 1 The label for the clusters you want to update is upgrade:true . Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Checking if canary clusters are present Issue You want to check if the canary clusters are present in the list of clusters. Example ClusterGroupUpgrade CR spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true Resolution Run the following commands: USD oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}' Example output ["spoke1", "spoke3"] Check if the canary clusters are present in the list of clusters that match clusterLabelSelector labels by running the following command: USD oc get managedcluster --selector=upgrade=true Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Note A cluster can be present in spec.clusters and also be matched by the spec.clusterLabelSelector label. Checking the pre-caching status on spoke clusters Check the status of pre-caching by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache 19.10.9.5. 
Remediation Strategy Checking if remediationStrategy is present in the ClusterGroupUpgrade CR Issue You want to check if the remediationStrategy is present in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}' Example output {"maxConcurrency":2, "timeout":240} Checking if maxConcurrency is specified in the ClusterGroupUpgrade CR Issue You want to check if the maxConcurrency is specified in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}' Example output 2 19.10.9.6. Topology Aware Lifecycle Manager Checking condition message and status in the ClusterGroupUpgrade CR Issue You want to check the value of the status.conditions field in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.conditions}' Example output {"lastTransitionTime":"2022-02-17T22:25:28Z", "message":"Missing managed policies:[policyList]", "reason":"NotAllManagedPoliciesExist", "status":"False", "type":"Validated"} Checking corresponding copied policies Issue You want to check if every policy from status.managedPoliciesForUpgrade has a corresponding policy in status.copiedPolicies . Resolution Run the following command: USD oc get cgu lab-upgrade -oyaml Example output status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default Checking if status.remediationPlan was computed Issue You want to check if status.remediationPlan is computed. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}' Example output [["spoke2", "spoke3"]] Errors in the TALM manager container Issue You want to check the logs of the manager container of TALM. Resolution Run the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 Displays the error. Clusters are not compliant to some policies after a ClusterGroupUpgrade CR has completed Issue The policy compliance status that TALM uses to decide if remediation is needed has not yet fully updated for all clusters. This may be because: The CGU was run too soon after a policy was created or updated. The remediation of a policy affects the compliance of subsequent policies in the ClusterGroupUpgrade CR. Resolution Create and apply a new ClusterGroupUpdate CR with the same specification. Auto-created ClusterGroupUpgrade CR in the GitOps ZTP workflow has no managed policies Issue If there are no policies for the managed cluster when the cluster becomes Ready , a ClusterGroupUpgrade CR with no policies is auto-created. Upon completion of the ClusterGroupUpgrade CR, the managed cluster is labeled as ztp-done . If the PolicyGenTemplate CRs were not pushed to the Git repository within the required time after SiteConfig resources were pushed, this might result in no policies being available for the target cluster when the cluster became Ready . 
Resolution Verify that the policies you want to apply are available on the hub cluster, then create a ClusterGroupUpgrade CR with the required policies. You can either manually create the ClusterGroupUpgrade CR or trigger auto-creation again. To trigger auto-creation of the ClusterGroupUpgrade CR, remove the ztp-done label from the cluster and delete the empty ClusterGroupUpgrade CR that was previously created in the ztp-install namespace. Pre-caching has failed Issue Pre-caching might fail for one of the following reasons: There is not enough free space on the node. For a disconnected environment, the pre-cache image has not been properly mirrored. There was an issue when creating the pod. Resolution To check if pre-caching has failed due to insufficient space, check the log of the pre-caching pod on the node. Find the name of the pod using the following command: USD oc get pods -n openshift-talo-pre-cache Check the logs to see if the error is related to insufficient space using the following command: USD oc logs -n openshift-talo-pre-cache <pod name> If there is no log, check the pod status using the following command: USD oc describe pod -n openshift-talo-pre-cache <pod name> If the pod does not exist, check the job status to see why it could not create a pod using the following command: USD oc describe job -n openshift-talo-pre-cache pre-cache Additional resources For information about troubleshooting, see OpenShift Container Platform Troubleshooting Operator Issues . For more information about using Topology Aware Lifecycle Manager in the ZTP workflow, see Updating managed policies with Topology Aware Lifecycle Manager . For more information about the PolicyGenTemplate CRD, see About the PolicyGenTemplate CRD . 19.11. Updating managed clusters in a disconnected environment with the Topology Aware Lifecycle Manager You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of OpenShift Container Platform managed clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters. Additional resources For more information about the Topology Aware Lifecycle Manager, see About the Topology Aware Lifecycle Manager . 19.11.1. Updating clusters in a disconnected environment You can upgrade managed clusters and Operators for managed clusters that you have deployed using GitOps Zero Touch Provisioning (ZTP) and Topology Aware Lifecycle Manager (TALM). 19.11.1.1. Setting up the environment TALM can perform both platform and Operator updates. You must mirror both the platform image and Operator images that you want to update to in your mirror registry before you can use TALM to update your disconnected clusters. Complete the following steps to mirror the images: For platform updates, you must perform the following steps: Mirror the desired OpenShift Container Platform image repository. Ensure that the desired platform image is mirrored by following the "Mirroring the OpenShift Container Platform image repository" procedure linked in the Additional resources.
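For example, with the oc adm release mirror utility the mirroring step can look like the following minimal sketch. The registry host is taken from the imageContentSources example in the next step and 4.13.4 is the target release used later in this procedure; LOCAL_PULL_SECRET is an assumed variable that points to your registry credentials file, and the linked mirroring procedure remains the authoritative reference:
USD oc adm release mirror -a USD{LOCAL_PULL_SECRET} \
    --from=quay.io/openshift-release-dev/ocp-release:4.13.4-x86_64 \
    --to=mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 \
    --to-release-image=mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4:4.13.4-x86_64
On completion, the command prints an imageContentSources section, which you save in the next step.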
Save the contents of the imageContentSources section in the imageContentSources.yaml file: Example output imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Save the image signature of the desired platform image that was mirrored. You must add the image signature to the PolicyGenTemplate CR for platform updates. To get the image signature, perform the following steps: Specify the desired OpenShift Container Platform tag by running the following command: USD OCP_RELEASE_NUMBER=<release_version> Specify the architecture of the cluster by running the following command: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Get the release image digest from Quay by running the following command USD DIGEST="USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')" Set the digest algorithm by running the following command: USD DIGEST_ALGO="USD{DIGEST%%:*}" Set the digest signature by running the following command: USD DIGEST_ENCODED="USD{DIGEST#*:}" Get the image signature from the mirror.openshift.com website by running the following command: USD SIGNATURE_BASE64=USD(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1" | base64 -w0 && echo) Save the image signature to the checksum-<OCP_RELEASE_NUMBER>.yaml file by running the following commands: USD cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF Prepare the update graph. You have two options to prepare the update graph: Use the OpenShift Update Service. For more information about how to set up the graph on the hub cluster, see Deploy the operator for OpenShift Update Service and Build the graph data init container . Make a local copy of the upstream graph. Host the update graph on an http or https server in the disconnected environment that has access to the managed cluster. To download the update graph, use the following command: USD curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.13 -o ~/upgrade-graph_stable-4.13 For Operator updates, you must perform the following task: Mirror the Operator catalogs. Ensure that the desired operator images are mirrored by following the procedure in the "Mirroring Operator catalogs for use with disconnected clusters" section. Additional resources For more information about how to update GitOps Zero Touch Provisioning (ZTP), see Upgrading GitOps ZTP . For more information about how to mirror an OpenShift Container Platform image repository, see Mirroring the OpenShift Container Platform image repository . For more information about how to mirror Operator catalogs for disconnected clusters, see Mirroring Operator catalogs for use with disconnected clusters . For more information about how to prepare the disconnected environment and mirroring the desired image repository, see Preparing the disconnected environment . For more information about update channels and releases, see Understanding update channels and releases . 19.11.1.2. Performing a platform update You can perform a platform update with the TALM. 
Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update GitOps Zero Touch Provisioning (ZTP) to the latest version. Provision one or more managed clusters with GitOps ZTP. Mirror the desired image repository. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Create a PolicyGenTemplate CR for the platform update: Save the following contents of the PolicyGenTemplate CR in the du-upgrade.yaml file. Example of PolicyGenTemplate for platform update apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: "platform-upgrade-prep" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: "platform-upgrade-prep" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: "platform-upgrade" metadata: name: version spec: channel: "stable-4.13" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.13 desiredUpdate: version: 4.13.4 status: history: - version: 4.13.4 state: "Completed" 1 The ConfigMap CR contains the signature of the desired release image to update to. 2 Shows the image signature of the desired OpenShift Container Platform release. Get the signature from the checksum-USD{OCP_RELEASE_NUMBER}.yaml file you saved when following the procedures in the "Setting up the environment" section. 3 Shows the mirror repository that contains the desired OpenShift Container Platform image. Get the mirrors from the imageContentSources.yaml file that you saved when following the procedures in the "Setting up the environment" section. 4 Shows the ClusterVersion CR to trigger the update. The channel , upstream , and desiredVersion fields are all required for image pre-caching. The PolicyGenTemplate CR generates two policies: The du-upgrade-platform-upgrade-prep policy does the preparation work for the platform update. It creates the ConfigMap CR for the desired release image signature, creates the image content source of the mirrored release image repository, and updates the cluster version with the desired update channel and the update graph reachable by the managed cluster in the disconnected environment. The du-upgrade-platform-upgrade policy is used to perform platform upgrade. Add the du-upgrade.yaml file contents to the kustomization.yaml file located in the GitOps ZTP Git repository for the PolicyGenTemplate CRs and push the changes to the Git repository. ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster. Check the created policies by running the following command: USD oc get policies -A | grep platform-upgrade Create the ClusterGroupUpdate CR for the platform update with the spec.enable field set to false . 
Save the content of the platform update ClusterGroupUpdate CR with the du-upgrade-platform-upgrade-prep and the du-upgrade-platform-upgrade policies and the target clusters to the cgu-platform-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false Apply the ClusterGroupUpdate CR to the hub cluster by running the following command: USD oc apply -f cgu-platform-upgrade.yml Optional: Pre-cache the images for the platform update. Enable pre-caching in the ClusterGroupUpdate CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster: USD oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}' Start the platform update: Enable the cgu-platform-upgrade policy and disable pre-caching by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Additional resources For more information about mirroring the images in a disconnected environment, see Preparing the disconnected environment . 19.11.1.3. Performing an Operator update You can perform an Operator update with the TALM. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update GitOps Zero Touch Provisioning (ZTP) to the latest version. Provision one or more managed clusters with GitOps ZTP. Mirror the desired index image, bundle images, and all Operator images referenced in the bundle images. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Update the PolicyGenTemplate CR for the Operator update. Update the du-upgrade PolicyGenTemplate CR with the following additional contents in the du-upgrade.yaml file: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.13 1 updateStrategy: 2 registryPoll: interval: 1h 1 The index image URL contains the desired Operator images. If the index images are always pushed to the same image name and tag, this change is not needed. 2 Set how frequently the Operator Lifecycle Manager (OLM) polls the index image for new Operator versions with the registryPoll.interval field. This change is not needed if a new index image tag is always pushed for y-stream and z-stream Operator updates. The registryPoll.interval field can be set to a shorter interval to expedite the update, however shorter intervals increase computational load. 
To counteract this, you can restore registryPoll.interval to the default value once the update is complete. This update generates one policy, du-upgrade-operator-catsrc-policy , to update the redhat-operators-disconnected catalog source with the new index images that contain the desired Operators images. Note If you want to use the image pre-caching for Operators and there are Operators from a different catalog source other than redhat-operators-disconnected , you must perform the following tasks: Prepare a separate catalog source policy with the new index image or registry poll interval update for the different catalog source. Prepare a separate subscription policy for the desired Operators that are from the different catalog source. For example, the desired SRIOV-FEC Operator is available in the certified-operators catalog source. To update the catalog source and the Operator subscription, add the following contents to generate two policies, du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: ... - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "fec-catsrc-policy" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: "subscriptions-fec-policy" spec: channel: "stable" source: certified-operators Remove the specified subscriptions channels in the common PolicyGenTemplate CR, if they exist. The default subscriptions channels from the GitOps ZTP image are used for the update. Note The default channel for the Operators applied through GitOps ZTP 4.13 is stable , except for the performance-addon-operator . As of OpenShift Container Platform 4.11, the performance-addon-operator functionality was moved to the node-tuning-operator . For the 4.10 release, the default channel for PAO is v4.10 . You can also specify the default channels in the common PolicyGenTemplate CR. Push the PolicyGenTemplate CRs updates to the GitOps ZTP Git repository. ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster. Check the created policies by running the following command: USD oc get policies -A | grep -E "catsrc-policy|subscription" Apply the required catalog source updates before starting the Operator update. Save the content of the ClusterGroupUpgrade CR named operator-upgrade-prep with the catalog source policies and the target managed clusters to the cgu-operator-upgrade-prep.yml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1 Apply the policy to the hub cluster by running the following command: USD oc apply -f cgu-operator-upgrade-prep.yml Monitor the update process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies -A | grep -E "catsrc-policy" Create the ClusterGroupUpgrade CR for the Operator update with the spec.enable field set to false . 
Save the content of the Operator update ClusterGroupUpgrade CR with the du-upgrade-operator-catsrc-policy policy and the subscription policies created from the common PolicyGenTemplate and the target clusters to the cgu-operator-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false 1 The policy is needed by the image pre-caching feature to retrieve the operator images from the catalog source. 2 The policy contains Operator subscriptions. If you have followed the structure and content of the reference PolicyGenTemplates , all Operator subscriptions are grouped into the common-subscriptions-policy policy. Note One ClusterGroupUpgrade CR can only pre-cache the images of the desired Operators defined in the subscription policy from one catalog source included in the ClusterGroupUpgrade CR. If the desired Operators are from different catalog sources, such as in the example of the SRIOV-FEC Operator, another ClusterGroupUpgrade CR must be created with du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy policies for the SRIOV-FEC Operator images pre-caching and update. Apply the ClusterGroupUpgrade CR to the hub cluster by running the following command: USD oc apply -f cgu-operator-upgrade.yml Optional: Pre-cache the images for the Operator update. Before starting image pre-caching, verify the subscription policy is NonCompliant at this point by running the following command: USD oc get policy common-subscriptions-policy -n <policy_namespace> Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d Enable pre-caching in the ClusterGroupUpgrade CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the managed cluster: USD oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}' Check if the pre-caching is completed before starting the update by running the following command: USD oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq Example output [ { "lastTransitionTime": "2022-03-08T20:49:08.000Z", "message": "The ClusterGroupUpgrade CR is not enabled", "reason": "UpgradeNotStarted", "status": "False", "type": "Ready" }, { "lastTransitionTime": "2022-03-08T20:55:30.000Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingDone" } ] Start the Operator update. Enable the cgu-operator-upgrade ClusterGroupUpgrade CR and disable pre-caching to start the Operator update by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Additional resources For more information about updating GitOps ZTP, see Upgrading GitOps ZTP . 
Troubleshooting missed Operator updates due to out-of-date policy compliance states . 19.11.1.3.1. Troubleshooting missed Operator updates due to out-of-date policy compliance states In some scenarios, Topology Aware Lifecycle Manager (TALM) might miss Operator updates due to an out-of-date policy compliance state. After a catalog source update, it takes time for the Operator Lifecycle Manager (OLM) to update the subscription status. The status of the subscription policy might continue to show as compliant while TALM decides whether remediation is needed. As a result, the Operator specified in the subscription policy does not get upgraded. To avoid this scenario, add another catalog source configuration to the PolicyGenTemplate and specify this configuration in the subscription for any Operators that require an update. Procedure Add a catalog source configuration in the PolicyGenTemplate resource: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v{product-version} updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olm/redhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY 1 Update the name for the new configuration. 2 Update the display name for the new configuration. 3 Update the index image URL. This fileName.spec.image field overrides any configuration in the DefaultCatsrc.yaml file. Update the Subscription resource to point to the new configuration for Operators that require an update: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namspace # ... spec: source: redhat-operators-disconnected-v2 1 # ... 1 Enter the name of the additional catalog source configuration that you defined in the PolicyGenTemplate resource. 19.11.1.4. Performing a platform and an Operator update together You can perform a platform and an Operator update at the same time. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Update GitOps Zero Touch Provisioning (ZTP) to the latest version. Provision one or more managed clusters with GitOps ZTP. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Create the PolicyGenTemplate CR for the updates by following the steps described in the "Performing a platform update" and "Performing an Operator update" sections. Apply the prep work for the platform and the Operator update. 
Save the content of the ClusterGroupUpgrade CR with the policies for platform update preparation work, catalog source updates, and target clusters to the cgu-platform-operator-upgrade-prep.yml file, for example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true Apply the cgu-platform-operator-upgrade-prep.yml file to the hub cluster by running the following command: USD oc apply -f cgu-platform-operator-upgrade-prep.yml Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Create the ClusterGroupUpdate CR for the platform and the Operator update with the spec.enable field set to false . Save the contents of the platform and Operator update ClusterGroupUpdate CR with the policies and the target clusters to the cgu-platform-operator-upgrade.yml file, as shown in the following example: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false 1 This is the platform update policy. 2 This is the policy containing the catalog source information for the Operators to be updated. It is needed for the pre-caching feature to determine which Operator images to download to the managed cluster. 3 This is the policy to update the Operators. Apply the cgu-platform-operator-upgrade.yml file to the hub cluster by running the following command: USD oc apply -f cgu-platform-operator-upgrade.yml Optional: Pre-cache the images for the platform and the Operator update. Enable pre-caching in the ClusterGroupUpgrade CR by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the managed cluster: USD oc get jobs,pods -n openshift-talm-pre-cache Check if the pre-caching is completed before starting the update by running the following command: USD oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}' Start the platform and Operator update. Enable the cgu-du-upgrade ClusterGroupUpgrade CR to start the platform and the Operator update by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge Monitor the process. Upon completion, ensure that the policy is compliant by running the following command: USD oc get policies --all-namespaces Note The CRs for the platform and Operator updates can be created from the beginning by configuring the setting to spec.enable: true . In this case, the update starts immediately after pre-caching completes and there is no need to manually enable the CR. Both pre-caching and the update create extra resources, such as policies, placement bindings, placement rules, managed cluster actions, and managed cluster view, to help complete the procedures. 
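For example, while the update is running you can list some of these temporary objects in the namespace of the ClusterGroupUpgrade CR. This is a sketch that assumes the CR was created in the default namespace, as in the examples above, and that the RHACM policy CRDs are installed on the hub cluster:
USD oc get placementrules,placementbindings,policies -n default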
Setting the afterCompletion.deleteObjects field to true deletes all these resources after the updates complete. 19.11.1.5. Removing Performance Addon Operator subscriptions from deployed clusters In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11 or later, these functions are part of the Node Tuning Operator. Do not install the Performance Addon Operator on clusters running OpenShift Container Platform 4.11 or later. If you upgrade to OpenShift Container Platform 4.11 or later, the Node Tuning Operator automatically removes the Performance Addon Operator. Note You need to remove any policies that create Performance Addon Operator subscriptions to prevent a re-installation of the Operator. The reference DU profile includes the Performance Addon Operator in the PolicyGenTemplate CR common-ranGen.yaml . To remove the subscription from deployed managed clusters, you must update common-ranGen.yaml . Note If you install Performance Addon Operator 4.10.3-5 or later on OpenShift Container Platform 4.11 or later, the Performance Addon Operator detects the cluster version and automatically hibernates to avoid interfering with the Node Tuning Operator functions. However, to ensure best performance, remove the Performance Addon Operator from your OpenShift Container Platform 4.11 clusters. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for ArgoCD. Update to OpenShift Container Platform 4.11 or later. Log in as a user with cluster-admin privileges. Procedure Change the complianceType to mustnothave for the Performance Addon Operator namespace, Operator group, and subscription in the common-ranGen.yaml file. - fileName: PaoSubscriptionNS.yaml policyName: "subscriptions-policy" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: "subscriptions-policy" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: "subscriptions-policy" complianceType: mustnothave Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The status of the common-subscriptions-policy policy changes to Non-Compliant . Apply the change to your target clusters by using the Topology Aware Lifecycle Manager. For more information about rolling out configuration changes, see the "Additional resources" section. Monitor the process. When the status of the common-subscriptions-policy policy for a target cluster is Compliant , the Performance Addon Operator has been removed from the cluster. Get the status of the common-subscriptions-policy by running the following command: USD oc get policy -n ztp-common common-subscriptions-policy Delete the Performance Addon Operator namespace, Operator group and subscription CRs from .spec.sourceFiles in the common-ranGen.yaml file. Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The policy remains compliant. Additional resources For more information about the TALM pre-caching workflow, see Using the container image pre-cache feature . 19.11.2. 
About the auto-created ClusterGroupUpgrade CR for GitOps ZTP TALM has a controller called ManagedClusterForCGU that monitors the Ready state of the ManagedCluster CRs on the hub cluster and creates the ClusterGroupUpgrade CRs for GitOps Zero Touch Provisioning (ZTP). For any managed cluster in the Ready state without a ztp-done label applied, the ManagedClusterForCGU controller automatically creates a ClusterGroupUpgrade CR in the ztp-install namespace with its associated RHACM policies that are created during the GitOps ZTP process. TALM then remediates the set of configuration policies that are listed in the auto-created ClusterGroupUpgrade CR to push the configuration CRs to the managed cluster. If there are no policies for the managed cluster at the time when the cluster becomes Ready , a ClusterGroupUpgrade CR with no policies is created. Upon completion of the ClusterGroupUpgrade the managed cluster is labeled as ztp-done . If there are policies that you want to apply for that managed cluster, manually create a ClusterGroupUpgrade as a day-2 operation. Example of an auto-created ClusterGroupUpgrade CR for GitOps ZTP apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: "46666836" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: "" 1 deleteClusterLabels: ztp-running: "" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: "" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240 1 Applied to the managed cluster when TALM completes the cluster configuration. 2 Applied to the managed cluster when TALM starts deploying the configuration policies. 19.12. Updating GitOps ZTP You can update the GitOps Zero Touch Provisioning (ZTP) infrastructure independently from the hub cluster, Red Hat Advanced Cluster Management (RHACM), and the managed OpenShift Container Platform clusters. Note You can update the Red Hat OpenShift GitOps Operator when new versions become available. When updating the GitOps ZTP plugin, review the updated files in the reference configuration and ensure that the changes meet your requirements. 19.12.1. Overview of the GitOps ZTP update process You can update GitOps Zero Touch Provisioning (ZTP) for a fully operational hub cluster running an earlier version of the GitOps ZTP infrastructure. The update process avoids impact on managed clusters. Note Any changes to policy settings, including adding recommended content, results in updated polices that must be rolled out to the managed clusters and reconciled. At a high level, the strategy for updating the GitOps ZTP infrastructure is as follows: Label all existing clusters with the ztp-done label. Stop the ArgoCD applications. Install the new GitOps ZTP tools. Update required content and optional changes in the Git repository. Update and restart the application configuration. 19.12.2. Preparing for the upgrade Use the following procedure to prepare your site for the GitOps Zero Touch Provisioning (ZTP) upgrade. 
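Optionally, before extracting the container you can check which versions of the GitOps ZTP container image are available in the registry. The following is a sketch that assumes the skopeo tool is installed and that you are logged in to registry.redhat.io:
USD skopeo list-tags docker://registry.redhat.io/openshift4/ztp-site-generate-rhel8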
Procedure Get the latest version of the GitOps ZTP container that has the custom resources (CRs) used to configure Red Hat OpenShift GitOps for use with GitOps ZTP. Extract the argocd/deployment directory by using the following commands: USD mkdir -p ./update USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 extract /home/ztp --tar | tar x -C ./update The /update directory contains the following subdirectories: update/extra-manifest : contains the source CR files that the SiteConfig CR uses to generate the extra manifest configMap . update/source-crs : contains the source CR files that the PolicyGenTemplate CR uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. update/argocd/deployment : contains patches and YAML files to apply on the hub cluster for use in the step of this procedure. update/argocd/example : contains example SiteConfig and PolicyGenTemplate files that represent the recommended configuration. Update the clusters-app.yaml and policies-app.yaml files to reflect the name of your applications and the URL, branch, and path for your Git repository. If the upgrade includes changes that results in obsolete policies, the obsolete policies should be removed prior to performing the upgrade. Diff the changes between the configuration and deployment source CRs in the /update folder and Git repo where you manage your fleet site CRs. Apply and push the required changes to your site repository. Important When you update GitOps ZTP to the latest version, you must apply the changes from the update/argocd/deployment directory to your site repository. Do not use older versions of the argocd/deployment/ files. 19.12.3. Labeling the existing clusters To ensure that existing clusters remain untouched by the tool updates, label all existing managed clusters with the ztp-done label. Note This procedure only applies when updating clusters that were not provisioned with Topology Aware Lifecycle Manager (TALM). Clusters that you provision with TALM are automatically labeled with ztp-done . Procedure Find a label selector that lists the managed clusters that were deployed with GitOps Zero Touch Provisioning (ZTP), such as local-cluster!=true : USD oc get managedcluster -l 'local-cluster!=true' Ensure that the resulting list contains all the managed clusters that were deployed with GitOps ZTP, and then use that selector to add the ztp-done label: USD oc label managedcluster -l 'local-cluster!=true' ztp-done= 19.12.4. Stopping the existing GitOps ZTP applications Removing the existing applications ensures that any changes to existing content in the Git repository are not rolled out until the new version of the tools is available. Use the application files from the deployment directory. If you used custom names for the applications, update the names in these files first. Procedure Perform a non-cascaded delete on the clusters application to leave all generated resources in place: USD oc delete -f update/argocd/deployment/clusters-app.yaml Perform a cascaded delete on the policies application to remove all policies: USD oc patch -f policies-app.yaml -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge USD oc delete -f update/argocd/deployment/policies-app.yaml 19.12.5. 
Required changes to the Git repository When upgrading the ztp-site-generate container from an earlier release of GitOps Zero Touch Provisioning (ZTP) to 4.10 or later, there are additional requirements for the contents of the Git repository. Existing content in the repository must be updated to reflect these changes. Make required changes to PolicyGenTemplate files: All PolicyGenTemplate files must be created in a Namespace prefixed with ztp . This ensures that the GitOps ZTP application is able to manage the policy CRs generated by GitOps ZTP without conflicting with the way Red Hat Advanced Cluster Management (RHACM) manages the policies internally. Add the kustomization.yaml file to the repository: All SiteConfig and PolicyGenTemplate CRs must be included in a kustomization.yaml file under their respective directory trees. For example: ├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml Note The files listed in the generator sections must contain either SiteConfig or PolicyGenTemplate CRs only. If your existing YAML files contain other CRs, for example, Namespace , these other CRs must be pulled out into separate files and listed in the resources section. The PolicyGenTemplate kustomization file must contain all PolicyGenTemplate YAML files in the generator section and Namespace CRs in the resources section. For example: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml The SiteConfig kustomization file must contain all SiteConfig YAML files in the generator section and any other CRs in the resources: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml Remove the pre-sync.yaml and post-sync.yaml files. In OpenShift Container Platform 4.10 and later, the pre-sync.yaml and post-sync.yaml files are no longer required. The update/deployment/kustomization.yaml CR manages the policies deployment on the hub cluster. Note There is a set of pre-sync.yaml and post-sync.yaml files under both the SiteConfig and PolicyGenTemplate trees. Review and incorporate recommended changes Each release may include additional recommended changes to the configuration applied to deployed clusters. Typically these changes result in lower CPU use by the OpenShift platform, additional features, or improved tuning of the platform. Review the reference SiteConfig and PolicyGenTemplate CRs applicable to the types of cluster in your network. These examples can be found in the argocd/example directory extracted from the GitOps ZTP container. 19.12.6. Installing the new GitOps ZTP applications Using the extracted argocd/deployment directory, and after ensuring that the applications point to your site Git repository, apply the full contents of the deployment directory. Applying the full contents of the directory ensures that all necessary resources for the applications are correctly configured. 
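Because the deployment directory is applied with Kustomize, you can optionally render its contents locally and review what will be created or changed before running the commands in the following procedure. This is a sketch that assumes you extracted the container content into the ./update directory as described earlier:
USD oc kustomize update/argocd/deployment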
Procedure To patch the ArgoCD instance in the hub cluster by using the patch file that you previously extracted into the update/argocd/deployment/ directory, enter the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file update/argocd/deployment/argocd-openshift-gitops-patch.json To apply the contents of the argocd/deployment directory, enter the following command: USD oc apply -k update/argocd/deployment 19.12.7. Rolling out the GitOps ZTP configuration changes If any configuration changes were included in the upgrade due to implementing recommended changes, the upgrade process results in a set of policy CRs on the hub cluster in the Non-Compliant state. With the GitOps Zero Touch Provisioning (ZTP) version 4.10 and later ztp-site-generate container, these policies are set to inform mode and are not pushed to the managed clusters without an additional step by the user. This ensures that potentially disruptive changes to the clusters can be managed in terms of when the changes are made, for example, during a maintenance window, and how many clusters are updated concurrently. To roll out the changes, create one or more ClusterGroupUpgrade CRs as detailed in the TALM documentation. The CR must contain the list of Non-Compliant policies that you want to push out to the managed clusters as well as a list or selector of which clusters should be included in the update. Additional resources For information about the Topology Aware Lifecycle Manager (TALM), see About the Topology Aware Lifecycle Manager configuration . For information about creating ClusterGroupUpgrade CRs, see About the auto-created ClusterGroupUpgrade CR for ZTP . 19.13. Expanding single-node OpenShift clusters with GitOps ZTP You can expand single-node OpenShift clusters with GitOps Zero Touch Provisioning (ZTP). When you add worker nodes to single-node OpenShift clusters, the original single-node OpenShift cluster retains the control plane node role. Adding worker nodes does not require any downtime for the existing single-node OpenShift cluster. Note Although there is no specified limit on the number of worker nodes that you can add to a single-node OpenShift cluster, you must revaluate the reserved CPU allocation on the control plane node for the additional worker nodes. If you require workload partitioning on the worker node, you must deploy and remediate the managed cluster policies on the hub cluster before installing the node. This way, the workload partitioning MachineConfig objects are rendered and associated with the worker machine config pool before the GitOps ZTP workflow applies the MachineConfig ignition file to the worker node. It is recommended that you first remediate the policies, and then install the worker node. If you create the workload partitioning manifests after installing the worker node, you must drain the node manually and delete all the pods managed by daemon sets. When the managing daemon sets create the new pods, the new pods undergo the workload partitioning process. Important Adding worker nodes to single-node OpenShift clusters with GitOps ZTP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources For more information about single-node OpenShift clusters tuned for vDU application deployments, see Reference configuration for deploying vDUs on single-node OpenShift . For more information about worker nodes, see Adding worker nodes to single-node OpenShift clusters . For information about removing a worker node from an expanded single-node OpenShift cluster, see Removing managed cluster nodes by using the command line interface . 19.13.1. Applying profiles to the worker node You can configure the additional worker node with a DU profile. You can apply a RAN distributed unit (DU) profile to the worker node cluster using the GitOps Zero Touch Provisioning (ZTP) common, group, and site-specific PolicyGenTemplate resources. The GitOps ZTP pipeline that is linked to the ArgoCD policies application includes the following CRs that you can find in the out/argocd/example/policygentemplates folder when you extract the ztp-site-generate container: common-ranGen.yaml group-du-sno-ranGen.yaml example-sno-site.yaml ns.yaml kustomization.yaml Configuring the DU profile on the worker node is considered an upgrade. To initiate the upgrade flow, you must update the existing policies or create additional ones. Then, you must create a ClusterGroupUpgrade CR to reconcile the policies in the group of clusters. 19.13.2. (Optional) Ensuring PTP and SR-IOV daemon selector compatibility If the DU profile was deployed using the GitOps Zero Touch Provisioning (ZTP) plugin version 4.11 or earlier, the PTP and SR-IOV Operators might be configured to place the daemons only on nodes labelled as master . This configuration prevents the PTP and SR-IOV daemons from operating on the worker node. If the PTP and SR-IOV daemon node selectors are incorrectly configured on your system, you must change the daemons before proceeding with the worker DU profile configuration. Procedure Check the daemon node selector settings of the PTP Operator on one of the spoke clusters: USD oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq Example output for PTP Operator {"daemonNodeSelector":{"node-role.kubernetes.io/master":""}} 1 1 If the node selector is set to master , the spoke was deployed with the version of the GitOps ZTP plugin that requires changes. Check the daemon node selector settings of the SR-IOV Operator on one of the spoke clusters: USD oc get sriovoperatorconfig/default -n \ openshift-sriov-network-operator -ojsonpath='{.spec}' | jq Example output for SR-IOV Operator {"configDaemonNodeSelector":{"node-role.kubernetes.io/worker":""},"disableDrain":false,"enableInjector":true,"enableOperatorWebhook":true} 1 1 If the node selector is set to master , the spoke was deployed with the version of the GitOps ZTP plugin that requires changes. In the group policy, add the following complianceType and spec entries: spec: - fileName: PtpOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: "" - fileName: SriovOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: "" Important Changing the daemonNodeSelector field causes temporary PTP synchronization loss and SR-IOV connectivity loss. 
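Before committing the change, you can repeat the two checks above across all of your spoke clusters with a small loop. This is a sketch only; the kubeconfig path is a placeholder for wherever you store per-cluster kubeconfig files.
# Sketch: report the PTP and SR-IOV daemon node selectors for each spoke cluster
for kubeconfig in /path/to/spoke-kubeconfigs/*.kubeconfig; do
  echo "== ${kubeconfig} =="
  oc --kubeconfig "${kubeconfig}" get ptpoperatorconfig/default -n openshift-ptp \
    -o jsonpath='{.spec.daemonNodeSelector}{"\n"}'
  oc --kubeconfig "${kubeconfig}" get sriovoperatorconfig/default -n openshift-sriov-network-operator \
    -o jsonpath='{.spec.configDaemonNodeSelector}{"\n"}'
done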
Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. 19.13.3. PTP and SR-IOV node selector compatibility The PTP configuration resources and SR-IOV network node policies use node-role.kubernetes.io/master: "" as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. However, the node selector must be changed to select both node types, for example with the "node-role.kubernetes.io/worker" label. 19.13.4. Using PolicyGenTemplate CRs to apply worker node policies to worker nodes You can create policies for worker nodes. Procedure Create the following policy template: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-sno-workers" namespace: "example-sno" spec: bindingRules: sites: "example-sno" 1 mcp: "worker" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: "config-policy" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: "4-47" reserved: "0-3" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: "config-policy" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker 1 The policies are applied to all clusters with this label. 2 The MCP field must be set to worker . 3 This generic MachineConfig CR is used to configure workload partitioning on the worker node. 4 The cpu.isolated and cpu.reserved fields must be configured for each particular hardware platform. 5 The cmdline_crash CPU set must match the cpu.isolated set in the PerformanceProfile section. A generic MachineConfig CR is used to configure workload partitioning on the worker node. You can generate the content of crio and kubelet configuration files. Add the created policy template to the Git repository monitored by the ArgoCD policies application. Add the policy in the kustomization.yaml file. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. 
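The base64 strings in the generic MachineConfig above are simply the encoded contents of the CRI-O workload partitioning drop-in and the kubelet workload pinning file. If your hardware reserves a different CPU set, you can regenerate the payloads as sketched below and update the policy before the next push; the reserved set 0-3 is taken from the example, and the exact contents must match what your platform requires.
# Sketch: regenerate the workload partitioning payloads for a given reserved CPU set
RESERVED_CPUS="0-3"
cat <<EOF > 01-workload-partitioning
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "${RESERVED_CPUS}" }
EOF
cat <<EOF > openshift-workload-pinning
{
  "management": {
    "cpuset": "${RESERVED_CPUS}"
  }
}
EOF
# Paste the output of these commands into the corresponding source: fields of the MachineConfig
base64 -w0 01-workload-partitioning; echo
base64 -w0 openshift-workload-pinning; echo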
To remediate the new policies to your spoke cluster, create a TALM custom resource: USD cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF 19.13.5. Adding worker nodes to single-node OpenShift clusters with GitOps ZTP You can add one or more worker nodes to existing single-node OpenShift clusters to increase available CPU resources in the cluster. Prerequisites Install and configure RHACM 2.6 or later in an OpenShift Container Platform 4.11 or later bare-metal hub cluster Install Topology Aware Lifecycle Manager in the hub cluster Install Red Hat OpenShift GitOps in the hub cluster Use the GitOps ZTP ztp-site-generate container image version 4.12 or later Deploy a managed single-node OpenShift cluster with GitOps ZTP Configure the Central Infrastructure Management as described in the RHACM documentation Configure the DNS serving the cluster to resolve the internal API endpoint api-int.<cluster_name>.<base_domain> Procedure If you deployed your cluster by using the example-sno.yaml SiteConfig manifest, add your new worker node to the spec.clusters['example-sno'].nodes list: nodes: - hostName: "example-node2.example.com" role: "worker" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node2-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" bootMode: "UEFI" nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up macAddress: "AA:BB:CC:DD:EE:11" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Create a BMC authentication secret for the new host, as referenced by the bmcCredentialsName field in the spec.nodes section of your SiteConfig file: apiVersion: v1 data: password: "password" username: "username" kind: Secret metadata: name: "example-node2-bmh-secret" namespace: example-sno type: Opaque Commit the changes in Git, and then push to the Git repository that is being monitored by the GitOps ZTP ArgoCD application. When the ArgoCD cluster application synchronizes, two new manifests appear on the hub cluster generated by the GitOps ZTP plugin: BareMetalHost NMStateConfig Important The cpuset field should not be configured for the worker node. Workload partitioning for worker nodes is added through management policies after the node installation is complete. Verification You can monitor the installation process in several ways. Check if the preprovisioning images are created by running the following command: USD oc get ppimg -n example-sno Example output NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated Check the state of the bare-metal hosts: USD oc get bmh -n example-sno Example output NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1 1 The provisioning state indicates that node booting from the installation media is in progress.
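If you prefer to block until provisioning completes instead of polling, you can wait on the BareMetalHost state as sketched below. This assumes a reasonably recent oc client that supports jsonpath wait conditions; the node and namespace names are taken from the example.
# Sketch: wait for the new worker's BareMetalHost to reach the provisioned state
$ oc wait bmh/example-node2 -n example-sno \
    --for=jsonpath='{.status.provisioning.state}'=provisioned --timeout=90m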
Continuously monitor the installation process: Watch the agent install process by running the following command: USD oc get agent -n example-sno --watch Example output NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done When the worker node installation is finished, the worker node certificates are approved automatically. At this point, the worker appears in the ManagedClusterInfo status. Run the following command to see the status: USD oc get managedclusterinfo/example-sno -n example-sno -o \ jsonpath='{range .status.nodeList[*]}{.name}{"\t"}{.conditions}{"\t"}{.labels}{"\n"}{end}' Example output example-sno [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/master":"","node-role.kubernetes.io/worker":""} example-node2 [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/worker":""} 19.14. Pre-caching images for single-node OpenShift deployments In environments with limited bandwidth where you use the GitOps Zero Touch Provisioning (ZTP) solution to deploy a large number of clusters, you want to avoid downloading all the images that are required for bootstrapping and installing OpenShift Container Platform. The limited bandwidth at remote single-node OpenShift sites can cause long deployment times. The factory-precaching-cli tool allows you to pre-stage servers before shipping them to the remote site for ZTP provisioning. The factory-precaching-cli tool does the following: Downloads the RHCOS rootfs image that is required by the minimal ISO to boot. Creates a partition from the installation disk labelled as data . Formats the disk in xfs. Creates a GUID Partition Table (GPT) data partition at the end of the disk, where the size of the partition is configurable by the tool. Copies the container images required to install OpenShift Container Platform. Copies the container images required by ZTP to install OpenShift Container Platform. Optional: Copies Day-2 Operators to the partition. Important The factory-precaching-cli tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 19.14.1. Getting the factory-precaching-cli tool The factory-precaching-cli tool Go binary is publicly available in the Telco RAN tools container image . The factory-precaching-cli tool Go binary in the container image is executed on the server running an RHCOS live image using podman . If you are working in a disconnected environment or have a private registry, you need to copy the image there so you can download the image to the server. 
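One way to copy the tool image into a private registry is with skopeo. The following is a sketch only; the target registry hostname is a placeholder and the authfile path assumes you already have valid registry credentials there.
# Sketch: mirror the factory-precaching-cli tool image into a private registry
$ skopeo copy --authfile /root/.docker/config.json \
    docker://quay.io/openshift-kni/telco-ran-tools:latest \
    docker://registry.example.com:5000/openshift-kni/telco-ran-tools:latest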
Procedure Pull the factory-precaching-cli tool image by running the following command: # podman pull quay.io/openshift-kni/telco-ran-tools:latest Verification To check that the tool is available, query the current version of the factory-precaching-cli tool Go binary: # podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v Example output factory-precaching-cli version 20221018.120852+main.feecf17 19.14.2. Booting from a live operating system image You can use the factory-precaching-cli tool to boot servers where only one disk is available and an external disk drive cannot be attached to the server. Warning RHCOS requires the disk to not be in use when the disk is about to be written with an RHCOS image. Depending on the server hardware, you can mount the RHCOS live ISO on the blank server using one of the following methods: Using the Dell RACADM tool on a Dell server. Using the HPONCFG tool on an HP server. Using the Redfish BMC API. Note It is recommended to automate the mounting procedure. To automate the procedure, you need to pull the required images and host them on a local HTTP server. Prerequisites You powered up the host. You have network connectivity to the host. Procedure This example procedure uses the Redfish BMC API to mount the RHCOS live ISO. Mount the RHCOS live ISO: Check virtual media status: USD curl --globoff -H "Content-Type: application/json" -H \ "Accept: application/json" -k -X GET --user USD{username_password} \ https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool Mount the ISO file as virtual media: USD curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku USD{username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Image": "http://[USDHTTPd_IP]/RHCOS-live.iso"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia Set the boot order to boot from the virtual media once: USD curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku USD{username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Boot":{ "BootSourceOverrideEnabled": "Once", "BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self Reboot and ensure that the server is booting from virtual media. Additional resources For more information about the butane utility, see About Butane . For more information about creating a custom live RHCOS ISO, see Creating a custom live RHCOS ISO for remote server access . For more information about using the Dell RACADM tool, see Integrated Dell Remote Access Controller 9 RACADM CLI Guide . For more information about using the HP HPONCFG tool, see Using HPONCFG . For more information about using the Redfish BMC API, see Booting from an HTTP-hosted ISO image using the Redfish API . 19.14.3. Partitioning the disk To run the full pre-caching process, you have to boot from a live ISO and use the factory-precaching-cli tool from a container image to partition and pre-cache all the artifacts required. A live ISO or RHCOS live ISO is required because the disk must not be in use when the operating system (RHCOS) is written to the device during the provisioning. Single-disk servers can also be enabled with this procedure. Prerequisites You have a disk that is not partitioned. You have access to the quay.io/openshift-kni/telco-ran-tools:latest image.
You have enough storage to install OpenShift Container Platform and pre-cache the required images. Procedure Verify that the disk is cleared: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk Erase any file system, RAID or partition table signatures from the device: # wipefs -a /dev/nvme0n1 Example output /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa Important The tool fails if the disk is not empty because it uses partition number 1 of the device for pre-caching the artifacts. 19.14.3.1. Creating the partition Once the device is ready, you create a single partition and a GPT partition table. The partition is automatically labelled as data and created at the end of the device. Otherwise, the partition will be overridden by the coreos-installer . Important The coreos-installer requires the partition to be created at the end of the device and to be labelled as data . Both requirements are necessary to save the partition when writing the RHCOS image to the disk. Prerequisites The container must run as privileged because it formats host devices. You have to mount the /dev folder so that the process can be executed inside the container. Procedure In the following example, the size of the partition is 250 GiB, which allows pre-caching the DU profile, including Day 2 Operators. Run the container as privileged and partition the disk: # podman run -v /dev:/dev --privileged \ --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli partition \ 1 -d /dev/nvme0n1 \ 2 -s 250 3 1 Specifies the partitioning function of the factory-precaching-cli tool. 2 Specifies the block device to partition. 3 Defines the size of the partition in GB. Check the storage information: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part Verification You must verify that the following requirements are met: The device has a GPT partition table The partition uses the last sectors of the device. The partition is correctly labeled as data . Query the disk status to verify that the disk is partitioned as expected: # gdisk -l /dev/nvme0n1 Example output GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data 19.14.3.2. Mounting the partition After verifying that the disk is partitioned correctly, you can mount the device into /mnt .
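Before mounting, a compact way to re-check those requirements is sketched below; the device name is taken from the example, and sgdisk is provided by the same gdisk package used above.
# Sketch: quick re-check of the pre-caching partition before mounting it
$ lsblk -o NAME,SIZE,TYPE,PARTLABEL /dev/nvme0n1   # the PARTLABEL column must show "data"
$ sgdisk -p /dev/nvme0n1                           # the data partition must be the last entry in the GPT table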
Important It is recommended to mount the device into /mnt because that mounting point is used during GitOps ZTP preparation. Verify that the partition is formatted as xfs : # lsblk -f /dev/nvme0n1 Example output NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071 Mount the partition: # mount /dev/nvme0n1p1 /mnt/ Verification Check that the partition is mounted: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1 1 The mount point is /var/mnt because the /mnt folder in RHCOS is a link to /var/mnt . 19.14.4. Downloading the images The factory-precaching-cli tool allows you to download the following images to your partitioned server: OpenShift Container Platform images Operator images that are included in the distributed unit (DU) profile for 5G RAN sites Operator images from disconnected registries Note The list of available Operator images can vary in different OpenShift Container Platform releases. 19.14.4.1. Downloading with parallel workers The factory-precaching-cli tool uses parallel workers to download multiple images simultaneously. You can configure the number of workers with the --parallel or -p option. The default number is set to 80% of the available CPUs to the server. Note Your login shell may be restricted to a subset of CPUs, which reduces the CPUs available to the container. To remove this restriction, you can precede your commands with taskset 0xffffffff , for example: # taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help 19.14.4.2. Preparing to download the OpenShift Container Platform images To download OpenShift Container Platform container images, you need to know the multicluster engine version. When you use the --du-profile flag, you also need to specify the Red Hat Advanced Cluster Management (RHACM) version running in the hub cluster that is going to provision the single-node OpenShift. Prerequisites You have RHACM and the multicluster engine Operator installed. You partitioned the storage device. You have enough space for the images on the partitioned device. You connected the bare-metal server to the Internet. You have a valid pull secret. 
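You can quickly confirm the space prerequisite on the mounted partition before starting any downloads; for reference, the example partition created earlier in this section is sized at 250 GiB for the DU profile including Day 2 Operators. A minimal sketch:
# Sketch: confirm free space on the pre-caching partition before downloading images
$ df -h /mnt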
Procedure Check the RHACM version and the multicluster engine version by running the following commands in the hub cluster: USD oc get csv -A | grep -i advanced-cluster-management Example output open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded USD oc get csv -A | grep -i multicluster-engine Example output multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded To access the container registry, copy a valid pull secret to the server to be installed: Create the .docker folder: USD mkdir /root/.docker Copy the config.json file containing the valid pull secret to the previously created .docker/ folder: USD cp config.json /root/.docker/config.json 1 1 /root/.docker/config.json is the default path where podman checks for the login credentials for the registry. Note If you use a different registry to pull the required artifacts, you need to copy the proper pull secret. If the local registry uses TLS, you need to include the certificates from the registry as well.
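If you intend to pull the images with the same credentials that the hub cluster uses, one option is to extract the hub pull secret and copy it to the server. This is a sketch only; it assumes SSH access to the server as the root user, and the server address is a placeholder.
# Sketch: extract the hub cluster pull secret and copy it to the server to be installed
$ oc get secret pull-secret -n openshift-config \
    -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d > config.json
$ scp config.json root@<server_ip>:/root/.docker/config.json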
Summary: Release: 4.13.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83 Verification Check that all the images are compressed in the target folder of server: USD ls -l /mnt 1 1 It is recommended that you pre-cache the images in the /mnt folder. Example output -rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz 19.14.4.4. Downloading the Operator images You can also pre-cache Day-2 Operators used in the 5G Radio Access Network (RAN) Distributed Unit (DU) cluster configuration. The Day-2 Operators depend on the installed OpenShift Container Platform version. Important You need to include the RHACM hub and multicluster engine Operator versions by using the --acm-version and --mce-version flags so the factory-precaching-cli tool can pre-cache the appropriate containers images for RHACM and the multicluster engine Operator. 
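The versions passed to the --acm-version and --mce-version flags can be read straight from the hub cluster. The following sketch assumes the Operator display names and namespaces shown in the earlier oc get csv output.
# Sketch: capture the RHACM and multicluster engine versions for the download command
$ ACM_VERSION=$(oc get csv -n open-cluster-management \
    -o jsonpath='{.items[?(@.spec.displayName=="Advanced Cluster Management for Kubernetes")].spec.version}')
$ MCE_VERSION=$(oc get csv -n multicluster-engine \
    -o jsonpath='{.items[?(@.spec.displayName=="multicluster engine for Kubernetes")].spec.version}')
$ echo "ACM: ${ACM_VERSION} MCE: ${MCE_VERSION}"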
Procedure Pre-cache the Operator images: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ 1 -r 4.13.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s 7 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. Example output Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 ... Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 ... Summary: Release: 4.13.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83 19.14.4.5. Pre-caching custom images in disconnected environments The --generate-imageset argument stops the factory-precaching-cli tool after the ImageSetConfiguration custom resource (CR) is generated. This allows you to customize the ImageSetConfiguration CR before downloading any images. After you customized the CR, you can use the --skip-imageset argument to download the images that you specified in the ImageSetConfiguration CR. You can customize the ImageSetConfiguration CR in the following ways: Add Operators and additional images Remove Operators and additional images Change Operator and catalog sources to local or disconnected registries Procedure Pre-cache the images: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ 1 -r 4.13.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s \ 7 --generate-imageset 8 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. 8 The --generate-imageset argument generates the ImageSetConfiguration CR only, which allows you to customize the CR. 
Example output Generated /mnt/imageset.yaml Example ImageSetConfiguration CR apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.13 minVersion: 4.13.0 1 maxVersion: 4.13.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.13' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.13 packages: - name: sriov-fec 11 channels: - name: 'stable' 1 The platform versions match the versions passed to the tool. 2 3 The versions of RHACM and the multicluster engine Operator match the versions passed to the tool. 4 5 6 7 8 9 10 11 The CR contains all the specified DU Operators. Customize the catalog resource in the CR: apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.13 packages: - name: sriov-fec channels: - name: 'stable' When you download images by using a local or disconnected registry, you have to first add certificates for the registries that you want to pull the content from. To avoid any errors, copy the registry certificate into your server: # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/. Then, update the certificates trust store: # update-ca-trust Mount the host /etc/pki folder into the factory-cli image: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download \ 1 -r 4.13.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s \ 7 --skip-imageset 8 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. 8 The --skip-imageset argument allows you to download the images that you specified in your customized ImageSetConfiguration CR. Download the images without generating a new imageSetConfiguration CR: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.13.0 \ --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt \ --img quay.io/custom/repository \ --du-profile -s \ --skip-imageset Additional resources To access the online Red Hat registries, see OpenShift installation customization tools . 
For more information about using the multicluster engine, see About cluster lifecycle with the multicluster engine operator . 19.14.5. Pre-caching images in GitOps ZTP The SiteConfig manifest defines how an OpenShift cluster is to be installed and configured. In the GitOps Zero Touch Provisioning (ZTP) provisioning workflow, the factory-precaching-cli tool requires the following additional fields in the SiteConfig manifest: clusters.ignitionConfigOverride nodes.installerArgs nodes.ignitionConfigOverride Example SiteConfig with additional fields apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-5g-lab" namespace: "example-5g-lab" spec: baseDomain: "example.domain.redhat.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "img4.9.10-x86-64-appsub" 1 sshPublicKey: "ssh-rsa ..." clusters: - clusterName: "sno-worker-0" clusterImageSetNameRef: "eko4-img4.11.5-x86-64-appsub" 2 clusterLabels: group-du-sno: "" common-411: true sites : "example-5g-lab" vendor: "OpenShift" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: "OVNKubernetes" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ "ignition": { "version": "3.1.0" }, "systemd": { "units": [ { "name": "var-mnt.mount", "enabled": true, "contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-images.service\nBindsTo=precache-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-images.service" }, { "name": "precache-images.service", "enabled": true, "contents": "[Unit]\nDescription=Extracts the precached images in discovery stage\nAfter=var-mnt.mount\nBefore=agent.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ai.sh\n#TimeoutStopSec=30\n\n[Install]\nWantedBy=multi-user.target default.target\nWantedBy=agent.service" } ] }, "storage": { "files": [ { "overwrite": true, "path": "/usr/local/bin/extract-ai.sh", "mode": 755, "user": { "name": "root" }, "contents": { "source": 
"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200" } }, { "overwrite": true, "path": "/usr/local/bin/agent-fix-bz1964591", "mode": 755, "user": { "name": "root" }, "contents": { "source": "data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true" } } ] } }' nodes: - hostName: "snonode.sno-worker-0.example.domain.redhat.com" role: "master" bmcAddress: "idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "worker0-bmh-secret" bootMACAddress: "e4:43:4b:bd:90:46" bootMode: "UEFI" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '["--save-partlabel", "data"]' ignitionConfigOverride: | { "ignition": { "version": "3.1.0" }, "systemd": { 
"units": [ { "name": "var-mnt.mount", "enabled": true, "contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-ocp-images.service\nBindsTo=precache-ocp-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-ocp-images.service" }, { "name": "precache-ocp-images.service", "enabled": true, "contents": "[Unit]\nDescription=Extracts the precached OCP images into containers storage\nAfter=var-mnt.mount\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ocp.sh\nTimeoutStopSec=60\n\n[Install]\nWantedBy=multi-user.target" } ] }, "storage": { "files": [ { "overwrite": true, "path": "/usr/local/bin/extract-ocp.sh", "mode": 755, "user": { "name": "root" }, "contents": { "source": "data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: "AA:BB:CC:11:22:33" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: "ens1f0" macAddress: "AA:BB:CC:11:22:33" 1 Specifies the cluster image set used for deployment, unless you specify a different image set in the spec.clusters.clusterImageSetNameRef field. 2 Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the spec.clusterImageSetNameRef at the site level. 19.14.5.1. Understanding the clusters.ignitionConfigOverride field The clusters.ignitionConfigOverride field adds a configuration in Ignition format during the GitOps ZTP discovery stage. The configuration includes systemd services in the ISO mounted in virtual media. This way, the scripts are part of the discovery RHCOS live ISO and they can be used to load the Assisted Installer (AI) images. 
systemd services The systemd services are var-mnt.mount and precache-images.services . The precache-images.service depends on the disk partition to be mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ai.sh . extract-ai.sh The extract-ai.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally. agent-fix-bz1964591 The agent-fix-bz1964591 script is a workaround for an AI issue. To prevent AI from removing the images, which can force the agent.service to pull the images again from the registry, the agent-fix-bz1964591 script checks if the requested container images exist. 19.14.5.2. Understanding the nodes.installerArgs field The nodes.installerArgs field allows you to configure how the coreos-installer utility writes the RHCOS live ISO to disk. You need to indicate to save the disk partition labeled as data because the artifacts saved in the data partition are needed during the OpenShift Container Platform installation stage. The extra parameters are passed directly to the coreos-installer utility that writes the live RHCOS to disk. On the reboot, the operating system starts from the disk. You can pass several options to the coreos-installer utility: OPTIONS: ... -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL ... --save-partlabel <lx>... Save partitions with this label glob --save-partindex <id>... Save partitions with this number or range ... --insecure-ignition Allow Ignition URL without HTTPS or hash 19.14.5.3. Understanding the nodes.ignitionConfigOverride field Similarly to clusters.ignitionConfigOverride , the nodes.ignitionConfigOverride field allows the addition of configurations in Ignition format to the coreos-installer utility, but at the OpenShift Container Platform installation stage. When the RHCOS is written to disk, the extra configuration included in the GitOps ZTP discovery ISO is no longer available. During the discovery stage, the extra configuration is stored in the memory of the live OS. Note At this stage, the number of container images extracted and loaded is bigger than in the discovery stage. Depending on the OpenShift Container Platform release and whether you install the Day-2 Operators, the installation time can vary. At the installation stage, the var-mnt.mount and precache-ocp.services systemd services are used. precache-ocp.service The precache-ocp.service depends on the disk partition to be mounted in /var/mnt by the var-mnt.mount unit. The precache-ocp.service service calls a script called extract-ocp.sh . Important To extract all the images before the OpenShift Container Platform installation, you must execute precache-ocp.service before executing the machine-config-daemon-pull.service and nodeip-configuration.service services. extract-ocp.sh The extract-ocp.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally. When you upload the SiteConfig and the optional PolicyGenTemplates custom resources (CRs) to the Git repo, which Argo CD is monitoring, you can start the GitOps ZTP workflow by syncing the CRs with the hub cluster. 19.14.6. Troubleshooting 19.14.6.1. 
Rendered catalog is invalid When you download images by using a local or disconnected registry, you might see the The rendered catalog is invalid error. This means that you are missing certificates of the new registry you want to pull content from. Note The factory-precaching-cli tool image is built on a UBI RHEL image. Certificate paths and locations are the same on RHCOS. Example error Generating list of pre-cached artifacts... error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror --ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying host error=failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run "oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME" for more information. error: error rendering new refs: render reference "eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11": error resolving name : failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority Procedure Copy the registry certificate into your server: # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/. Update the certificates truststore: # update-ca-trust Mount the host /etc/pki folder into the factory-cli image: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download -r 4.13.0 --acm-version 2.5.4 \ --mce-version 2.0.4 -f /mnt \--img quay.io/custom/repository --du-profile -s --skip-imageset
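After updating the trust store, you can confirm that the registry certificate is now accepted before re-running the download. A sketch, using the example registry hostname from the error above:
# Sketch: verify that TLS to the mirror registry now succeeds; an HTTP status code such as
# 401 or 200 means the certificate is trusted, while a TLS error means the CA is still missing
$ curl -sS -o /dev/null -w '%{http_code}\n' https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/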
[ "export ISO_IMAGE_NAME=<iso_image_name> 1", "export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1", "export OCP_VERSION=<ocp_version> 1", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}", "wget http://USD(hostname)/USD{ISO_IMAGE_NAME}", "Saving to: rhcos-4.13.1-x86_64-live.x86_64.iso rhcos-4.13.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s", "oc edit AgentServiceConfig", "- cpuArchitecture: x86_64 openshiftVersion: \"4.13\" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso", "apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: | 2 -----BEGIN CERTIFICATE----- <certificate_contents> -----END CERTIFICATE----- registries.conf: | 3 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/example-repository\" 4 mirror-by-digest-only = true [[registry.mirror]] location = \"mirror1.registry.corp.com:5000/example-repository\" 5", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> url: <iso_url> 3", "oc edit AgentServiceConfig agent", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com", "oc debug node/<node_name>", "sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry>", "Login Succeeded!", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json", "oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json", "oc apply -k out/argocd/deployment", "podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 extract /home/ztp --tar | tar x -C ./out", "example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml", "grep -r \"ztp-deploy-wave\" out/source-crs", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: 
\"1\" name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" spec: clusterRef: name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: \"{{ .Site.SshPublicKey }}\" proxy: \"{{ .Cluster.ProxySettings }}\" pullSecretRef: name: \"{{ .Site.PullSecretRef.Name }}\" ignitionConfigOverride: \"{{ .Cluster.IgnitionConfigOverride }}\" nmStateConfigLabelSelector: matchLabels: nmstate-label: \"{{ .Cluster.ClusterName }}\" additionalNTPSources: \"{{ .Cluster.AdditionalNTPSources }}\"", "~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml", "clusters: crTemplates: InfraEnv: \"InfraEnv-example.yaml\"", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "export CLUSTERNS=example-sno", "oc create namespace USDCLUSTERNS", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" cpuPartitioningMode: AllNodes pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"marketplace\", \"NodeTuning\" ] } } clusterLabels: common: true group-du-sno: \"\" sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" rootDeviceHints: wwn: \"0x11111000000asd123\" # diskPartition: # - device: /dev/disk/by-id/wwn-0x11111000000asd123 # match rootDeviceHints # partitions: # - mount_point: /var/imageregistry # size: 102500 # start: 344844 ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x11111000000asd123\", \"wipeTable\": false, \"partitions\": [ { \"sizeMiB\": 16, \"label\": \"httpevent1\", \"startMiB\": 350000 }, { \"sizeMiB\": 16, \"label\": \"httpevent2\", \"startMiB\": 350016 } ] } ], \"filesystem\": [ { \"device\": \"/dev/disk/by-partlabel/httpevent1\", \"format\": \"xfs\", \"wipeFilesystem\": true }, { \"device\": \"/dev/disk/by-partlabel/httpevent2\", \"format\": \"xfs\", \"wipeFilesystem\": true } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "export CLUSTER=<clusterName>", "oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o 
jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq", "curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'", "oc get AgentClusterInstall -n <cluster_name>", "oc get managedcluster", "oc describe -n openshift-gitops application clusters", "Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/siteconfigs/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not create extra-manifest ranSite1.extra-manifest3 stat extra-manifest3: no such file or directory 2021/11/26 17:21:40 Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-913473579: stat extra-manifest3: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-913473579; exit status 1: exit status 1 Type: ComparisonError", "Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"disableVirtualMediaTLS\": true}}'", "oc delete policy -n <namespace> <policy_name>", "oc delete -k out/argocd/deployment", "--- apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"common\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" 1 sourceFiles: 2 - fileName: SriovSubscription.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: SriovOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: PtpOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogNS.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogSubscription.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: StorageNS.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: StorageSubscription.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ReduceMonitoringFootprint.yaml policyName: \"config-policy\" - fileName: OperatorHub.yaml 3 policyName: \"config-policy\" - fileName: DefaultCatsrc.yaml 4 policyName: \"config-policy\" 5 metadata: name: redhat-operators spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: \"config-policy\" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno\" namespace: \"ztp-group\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" 
spec: profile: - name: \"slave\" interface: \"ens5f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -n 24\"", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq", "{ \"lastTransitionTime\": \"2022-11-09T07:28:09Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" }", "oc get policies -n USDCLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m", "export NS=<namespace>", "oc get policy -n USDNS", "oc describe -n openshift-gitops application policies", "Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError", "Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error", "oc get policy -n USDCLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d Ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d", "oc get placementrule -n USDNS", "oc get placementrule -n USDNS <placementRuleName> -o yaml", "oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq", "oc get policy -n USDCLUSTER", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n 
ztp-install USDCLUSTER", "oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Ready\")]}'", "oc delete clustergroupupgrades -n ztp-install USDCLUSTER", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true", "- fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction:", "oc create -f cgu-remove.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get <kind> <changed_cr_name>", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h", "oc get <kind> <changed_cr_name>", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 extract /home/ztp --tar | tar x -C ./out", "out └── argocd └── example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml", "mkdir -p ./site-install", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" cpuPartitioningMode: AllNodes pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"marketplace\", \"NodeTuning\" ] } } clusterLabels: common: true group-du-sno: \"\" sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" rootDeviceHints: wwn: \"0x11111000000asd123\" # diskPartition: # - device: /dev/disk/by-id/wwn-0x11111000000asd123 # match rootDeviceHints # partitions: # - mount_point: /var/imageregistry # size: 102500 # start: 344844 ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x11111000000asd123\", \"wipeTable\": false, \"partitions\": [ { \"sizeMiB\": 16, \"label\": \"httpevent1\", \"startMiB\": 350000 }, { 
\"sizeMiB\": 16, \"label\": \"httpevent2\", \"startMiB\": 350016 } ] } ], \"filesystem\": [ { \"device\": \"/dev/disk/by-partlabel/httpevent1\", \"format\": \"xfs\", \"wipeFilesystem\": true }, { \"device\": \"/dev/disk/by-partlabel/httpevent2\", \"format\": \"xfs\", \"wipeFilesystem\": true } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 generator install site-1-sno.yaml /output", "site-install └── site-1-sno ├── site-1_agentclusterinstall_example-sno.yaml ├── site-1-sno_baremetalhost_example-node1.example.com.yaml ├── site-1-sno_clusterdeployment_example-sno.yaml ├── site-1-sno_configmap_example-sno.yaml ├── site-1-sno_infraenv_example-sno.yaml ├── site-1-sno_klusterletaddonconfig_example-sno.yaml ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml ├── site-1-sno_managedcluster_example-sno.yaml ├── site-1-sno_namespace_example-sno.yaml └── site-1-sno_nmstateconfig_example-node1.example.com.yaml", "mkdir -p ./site-machineconfig", "podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 generator install -E site-1-sno.yaml /output", "site-machineconfig └── site-1-sno ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml", "mkdir -p ./ref", "podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 generator config -N . 
/output", "ref └── customResource ├── common ├── example-multinode-site ├── example-sno ├── group-du-3node ├── group-du-3node-validator │ └── Multiple-validatorCRs ├── group-du-sno ├── group-du-sno-validator ├── group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 clusterRef: name: <cluster_name> namespace: <cluster_name> pullSecretRef: name: pull-secret", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.13.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64 2", "oc apply -f clusterImageSet-4.13.yaml", "apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2", "oc apply -f cluster-namespace.yaml", "oc apply -R ./site-install/site-sno-1", "oc get managedcluster", "oc get agent -n <cluster_name>", "oc describe agent -n <cluster_name>", "oc get agentclusterinstall -n <cluster_name>", "oc describe agentclusterinstall -n <cluster_name>", "oc get managedclusteraddon -n <cluster_name>", "oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig", "oc get managedcluster", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h", "oc get clusterdeployment -n <cluster_name>", "NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h", "oc describe agentclusterinstall -n <cluster_name> <cluster_name>", "oc delete managedcluster <cluster_name>", "oc delete namespace <cluster_name>", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"<site_name>\" namespace: \"<site_name>\" spec: baseDomain: \"example.com\" cpuPartitioningMode: AllNodes 1", "oc debug node/example-sno-1", "sh-4.4# pgrep ovn | while read i; do taskset -cp USDi; done", "pid 8481's current affinity list: 0-1,52-53 pid 8726's current affinity list: 0-1,52-53 pid 9088's current affinity list: 0-1,52-53 pid 9945's current affinity list: 0-1,52-53 pid 10387's current affinity list: 0-1,52-53 pid 12123's current affinity list: 0-1,52-53 pid 13313's current affinity list: 0-1,52-53", "sh-4.4# pgrep systemd | while read i; do taskset -cp USDi; done", "pid 1's current affinity list: 0-1,52-53 pid 938's current affinity list: 0-1,52-53 pid 962's current affinity list: 0-1,52-53 pid 1197's current affinity list: 0-1,52-53", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 
30-kubelet-interval-tuning.conf name: kubelet.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 04-accelerated-container-startup-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,#!/bin/bash
#
# Temporarily reset the core system processes' CPU affinity to be unrestricted to accelerate startup and shutdown
#
# The defaults below can be overridden via environment variables
#

# The default set of critical processes whose affinity should be temporarily unbound:
CRITICAL_PROCESSES=${CRITICAL_PROCESSES:-"crio kubelet NetworkManager conmon dbus"}

# Default wait time is 600s = 10m:
MAXIMUM_WAIT_TIME=${MAXIMUM_WAIT_TIME:-600}

# Default steady-state threshold = 2%
# Allowed values:
#  4  - absolute pod count (+/-)
#  4% - percent change (+/-)
#  -1 - disable the steady-state check
STEADY_STATE_THRESHOLD=${STEADY_STATE_THRESHOLD:-2%}

# Default steady-state window = 60s
# If the running pod count stays within the given threshold for this time
# period, return CPU utilization to normal before the maximum wait time
# expires
STEADY_STATE_WINDOW=${STEADY_STATE_WINDOW:-60}

# Default steady-state allows any pod count to be "steady state"
# Increasing this will skip any steady-state checks until the count rises above
# this number to avoid false positives if there are some periods where the
# count doesn't increase but we know we can't be at steady-state yet.
STEADY_STATE_MINIMUM=${STEADY_STATE_MINIMUM:-0}

#######################################################

KUBELET_CPU_STATE=/var/lib/kubelet/cpu_manager_state
FULL_CPU_STATE=/sys/fs/cgroup/cpuset/cpuset.cpus
KUBELET_CONF=/etc/kubernetes/kubelet.conf
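
# Determine the widest cpuset for recovery: kubelet's defaultCpuSet merged with
# reservedSystemCPUs when available, falling back to the full root cpuset cgroup.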
unrestrictedCpuset() {
  local cpus
  if [[ -e $KUBELET_CPU_STATE ]]; then
    cpus=$(jq -r '.defaultCpuSet' <$KUBELET_CPU_STATE)
    if [[ -n "${cpus}" && -e ${KUBELET_CONF} ]]; then
      reserved_cpus=$(jq -r '.reservedSystemCPUs' </etc/kubernetes/kubelet.conf)
      if [[ -n "${reserved_cpus}" ]]; then
        # Use taskset to merge the two cpusets
        cpus=$(taskset -c "${reserved_cpus},${cpus}" grep -i Cpus_allowed_list /proc/self/status | awk '{print $2}')
      fi
    fi
  fi
  if [[ -z $cpus ]]; then
    # fall back to using all cpus if the kubelet state is not configured yet
    [[ -e $FULL_CPU_STATE ]] || return 1
    cpus=$(<$FULL_CPU_STATE)
  fi
  echo $cpus
}

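# Return the normally restricted cpuset from the systemd.cpu_affinity= kernel
# argument; fails if no such argument is present on the kernel command line.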
restrictedCpuset() {
  for arg in $(</proc/cmdline); do
    if [[ $arg =~ ^systemd.cpu_affinity= ]]; then
      echo ${arg#*=}
      return 0
    fi
  done
  return 1
}

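# Re-pin every thread of each process named in CRITICAL_PROCESSES to the given
# cpuset and log how many PIDs were re-affined successfully.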
resetAffinity() {
  local cpuset="$1"
  local failcount=0
  local successcount=0
  logger "Recovery: Setting CPU affinity for critical processes \"$CRITICAL_PROCESSES\" to $cpuset"
  for proc in $CRITICAL_PROCESSES; do
    local pids="$(pgrep $proc)"
    for pid in $pids; do
      local tasksetOutput
      tasksetOutput="$(taskset -apc "$cpuset" $pid 2>&1)"
      if [[ $? -ne 0 ]]; then
        echo "ERROR: $tasksetOutput"
        ((failcount++))
      else
        ((successcount++))
      fi
    done
  done

  logger "Recovery: Re-affined $successcount pids successfully"
  if [[ $failcount -gt 0 ]]; then
    logger "Recovery: Failed to re-affine $failcount processes"
    return 1
  fi
}

setUnrestricted() {
  logger "Recovery: Setting critical system processes to have unrestricted CPU access"
  resetAffinity "$(unrestrictedCpuset)"
}

setRestricted() {
  logger "Recovery: Resetting critical system processes back to normally restricted access"
  resetAffinity "$(restrictedCpuset)"
}

currentAffinity() {
  local pid="$1"
  taskset -pc $pid | awk -F': ' '{print $2}'
}

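# Report whether 'current' is within 'threshold' of 'last'; the threshold is
# either an absolute pod count or, with a trailing '%', a percentage change.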
within() {
  local last=$1 current=$2 threshold=$3
  local delta=0 pchange
  delta=$(( current - last ))
  if [[ $current -eq $last ]]; then
    pchange=0
  elif [[ $last -eq 0 ]]; then
    pchange=1000000
  else
    pchange=$(( ( $delta * 100) / last ))
  fi
  echo -n "last:$last current:$current delta:$delta pchange:${pchange}%: "
  local absolute limit
  case $threshold in
    *%)
      absolute=${pchange##-} # absolute value
      limit=${threshold%%%}
      ;;
    *)
      absolute=${delta##-} # absolute value
      limit=$threshold
      ;;
  esac
  if [[ $absolute -le $limit ]]; then
    echo "within (+/-)$threshold"
    return 0
  else
    echo "outside (+/-)$threshold"
    return 1
  fi
}

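# Steady-state check: only compare counts once 'last' has reached
# STEADY_STATE_MINIMUM, then require the change to stay within the threshold.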
steadystate() {
  local last=$1 current=$2
  if [[ $last -lt $STEADY_STATE_MINIMUM ]]; then
    echo "last:$last current:$current Waiting to reach $STEADY_STATE_MINIMUM before checking for steady-state"
    return 1
  fi
  within $last $current $STEADY_STATE_THRESHOLD
}

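# Keep critical processes on the unrestricted cpuset, re-applying it whenever
# systemd's affinity or the unrestricted set changes, until the container count
# is steady for STEADY_STATE_WINDOW seconds or MAXIMUM_WAIT_TIME elapses.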
waitForReady() {
  logger "Recovery: Waiting ${MAXIMUM_WAIT_TIME}s for the initialization to complete"
  local lastSystemdCpuset="$(currentAffinity 1)"
  local lastDesiredCpuset="$(unrestrictedCpuset)"
  local t=0 s=10
  local lastCcount=0 ccount=0 steadyStateTime=0
  while [[ $t -lt $MAXIMUM_WAIT_TIME ]]; do
    sleep $s
    ((t += s))
    # Re-check the current affinity of systemd, in case some other process has changed it
    local systemdCpuset="$(currentAffinity 1)"
    # Re-check the unrestricted Cpuset, as the allowed set of unreserved cores may change as pods are assigned to cores
    local desiredCpuset="$(unrestrictedCpuset)"
    if [[ $systemdCpuset != $lastSystemdCpuset || $lastDesiredCpuset != $desiredCpuset ]]; then
      resetAffinity "$desiredCpuset"
      lastSystemdCpuset="$(currentAffinity 1)"
      lastDesiredCpuset="$desiredCpuset"
    fi

    # Detect steady-state pod count
    ccount=$(crictl ps | wc -l)
    if steadystate $lastCcount $ccount; then
      ((steadyStateTime += s))
      echo "Steady-state for ${steadyStateTime}s/${STEADY_STATE_WINDOW}s"
      if [[ $steadyStateTime -ge $STEADY_STATE_WINDOW ]]; then
        logger "Recovery: Steady-state (+/- $STEADY_STATE_THRESHOLD) for ${STEADY_STATE_WINDOW}s: Done"
        return 0
      fi
    else
      if [[ $steadyStateTime -gt 0 ]]; then
        echo "Resetting steady-state timer"
        steadyStateTime=0
      fi
    fi
    lastCcount=$ccount
  done
  logger "Recovery: Recovery Complete Timeout"
}

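# Run recovery only when both an unrestricted and a restricted cpuset exist;
# the EXIT trap restores the restricted affinity when the script finishes.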
main() {
  if ! unrestrictedCpuset >&/dev/null; then
    logger "Recovery: No unrestricted Cpuset could be detected"
    return 1
  fi

  if ! restrictedCpuset >&/dev/null; then
    logger "Recovery: No restricted Cpuset has been configured.  We are already running unrestricted."
    return 0
  fi

  # Ensure we reset the CPU affinity when we exit this script for any reason
  # This way either after the timer expires or after the process is interrupted
  # via ^C or SIGTERM, we return things back to the way they should be.
  trap setRestricted EXIT

  logger "Recovery: Recovery Mode Starting"
  setUnrestricted
  waitForReady
}

if [[ "${BASH_SOURCE[0]}" = "${0}" ]]; then
  main "${@}"
  exit $?
fi
 mode: 493 path: /usr/local/bin/accelerated-container-startup.sh systemd: units: - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container startup [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: accelerated-container-startup.service - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container shutdown DefaultDependencies=no [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=-1 # Steady-state window = 60s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=60 [Install] WantedBy=shutdown.target reboot.target halt.target enabled: true name: accelerated-container-shutdown.service", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"stable\" name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: 
openshift-logging spec: managementState: \"Managed\" curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - type: \"kafka\" name: kafka-open url: tcp://10.46.55.190:9092/test inputs: - name: infra-logs infrastructure: {} pipelines: - name: audit-logs inputRefs: - audit outputRefs: - kafka-open - name: infrastructure-logs inputRefs: - infrastructure outputRefs: - kafka-open", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"module_blacklist=irdma\" cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G node: 0 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" nodeSelector: node-role.kubernetes.io/master: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network.service [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: ens5f0 ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 
unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator spec: profile: - name: performance-patch data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"master\" priority: 19 profile: performance-patch", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: \"node-role.kubernetes.io/master\": \"\" enableInjector: true enableOperatorWebhook: true", "containers: - name: my-sriov-workload-container resources: limits: openshift.io/<resource_name>: \"1\" requests: openshift.io/<resource_name>: \"1\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator spec: resourceName: \"du_mh\" networkNamespace: openshift-sriov-network-operator vlan: \"150\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: # Attributes for Mellanox/Intel based NICs deviceType: netdevice/vfio-pci isRdma: true/false nicSelector: # The exact physical function name must match the hardware used pfNames: [ens7f0] nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 priority: 10 resourceName: du_mh", "installConfigOverrides: \"{\\\"capabilities\\\":{\\\"baselineCapabilitySet\\\": \\\"None\\\" }}\"", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: ran.openshift.io/ztp-deploy-wave: \"1\" data: config.yaml: | alertmanagerMain: enabled: false prometheusK8s: retention: 24h", "apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager data: pprof-config.yaml: | disabled: True", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: odf-lvmcluster namespace: openshift-storage 
spec: storage: deviceClasses: - name: vg1 deviceSelector: paths: - /usr/disk/by-path/pci-0000:11:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 overprovisionRatio: 10 sizePercent: 90", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: disableNetworkDiagnostics: true", "spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"module_blacklist=irdma\"", "spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name, for example: # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from the [sysctl] section data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* [service] service.stalld=start,enable service.chronyd=stop,disable", "OCP_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}')", "DTK_IMAGE=USD(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:USDOCP_VERSION-x86_64)", "podman run --rm USDDTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##'", "4.18.0-305.49.1.rt7.121.el8_4.x86_64", "oc debug node/<node_name>", "sh-4.4# uname -r", "4.18.0-305.49.1.rt7.121.el8_4.x86_64", "oc get operatorhub cluster -o yaml", "spec: disableAllDefaultSources: true", "oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.target\\.workload\\.openshift\\.io/management}{\"\\n\"}{end}'", "certified-operators -- {\"effect\": \"PreferredDuringScheduling\"} community-operators -- {\"effect\": \"PreferredDuringScheduling\"} ran-operators 1 redhat-marketplace -- {\"effect\": \"PreferredDuringScheduling\"} redhat-operators -- {\"effect\": \"PreferredDuringScheduling\"}", "oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.workload\\.openshift\\.io/allowed}{\"\\n\"}{end}'", "default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- management", "oc get -n openshift-logging ClusterLogForwarder instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: \"2022-07-19T21:51:41Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"1030342\" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open", "oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: \"2022-07-07T18:22:56Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"235796\" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed", "oc get consoles.operator.openshift.io cluster -o jsonpath=\"{ .spec.managementState }\"", "Removed", "oc 
debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# systemctl status chronyd", "● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5)", "PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name)", "oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'", "sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2", "oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP'", "sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00", "oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container", "phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533", "oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath=\"{.spec.disableDrain}{'\\n'}\"", "true", "oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath=\"{.items[*].status.syncStatus}{'\\n'}\"", "Succeeded", "oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml", "apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState status: interfaces: - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: \"8086\" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: \"8086\" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: \"8086\" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: \"8086\" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: \"8086\" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: \"8086\" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: \"8086\" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: \"8086\" vfID: 7", "oc get PerformanceProfile openshift-node-performance-profile -o yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: \"2022-07-19T21:51:31Z\" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: \"33558\" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - 
count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Available - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Upgradeable - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Progressing - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile", "oc get performanceprofile openshift-node-performance-profile -o jsonpath=\"{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\\n'}{end}\"", "Available -- True Upgradeable -- True Progressing -- False Degraded -- False", "oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: \"2022-07-18T10:33:52Z\" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: \"34024\" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch", "oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}'", "true", "oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION", "Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\"", "oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath=\"{ .data.config\\.yaml }\"", "grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h", "oc get route -n openshift-monitoring alertmanager-main", "oc get route -n openshift-monitoring grafana", "oc get performanceprofile -o jsonpath=\"{ .items[0].spec.cpu.reserved }\"", "0-3", "siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml └── extra-manifest └── 01-example-machine-config.yaml", "clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" extraManifestPath: extra-manifest", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site1-sno-du\" namespace: \"site1-sno-du\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.13\" sshPublicKey: \"<ssh_public_key>\" clusters: - clusterName: \"site1-sno-du\" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml", "- clusterName: \"site1-sno-du\" extraManifests: filter: inclusionDefault: exclude", "clusters: - clusterName: \"site1-sno-du\" extraManifestPath: \"<custom_manifest_folder>\" 1 
extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml", "siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13.1 extract /home/ztp --tar | tar x -C ./out", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true", "spec: bindingRules: group-du-standard: \"\" mcp: \"worker\"", "example └── policygentemplates ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs 1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-dev\" namespace: \"ztp-clusters\" spec: bindingRules: dev: \"true\" mcp: \"master\" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: \"group-dev-cluster-log-ns\" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: \"group-dev-cluster-log-operator-group\" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: \"group-dev-cluster-log-sub\" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: \"group-dev-lso-ns\" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: \"group-dev-lso-operator-group\" - fileName: StorageSubscription.yaml remediationAction: inform policyName: \"group-dev-lso-sub\" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: \"group-dev-pao-ns\" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: \"group-dev-pao-cat-source\" spec: image: <image_URL_here> - fileName: 
PaoSubscription.yaml remediationAction: inform policyName: \"group-dev-pao-sub\" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: \"group-dev-elasticsearch-ns\" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: \"group-dev-elasticsearch-operator-group\" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: \"group-dev-apiserver-config\" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: \"group-dev-disable-nic-lldp\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgu-test.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies", "spec: evaluationInterval: compliant: 30m noncompliant: 20s", "spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: \"sriov-sub-policy\" evaluationInterval: compliant: never noncompliant: 10s", "oc get pods -n open-cluster-management-agent-addon", "NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d", "oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb", "2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-config-policy-config\"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-common-compute-1-catalog-policy-config\"}", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno-validator\" 1 namespace: \"ztp-group\" 2 spec: bindingRules: group-du-sno: \"\" 3 bindingExcludedRules: ztp-done: \"\" 4 mcp: \"master\" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: \"du-policy\" 7", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: [...] spec: [...] workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true [...] additionalKernelArgs: - [...] - \"cpufreq.default_governor=schedutil\" 1", "oc get nodes", "oc debug node/<node-name>", "chroot /host", "cat /proc/cmdline", "- fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" spec: profile: - name: performance-patch data: | [...] 
[sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1", "- fileName: StorageLVMOSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMOSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMOSubscription.yaml spec: name: lvms-operator channel: stable-4.13 policyName: subscription-policies", "- fileName: StorageLVMCluster.yaml policyName: \"lvmo-config\" 1 spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043", "- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs", "#AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: \"amqp://amq-router.amq-router.svc.cluster.local\"", "- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs", "- fileName: AmqInstance.yaml policyName: \"config-policy\"", "Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: HardwareEvent.yaml 1 policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043\" logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\" Bare Metal Event Rely operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: AmqInstance.yaml policyName: \"config-policy\"", "- fileName: HardwareEvent.yaml policyName: \"config-policy\" spec: nodeSelector: {} transportHost: 
\"amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local\" 1 logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota", "butane storage.bu", "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}", "[...] spec: clusters: - nodes: - ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } [...]", "oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]", "\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by 
Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"", "oc debug node/my-sno-node", "chroot /host", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers", "df -h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000", "sourceFiles: # storage class - fileName: StorageClass.yaml policyName: \"sc-for-image-registry\" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: \"100\" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: \"pvc-for-image-registry\" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: \"pv-for-image-registry\" metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" - fileName: ImageRegistryConfig.yaml policyName: \"config-for-image-registry\" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: storage: pvc: claim: \"image-registry-pvc\"", "cluster=<managed_cluster_name>", "oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster", "oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster", "oc get image.config.openshift.io cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2021-10-08T19:02:39Z\" generation: 5 name: cluster resourceVersion: \"688678648\" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice", "oc get pv image-registry-sc", "oc get pods -n openshift-image-registry | grep registry*", "cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d", "oc debug node/sno-1.example.com", "sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom", "argocd.argoproj.io/sync-options: Replace=true", "{{hub 
fromConfigMap \"default\" \"test-config\" \"common-key\" hub}}", "{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) hub}}", "{{hub fromConfigMap \"default\" \"test-config\" (printf \"%s-name\" .ManagedClusterName) | toBool hub}}", "{{hub (printf \"%s-name\" .ManagedClusterName) | fromConfigMap \"default\" \"test-config\" | toInt hub}}", "apiVersion: v1 kind: ConfigMap metadata: name: sriovdata namespace: ztp-site annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: example-sno-du_fh-numVfs: \"8\" example-sno-du_fh-pf: ens1f0 example-sno-du_fh-priority: \"10\" example-sno-du_fh-vlan: \"140\" example-sno-du_mh-numVfs: \"8\" example-sno-du_mh-pf: ens3f0 example-sno-du_mh-priority: \"10\" example-sno-du_mh-vlan: \"150\"", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"site\" namespace: \"ztp-site\" spec: remediationAction: inform bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-fh\" spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-vlan\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-fh\" spec: deviceType: netdevice isRdma: true nicSelector: pfNames: - '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-pf\" .ManagedClusterName) | autoindent hub}}' numVfs: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-numVfs\" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_fh-priority\" .ManagedClusterName) | toInt hub}}' resourceName: du_fh - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-vlan\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-mh\" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-pf\" .ManagedClusterName) hub}}' numVfs: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-numVfs\" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap \"ztp-site\" \"sriovdata\" (printf \"%s-du_mh-priority\" .ManagedClusterName) | toInt hub}}' resourceName: du_mh", "apiVersion: v1 kind: ConfigMap metadata: name: site-data namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: site-1-vlan: \"101\" site-2-vlan: \"234\"", "- fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data\" (printf \"%s-vlan\" .ManagedClusterName) | toInt hub}}'", "oc delete policy <policy_name> -n <policy_namespace>", "oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update=\"1\"", "oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f 
cgr-example.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: \"stable\" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f talm-subscription.yaml", "oc get csv -n openshift-operators", "NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.13.x Topology Aware Lifecycle Manager 4.13.x Succeeded", "oc get deploy -n openshift-operators", "NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s", "spec remediationStrategy: maxConcurrency: 1 timeout: 240", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: 1 addClusterLabels: upgrade-done: \"\" deleteClusterLabels: upgrade-running: \"\" deleteObjects: true beforeEnable: 2 addClusterLabels: upgrade-running: \"\" backup: false clusters: 3 - spoke1 enable: false 4 managedPolicies: 5 - talm-policy preCaching: false remediationStrategy: 6 canaries: 7 - spoke1 maxConcurrency: 2 8 timeout: 240 clusterLabelSelectors: 9 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 10 status: 11 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 12 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 13 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status:", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 
currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: \"False\" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: \"True\" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} backup: false clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy 
namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}", "oc apply -f <name>.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> --type merge -p '{\"spec\":{\"enable\":true}}'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by 
other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.4.13.4 namespace: platform-upgrade spec: disabled: false policy-templates: - 
objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.13 desiredUpdate: version: 4.4.13.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.4.13.4 remediationAction: inform severity: low remediationAction: inform", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5", "oc create -f cgu-1.yaml", "oc get cgu --all-namespaces", "NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled", "oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq", "{ \"computedMaxConcurrency\": 2, \"conditions\": [ { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Not enabled\", 1 \"reason\": \"NotEnabled\", \"status\": \"False\", \"type\": \"Progressing\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} 
}, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": {} }", "oc get policies -A", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq", "{ \"computedMaxConcurrency\": 2, \"conditions\": [ 1 { \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"All selected clusters are valid\", \"reason\": \"ClusterSelectionCompleted\", \"status\": \"True\", \"type\": \"ClustersSelected\", \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"Completed validation\", \"reason\": \"ValidationCompleted\", \"status\": \"True\", \"type\": \"Validated\", \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": { \"currentBatch\": 1, \"currentBatchStartedAt\": \"2022-02-25T15:54:16Z\", \"remediationPlanForBatch\": { \"spoke1\": 0, \"spoke2\": 1 }, \"startedAt\": 
\"2022-02-25T15:54:16Z\" } }", "export KUBECONFIG=<cluster_kubeconfig_absolute_path>", "oc get subs -A | grep -i <subscription_name>", "NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.13.5 True True 43s Working towards 4.4.13.7: 71 of 735 done (9% complete)", "oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath=\"{.status}\"", "oc get installplan -n <subscription_namespace>", "NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1", "oc get csv -n <operator_namespace>", "NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded", "nodes: - hostName: \"node-1.example.com\" role: \"master\" rootDeviceHints: hctl: \"0:2:0:0\" deviceName: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 #Disk /dev/disk/by-id/scsi-3600508b400105e210000900000490000: #893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - mount_point: /var/recovery size: 51200 start: 800000", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: true managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f clustergroupupgrades-group-du.yaml", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "{ \"backup\": { \"clusters\": [ \"cnfdb2\", \"cnfdb1\" ], \"status\": { \"cnfdb1\": \"Succeeded\", \"cnfdb2\": \"Failed\" 1 } }, \"computedMaxConcurrency\": 1, \"conditions\": [ { \"lastTransitionTime\": \"2022-04-05T10:37:19Z\", \"message\": \"Backup failed for 1 cluster\", 2 \"reason\": \"PartiallyDone\", 3 \"status\": \"True\", 4 \"type\": \"Succeeded\" } ], \"precaching\": { \"spec\": {} }, \"status\": {}", "oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno", "ostree admin status", "ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9", "ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca", "rpm-ostree rollback -r", "/var/recovery/upgrade-recovery.sh", "systemctl reboot", "/var/recovery/upgrade-recovery.sh --resume", "/var/recovery/upgrade-recovery.sh --restart", "oc get clusterversion,nodes,clusteroperator", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.4.13.23 True False 86d Cluster version is 4.4.13.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.4.13.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.4.13.23 True False False 86d ...........", "oc adm release info <ocp-version>", "apiVersion: v1 kind: 
ConfigMap metadata: name: cluster-group-upgrade-overrides data: excludePrecachePatterns: | azure 1 aws vsphere alibaba", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f clustergroupupgrades-group-du.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "{ \"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is required and not done\", \"reason\": \"InProgress\", \"status\": \"False\", \"type\": \"PrecachingSucceeded\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:34Z\", \"message\": \"Pre-caching spec is valid and consistent\", \"reason\": \"PrecacheSpecIsWellFormed\", \"status\": \"True\", \"type\": \"PrecacheSpecValid\" } ], \"precaching\": { \"clusters\": [ \"cnfdb1\" 1 \"cnfdb2\" ], \"spec\": { \"platformImage\": \"image.example.io\"}, \"status\": { \"cnfdb1\": \"Active\" \"cnfdb2\": \"Succeeded\"} } }", "oc get jobs,pods -n openshift-talo-pre-cache", "NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "\"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:30:41Z\", \"message\": \"The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies\", \"reason\": \"UpgradeCompleted\", \"status\": \"True\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:28:57Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingSucceeded\" 1 }", "oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>", "oc apply -f <ClusterGroupUpgradeCR_YAML>", "oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'", "[\"group-du-sno-validator-du-validator-policy\", \"policy2-common-nto-sub-policy\", \"policy3-common-ptp-sub-policy\"]", "oc get policies --all-namespaces", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h", "oc get policies --all-namespaces", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h", "oc get managedclusters", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h", "oc get pod -n openshift-operators", "NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m", "oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c 
manager", "ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem", "oc get managedclusters", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2", "oc get managedcluster --selector=upgrade=true 1", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h", "spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true", "oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'", "[\"spoke1\", \"spoke3\"]", "oc get managedcluster --selector=upgrade=true", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h", "oc get jobs,pods -n openshift-talo-pre-cache", "oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'", "{\"maxConcurrency\":2, \"timeout\":240}", "oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'", "2", "oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'", "{\"lastTransitionTime\":\"2022-02-17T22:25:28Z\", \"message\":\"Missing managed policies:[policyList]\", \"reason\":\"NotAllManagedPoliciesExist\", \"status\":\"False\", \"type\":\"Validated\"}", "oc get cgu lab-upgrade -oyaml", "status: ... 
copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default", "oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'", "[[\"spoke2\", \"spoke3\"]]", "oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager", "ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem", "oc get pods -n openshift-talo-pre-cache", "oc logs -n openshift-talo-pre-cache <pod name>", "oc describe pod -n openshift-talo-pre-cache <pod name>", "oc describe job -n openshift-talo-pre-cache pre-cache", "imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "OCP_RELEASE_NUMBER=<release_version>", "ARCHITECTURE=<cluster_architecture> 1", "DIGEST=\"USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')\"", "DIGEST_ALGO=\"USD{DIGEST%%:*}\"", "DIGEST_ENCODED=\"USD{DIGEST#*:}\"", "SIGNATURE_BASE64=USD(curl -s \"https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1\" | base64 -w0 && echo)", "cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF", "curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.13 -o ~/upgrade-graph_stable-4.13", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: \"platform-upgrade-prep\" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: \"platform-upgrade-prep\" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: \"platform-upgrade\" metadata: name: version spec: channel: \"stable-4.13\" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.13 desiredUpdate: version: 4.13.4 status: history: - version: 4.13.4 state: \"Completed\"", "oc get policies -A | grep platform-upgrade", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-platform-upgrade 
-o jsonpath='{.status.precaching.status}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.13 1 updateStrategy: 2 registryPoll: interval: 1h", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: ... - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"fec-catsrc-policy\" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: \"subscriptions-fec-policy\" spec: channel: \"stable\" source: certified-operators", "oc get policies -A | grep -E \"catsrc-policy|subscription\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1", "oc apply -f cgu-operator-upgrade-prep.yml", "oc get policies -A | grep -E \"catsrc-policy\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-operator-upgrade.yml", "oc get policy common-subscriptions-policy -n <policy_namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'", "oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq", "[ { \"lastTransitionTime\": \"2022-03-08T20:49:08.000Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-03-08T20:55:30.000Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" } ]", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v{product-version} updateStrategy: registryPoll: interval: 1h 
status: connectionState: lastObservedState: READY - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olm/redhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namspace spec: source: redhat-operators-disconnected-v2 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true", "oc apply -f cgu-platform-operator-upgrade-prep.yml", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-operator-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get jobs,pods -n openshift-talm-pre-cache", "oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- fileName: PaoSubscriptionNS.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave", "oc get policy -n ztp-common common-subscriptions-policy", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: \"46666836\" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: \"\" 1 deleteClusterLabels: ztp-running: \"\" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: \"\" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240", "mkdir -p ./update", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.13 extract /home/ztp --tar | tar x -C ./update", "oc get managedcluster -l 'local-cluster!=true'", "oc label managedcluster -l 'local-cluster!=true' ztp-done=", "oc delete -f update/argocd/deployment/clusters-app.yaml", "oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge", "oc delete -f 
update/argocd/deployment/policies-app.yaml", "├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file update/argocd/deployment/argocd-openshift-gitops-patch.json", "oc apply -k update/argocd/deployment", "oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq", "{\"daemonNodeSelector\":{\"node-role.kubernetes.io/master\":\"\"}} 1", "oc get sriovoperatorconfig/default -n openshift-sriov-network-operator -ojsonpath='{.spec}' | jq", "{\"configDaemonNodeSelector\":{\"node-role.kubernetes.io/worker\":\"\"},\"disableDrain\":false,\"enableInjector\":true,\"enableOperatorWebhook\":true} 1", "spec: - fileName: PtpOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" - fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-sno-workers\" namespace: \"example-sno\" spec: bindingRules: sites: \"example-sno\" 1 mcp: \"worker\" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: \"config-policy\" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: \"4-47\" reserved: \"0-3\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker", "cat <<EOF | oc apply -f - apiVersion: 
ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF", "nodes: - hostName: \"example-node2.example.com\" role: \"worker\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node2-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up macAddress: \"AA:BB:CC:DD:EE:11\" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "apiVersion: v1 data: password: \"password\" username: \"username\" kind: Secret metadata: name: \"example-node2-bmh-secret\" namespace: example-sno type: Opaque", "oc get ppimg -n example-sno", "NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated", "oc get bmh -n example-sno", "NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1", "oc get agent -n example-sno --watch", "NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 
14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done", "oc get managedclusterinfo/example-sno -n example-sno -o jsonpath='{range .status.nodeList[*]}{.name}{\"\\t\"}{.conditions}{\"\\t\"}{.labels}{\"\\n\"}{end}'", "example-sno [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/master\":\"\",\"node-role.kubernetes.io/worker\":\"\"} example-node2 [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/worker\":\"\"}", "podman pull quay.io/openshift-kni/telco-ran-tools:latest", "podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v", "factory-precaching-cli version 20221018.120852+main.feecf17", "curl --globoff -H \"Content-Type: application/json\" -H \"Accept: application/json\" -k -X GET --user USD{username_password} https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool", "curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Image\": \"http://[USDHTTPd_IP]/RHCOS-live.iso\"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia", "curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Boot\":{ \"BootSourceOverrideEnabled\": \"Once\", \"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk", "wipefs -a /dev/nvme0n1", "/dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa", "podman run -v /dev:/dev --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli partition \\ 1 -d /dev/nvme0n1 \\ 2 -s 250 3", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part", "gdisk -l /dev/nvme0n1", "GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. 
Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data", "lsblk -f /dev/nvme0n1", "NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071", "mount /dev/nvme0n1p1 /mnt/", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1", "taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help", "oc get csv -A | grep -i advanced-cluster-management", "open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded", "oc get csv -A | grep -i multicluster-engine", "multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded", "mkdir /root/.docker", "cp config.json /root/.docker/config.json 1", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- factory-precaching-cli download \\ 1 -r 4.13.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6", "Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f Summary: Release: 4.13.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83", "ls -l /mnt 1", "-rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 
1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 
1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.13.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s 7", "Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 Summary: Release: 4.13.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.13.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --generate-imageset 8", "Generated /mnt/imageset.yaml", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.13 minVersion: 4.13.0 1 maxVersion: 4.13.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.13' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.13 packages: - name: sriov-fec 11 channels: - name: 'stable'", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] 
operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.13 packages: - name: sriov-fec channels: - name: 'stable'", "cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.", "update-ca-trust", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.13.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --skip-imageset 8", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.13.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt --img quay.io/custom/repository --du-profile -s --skip-imageset", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-5g-lab\" namespace: \"example-5g-lab\" spec: baseDomain: \"example.domain.redhat.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"img4.9.10-x86-64-appsub\" 1 sshPublicKey: \"ssh-rsa ...\" clusters: - clusterName: \"sno-worker-0\" clusterImageSetNameRef: \"eko4-img4.11.5-x86-64-appsub\" 2 clusterLabels: group-du-sno: \"\" common-411: true sites : \"example-5g-lab\" vendor: \"OpenShift\" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: \"OVNKubernetes\" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ \"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-images.service\\nBindsTo=precache-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-images.service\" }, { \"name\": \"precache-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached images in discovery stage\\nAfter=var-mnt.mount\\nBefore=agent.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ai.sh\\n#TimeoutStopSec=30\\n\\n[Install]\\nWantedBy=multi-user.target default.target\\nWantedBy=agent.service\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ai.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": 
\"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200\" } }, { \"overwrite\": true, \"path\": \"/usr/local/bin/agent-fix-bz1964591\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true\" } } ] } }' nodes: - hostName: \"snonode.sno-worker-0.example.domain.redhat.com\" role: \"master\" bmcAddress: \"idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"worker0-bmh-secret\" bootMACAddress: \"e4:43:4b:bd:90:46\" bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '[\"--save-partlabel\", \"data\"]' ignitionConfigOverride: | { 
\"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-ocp-images.service\\nBindsTo=precache-ocp-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-ocp-images.service\" }, { \"name\": \"precache-ocp-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached OCP images into containers storage\\nAfter=var-mnt.mount\\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ocp.sh\\nTimeoutStopSec=60\\n\\n[Install]\\nWantedBy=multi-user.target\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ocp.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200\" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: \"AA:BB:CC:11:22:33\" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"ens1f0\" macAddress: \"AA:BB:CC:11:22:33\"", "OPTIONS: -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL --save-partlabel <lx> Save partitions with this label glob --save-partindex <id> Save partitions with this number or range --insecure-ignition Allow Ignition URL without HTTPS or hash", "Generating list of pre-cached artifacts error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror 
--ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying next host error=failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run \"oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME\" for more information. error: error rendering new refs: render reference \"eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11\": error resolving name : failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority", "cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.", "update-ca-trust", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.13.0 --acm-version 2.5.4 --mce-version 2.0.4 -f /mnt \\--img quay.io/custom/repository --du-profile -s --skip-imageset" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/clusters-at-the-network-far-edge
10.4. Restricting Domains for PAM services
10.4. Restricting Domains for PAM services Important This feature requires SSSD to be running on the system. SSSD enables you to restrict which domains can be accessed by PAM services. SSSD evaluates authentication requests from PAM services based on the user the particular PAM service is running as. Whether the PAM service can access an SSSD domain depends on whether the PAM service user is able to access the domain. An example use case is an environment where external users are allowed to authenticate to an FTP server. The FTP server is running as a separate non-privileged user that should only be able to authenticate to a selected SSSD domain, separate from internal company accounts. With this feature, the administrator can allow the FTP user to only authenticate to selected domains specified in the FTP PAM configuration file. Note This functionality is similar to legacy PAM modules, such as pam_ldap , which were able to use a separate configuration file as a parameter for a PAM module. Options to Restrict Access to Domains The following options are available to restrict access to selected domains: pam_trusted_users in /etc/sssd/sssd.conf This option accepts a list of numerical UIDs or user names representing the PAM services that are to be trusted by SSSD. The default setting is all , which means all service users are trusted and can access any domain. pam_public_domains in /etc/sssd/sssd.conf This option accepts a list of public SSSD domains. Public domains are domains accessible even for untrusted PAM service users. The option also accepts the all and none values. The default value is none , which means no domains are public and untrusted service users therefore cannot access any domain. domains for PAM configuration files This option specifies a list of domains against which a PAM service can authenticate. If you use domains without specifying any domain, the PAM service will not be able to authenticate against any domain, for example: If domains is not used in the PAM configuration file, the PAM service is able to authenticate against all domains, on the condition that the service is running under a trusted user. The domains option in the /etc/sssd/sssd.conf SSSD configuration file also specifies a list of domains to which SSSD attempts to authenticate. Note that the domains option in a PAM configuration file cannot extend the list of domains in sssd.conf , it can only restrict the sssd.conf list of domains by specifying a shorter list. Therefore, if a domain is specified in the PAM file but not in sssd.conf , the PAM service will not be able to authenticate against the domain. The default settings pam_trusted_users = all and pam_public_domains = none specify that all PAM service users are trusted and can access any domain. The domains option for PAM configuration files can be used in this situation to restrict the domains that can be accessed. If you specify a domain using domains in the PAM configuration file while sssd.conf contains pam_public_domains , it might be required to specify the domain in pam_public_domains as well. If pam_public_domains is used but does not include the required domain, the PAM service will not be able to successfully authenticate against the domain if it is running under an untrusted user. Note Domain restrictions defined in a PAM configuration file only apply to authentication actions, not to user lookups. For more information about the pam_trusted_users and pam_public_domains options, see the sssd.conf (5) man page. 
For more information about the domains option used in PAM configuration files, see the pam_sss (8) man page. Example 10.2. Restricting Domains for a PAM Service To restrict the domains against which a PAM service can authenticate: Make sure SSSD is configured to access the required domain or domains. The domains against which SSSD can authenticate are defined in the domains option in the /etc/sssd/sssd.conf file. Specify the domain or domains to which a PAM service will be able to authenticate. To do this, set the domains option in the PAM configuration file. For example: The PAM service is now only allowed to authenticate against domain1 .
[ "auth required pam_sss.so domains=", "[sssd] domains = domain1, domain2, domain3", "auth sufficient pam_sss.so forward_pass domains=domain1 account [default=bad success=ok user_unknown=ignore] pam_sss.so password sufficient pam_sss.so use_authtok" ]
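A minimal sketch of how the options described above can work together, reusing the domain1, domain2, and domain3 names from the examples and assuming a hypothetical FTP service running as an untrusted service user; the file names and values are illustrative, not defaults:

# /etc/sssd/sssd.conf (illustrative)
[sssd]
services = nss, pam
domains = domain1, domain2, domain3

[pam]
# Only root is a trusted PAM service user; the FTP service user stays untrusted
pam_trusted_users = root
# Untrusted service users may still authenticate against domain1
pam_public_domains = domain1

# /etc/pam.d/vsftpd (hypothetical service file) - additionally pin the FTP service to domain1
auth    sufficient    pam_sss.so forward_pass domains=domain1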
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/restricting_domains
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_alt-java/proc-providing-feedback-on-redhat-documentation
4.3. Confined and Unconfined Users
4.3. Confined and Unconfined Users Each Linux user is mapped to an SELinux user via SELinux policy. This allows Linux users to inherit the restrictions on SELinux users. This Linux user mapping is seen by running the semanage login -l command as the Linux root user: In Red Hat Enterprise Linux 6, Linux users are mapped to the SELinux __default__ login by default, which is mapped to the SELinux unconfined_u user. The following line defines the default mapping: The following procedure demonstrates how to add a new Linux user to the system and how to map that user to the SELinux unconfined_u user. It assumes that the Linux root user is running unconfined, as it does by default in Red Hat Enterprise Linux 6: As the Linux root user, run the useradd newuser command to create a new Linux user named newuser . As the Linux root user, run the passwd newuser command to assign a password to the Linux newuser user: Log out of your current session, and log in as the Linux newuser user. When you log in, the pam_selinux PAM module automatically maps the Linux user to an SELinux user (in this case, unconfined_u ), and sets up the resulting SELinux context. The Linux user's shell is then launched with this context. Run the id -Z command to view the context of a Linux user: Note If you no longer need the newuser user on your system, log out of the Linux newuser 's session, log in with your account, and run the userdel -r newuser command as the Linux root user. It will remove newuser along with their home directory. Confined and unconfined Linux users are subject to executable and writable memory checks, and are also restricted by MCS or MLS. If an unconfined Linux user executes an application that SELinux policy defines as one that can transition from the unconfined_t domain to its own confined domain, the unconfined Linux user is still subject to the restrictions of that confined domain. The security benefit of this is that, even though a Linux user is running unconfined, the application remains confined. Therefore, the exploitation of a flaw in the application can be limited by the policy. Similarly, we can apply these checks to confined users. However, each confined Linux user is restricted by a confined user domain against the unconfined_t domain. The SELinux policy can also define a transition from a confined user domain to its own target confined domain. In such a case, confined Linux users are subject to the restrictions of that target confined domain. The main point is that special privileges are associated with the confined users according to their role. In the table below, you can see examples of basic confined domains for Linux users in Red Hat Enterprise Linux 6: Table 4.1. SELinux User Capabilities User Role Domain X Window System su or sudo Execute in home directory and /tmp/ (default) Networking sysadm_u sysadm_r sysadm_t yes su and sudo yes yes staff_u staff_r staff_t yes only sudo yes yes user_u user_r user_t yes no yes yes guest_u guest_r guest_t no no no no xguest_u xguest_r xguest_t yes no no Firefox only Linux users in the user_t , guest_t , and xguest_t domains can only run set user ID (setuid) applications if SELinux policy permits it (for example, passwd ). These users cannot run the su and sudo setuid applications, and therefore cannot use these applications to become the Linux root user. Linux users in the sysadm_t , staff_t , user_t , and xguest_t domains can log in via the X Window System and a terminal. 
By default, Linux users in the guest_t and xguest_t domains cannot execute applications in their home directories or /tmp/, preventing them from executing applications, which inherit users' permissions, in directories they have write access to. This helps prevent flawed or malicious applications from modifying users' files. By default, Linux users in the staff_t and user_t domains can execute applications in their home directories and /tmp/. Refer to Section 6.6, "Booleans for Users Executing Applications" for information about allowing and preventing users from executing applications in their home directories and /tmp/. The only network access Linux users in the xguest_t domain have is Firefox connecting to web pages. Alongside the already mentioned SELinux users, there are special roles that can be mapped to those users. These roles determine what SELinux allows the user to do: webadm_r can only administer SELinux types related to the Apache HTTP Server. See chapter Apache HTTP Server in the Managing Confined Services guide for further information. dbadm_r can only administer SELinux types related to the MariaDB database and the PostgreSQL database management system. See chapters MySQL and PostgreSQL in the Managing Confined Services guide for further information. logadm_r can only administer SELinux types related to the syslog and auditlog processes. secadm_r can only administer SELinux. auditadm_r can only administer processes related to the audit subsystem. To list all available roles, run the following command: Note that the seinfo command is provided by the setools-console package, which is not installed by default.
[ "~]# semanage login -l Login Name SELinux User MLS/MCS Range __default__ unconfined_u s0-s0:c0.c1023 root unconfined_u s0-s0:c0.c1023 system_u system_u s0-s0:c0.c1023", "__default__ unconfined_u s0-s0:c0.c1023", "~]# passwd newuser Changing password for user newuser. New UNIX password: Enter a password Retype new UNIX password: Enter the same password again passwd: all authentication tokens updated successfully.", "[newuser@localhost ~]USD id -Z unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "~]USD seinfo -r" ]
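As a sketch of how the default mapping shown above can be changed, the following commands (run as the Linux root user) would map the example newuser account to the confined staff_u SELinux user instead of leaving it on the __default__ mapping; the choice of staff_u is illustrative:

~]# semanage login -a -s staff_u newuser
~]# semanage login -l | grep newuser

After newuser logs in again, id -Z should report a staff_u context rather than unconfined_u.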
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-targeted_policy-confined_and_unconfined_users
Appendix A. Additional procedures
Appendix A. Additional procedures A.1. Creating bootable media The P2V Client can be booted from PXE boot, a bootable USB device, or optical media. Scripts for preparing boot options are included with the rhel-6.x-p2v.iso ISO in the LiveOS directory. A.1.1. Create a P2V client boot CD The exact series of steps that produces a CD from an image file varies greatly from computer to computer, depending on the operating system and disc burning software installed. This procedure describes burning an ISO image to disk using Brasero which is included in Red Hat Enterprise Linux 6. Make sure that your disc burning software is capable of burning discs from image files. Although this is true of most disc burning software, exceptions exist. Insert a blank, writable CD into your computer's CD or DVD burner. Open the Applications menu, choose the Sound and Video sub-menu, and click Brasero Disk Burner . Click the Burn Image button. Click the Click here to select a disc image button. Browse to the rhel-6.x-p2v.iso and select it for burning. Click Burn . Your BIOS may need to be changed to allow booting from your DVD/CD-ROM drive. A.1.2. Create a bootable P2V USB media As root, mount the rhel-6.x-p2v.iso : Attach your USB device to the computer. For the livecd-iso-to-disk script to function, the USB filesystem must be formatted vfat, ext[234] or btrfs. From a terminal as root run the livecd-iso-to-disk script: When the script finishes successfully, eject the USB device. A.1.3. Create a PXE boot image As root, mount the rhel-6.x-p2v.iso From a terminal as root run the livecd-iso-to-pxeboot script: When the command successfully completes, there is a tftpboot directory in the directory from which the command was run. Rename the newly created tftpboot directory to a more descriptive name: Copy the p2vboot/ sub-directory to the /tftpboot directory: Set up your DHCP, TFTP and PXE server to serve /tftpboot/p2vboot/pxeboot.0 . Note The initrd image contains the whole CD ISO. You will notice when pxebooting that initrd can take a long time to download. This is normal behavior.
[ "mkdir /mnt/p2vmount", "mount -o loop rhel-6.x-p2v.iso /mnt/p2vmount", "bash /mnt/p2vmount/LiveOS/livecd-iso-to-disk /PATH/TO/rhel-6.x-p2v.iso /dev/YOURUSBDEVICE", "mkdir /mnt/p2vmount", "mount -o loop rhel-6.x-p2v.iso /mnt/p2vmount", "bash /mnt/p2vmount/LiveOS/livecd-iso-to-pxeboot /PATH/TO/rhel-6.x-p2v.iso", "mv tftpboot/ p2vboot/", "cp -R p2vboot/ /tftpboot/" ]
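The final step leaves the DHCP/TFTP configuration to the administrator; a minimal ISC dhcpd.conf fragment for serving the PXE image might look like the following, where the subnet, address range, and next-server address are placeholder assumptions for your environment:

# /etc/dhcp/dhcpd.conf (illustrative values)
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.200;
    next-server 192.168.0.1;        # IP address of the TFTP server
    filename "p2vboot/pxeboot.0";   # path relative to the TFTP root, /tftpboot
}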
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/appendix_additional_procedures
Chapter 5. File Systems
Chapter 5. File Systems Support of Btrfs File System The Btrfs (B-Tree) file system is supported as a Technology Preview in Red Hat Enterprise Linux 7.1. This file system offers advanced management, reliability, and scalability features. It enables users to create snapshots, it enables compression and integrated device management. OverlayFS The OverlayFS file system service allows the user to "overlay" one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This can be useful because it allows multiple users to share a file-system image, for example containers, or when the base image is on read-only media, for example a DVD-ROM. In Red Hat Enterprise Linux 7.1, OverlayFS is supported as a Technology Preview. There are currently two restrictions: It is recommended to use ext4 as the lower file system; the use of xfs and gfs2 file systems is not supported. SELinux is not supported, and to use OverlayFS, it is required to disable enforcing mode. Support of Parallel NFS Parallel NFS (pNFS) is a part of the NFS v4.1 standard that allows clients to access storage devices directly and in parallel. The pNFS architecture can improve the scalability and performance of NFS servers for several common workloads. pNFS defines three different storage protocols or layouts: files, objects, and blocks. The client supports the files layout, and since Red Hat Enterprise Linux 7.1, the blocks and object layouts are fully supported. Red Hat continues to work with partners and open source projects to qualify new pNFS layout types and to provide full support for more layout types in the future. For more information on pNFS, refer to http://www.pnfs.com/ .
null
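As a generic sketch of the Btrfs snapshot and compression features described above (the device name and mount points are placeholders, and the commands are not taken from this release note):

# create a Btrfs file system, mount it with compression, and take a snapshot
mkfs.btrfs /dev/sdb1
mount -o compress=zlib /dev/sdb1 /mnt/btrfs
btrfs subvolume snapshot /mnt/btrfs /mnt/btrfs/snap-1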
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-file_systems
Chapter 9. Monitoring hosts using Red Hat Insights
Chapter 9. Monitoring hosts using Red Hat Insights You can use Insights to diagnose systems and downtime related to security exploits, performance degradation, and stability failures. You can use the Insights dashboard to quickly identify key risks to stability, security, and performance. You can sort by category, view details of the impact and resolution, and then determine what systems are affected. To use Insights to monitor hosts that you manage with Satellite, you must first install Insights on your hosts and register your hosts with Insights. For new Satellite hosts, you can install and configure Insights during host registration to Satellite. For more information, see Section 4.3, "Registering hosts by using global registration" . For hosts already registered to Satellite, you can install and configure Insights on your hosts by using an Ansible role. For more information, see Section 9.3, "Deploying Red Hat Insights using the Ansible role" . Additional information To view the logs for all plugins, go to /var/log/foreman/production.log . If you have problems connecting to Insights, ensure that your certificates are up-to-date. Refresh your subscription manifest to update your certificates. You can change the default schedule for running insights-client by configuring insights-client.timer on a host. For more information, see Changing the insights-client schedule in the Client Configuration Guide for Red Hat Insights . 9.1. Access to information from Insights in Satellite You can access the additional information available for hosts from Red Hat Insights in the following places in the Satellite web UI: Navigate to Configure > Insights where the vertical ellipsis to the Remediate button provides a View in Red Hat Insights link to the general recommendations page. On each recommendation line, the vertical ellipsis provides a View in Red Hat Insights link to the recommendation rule, and, if one is available for that recommendation, a Knowledgebase article link. For additional information, navigate to Hosts > All Hosts . If the host has recommendations listed, click on the number of recommendations. On the Insights tab, the vertical ellipsis to the Remediate button provides a Go To Satellite Insights page link to information for the system, and a View in Red Hat Insights link to host details on the console. 9.2. Excluding hosts from rh-cloud and insights-client reports You can set the host_registration_insights parameter to False to omit rh-cloud and insights-client reports. Satellite will exclude the hosts from rh-cloud reports and block insights-client from uploading a report to the cloud. Use the following procedure to change the value of host_registration_insights parameter: Procedure In the Satellite web UI, navigate to Host > All Hosts . Select any host for which you want to change the value. On the Parameters tab, click on the edit button of host_registration_insights . Set the value to False . This parameter can also be set at the organization, hostgroup, subnet, and domain level. Also, it automatically prevents new reports from being uploaded as long as they are associated with the entity. If you set the parameter to false on a host that is already reported on the Red Hat Hybrid Cloud Console , it will be still removed automatically from the inventory. However, this process can take some time to complete. 9.3. 
Deploying Red Hat Insights using the Ansible role The RedHatInsights.insights-client Ansible role is used to automate the installation and registration of hosts with Insights. For more information about adding this role to your Satellite, see Getting Started with Ansible in Satellite in Managing configurations using Ansible integration . Procedure Add the RedHatInsights.insights-client role to the hosts. For new hosts, see Section 2.1, "Creating a host in Red Hat Satellite" . For existing hosts, see Using Ansible Roles to Automate Repetitive Tasks on Clients in Managing configurations using Ansible integration . To run the RedHatInsights.insights-client role on your host, navigate to Hosts > All Hosts and click the name of the host that you want to use. On the host details page, expand the Schedule a job dropdown menu. Click Run Ansible roles . 9.4. Configuring synchronization of Insights recommendations for hosts You can enable automatic synchronization of the recommendations from Red Hat Hybrid Cloud Console that occurs daily by default. If you leave the setting disabled, you can synchronize the recommendations manually. Procedures To get the recommendations automatically: In the Satellite web UI, navigate to Configure > Insights . Enable Sync Automatically . To get the recommendations manually: In the Satellite web UI, navigate to Configure > Insights . On the vertical ellipsis, click Sync Recommendations . 9.5. Configuring automatic removal of hosts from the Insights Inventory When hosts are removed from Satellite, they can also be removed from the inventory of Red Hat Insights, either automatically or manually. You can configure automatic removal of hosts from the Insights Inventory during Red Hat Hybrid Cloud Console synchronization with Satellite that occurs daily by default. If you leave the setting disabled, you can still remove the bulk of hosts from the Inventory manually. Prerequisites Your user account must have the permission of view_foreman_rh_cloud to view the Inventory Upload page in Satellite web UI. Procedure In the Satellite web UI, navigate to Configure > Inventory Upload . Enable the Automatic Mismatch Deletion setting. 9.6. Creating an Insights remediation plan for hosts With Satellite, you can create a Red Hat Insights remediation plan and run the plan on Satellite hosts. Procedure In the Satellite web UI, navigate to Configure > Insights . On the Red Hat Insights page, select the number of recommendations that you want to include in an Insights plan. You can only select the recommendations that have an associated playbook. Click Remediate . In the Remediation Summary window, you can select the Resolutions to apply. Use the Filter field to search for specific keywords. Click Remediate . In the Job Invocation page, do not change the contents of precompleted fields. Optional. For more advanced configuration of the Remote Execution Job, click Show Advanced Fields . Select the Type of query you require. Select the Schedule you require. Click Submit . Alternatively: In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. On the Host details page, click Recommendations . On the Red Hat Insights page, select the number of recommendations you want to include in an Insights plan and proceed as before. In the Jobs window, you can view the progress of your plan.
null
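The RedHatInsights.insights-client role described above is normally assigned and run from the Satellite web UI; as a sketch, an equivalent standalone Ansible playbook applying the same role could look like the following, where the insights_clients host group is an assumed inventory group:

---
- name: Install and register hosts with Red Hat Insights
  hosts: insights_clients
  become: true
  roles:
    - RedHatInsights.insights-client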
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/Monitoring_Hosts_Using_Red_Hat_Insights_managing-hosts
Images
Images OpenShift Container Platform 4.16 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "registry.redhat.io", "docker.io/openshift/jenkins-2-centos7", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324", "apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed", "oc edit configs.samples.operator.openshift.io/cluster", "apiVersion: samples.operator.openshift.io/v1 kind: Config", "oc tag -d <image_stream_name:tag>", "Deleted tag default/<image_stream_name:tag>.", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<cluster_architecture> 1", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\" --insecure=true 1", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror 
registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y", "RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y", "FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile", "FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y", "RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory", "LABEL io.openshift.tags mongodb,mongodb24,nosql", "LABEL io.openshift.wants mongodb,redis", "LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support", "LABEL io.openshift.non-scalable true", "LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "s2i create <image_name> <destination_directory>", "IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . 
IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run", "podman build -t <builder_image_name>", "docker build -t <builder_image_name>", "podman run <builder_image_name> .", "docker run <builder_image_name> .", "s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_", "podman run <output_application_image_name>", "docker run <output_application_image_name>", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "oc tag <source> <destination>", "oc tag ruby:2.0 ruby:static-2.0", "oc tag --alias=true <source> <destination>", "oc delete istag/ruby:latest", "oc tag -d ruby:latest", "<image_stream_name>:<tag>", "<image_stream_name>@<id>", "openshift/ruby-20-centos7:2.0", "registry.redhat.io/rhel7:latest", "centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e", "oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b", "oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret generic <pull_secret_name> --from-file=<path/to/.config/containers/auth.json> --type=kubernetes.io/podmanconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc get serviceaccount default -o yaml", "apiVersion: v1 imagePullSecrets: - name: default-dockercfg-123456 - name: <pull_secret_name> kind: ServiceAccount metadata: annotations: openshift.io/internal-registry-pull-secret-ref: <internal_registry_pull_secret> creationTimestamp: \"2025-03-03T20:07:52Z\" name: default namespace: default resourceVersion: \"13914\" uid: 9f62dd88-110d-4879-9e27-1ffe269poe3 secrets: - name: <pull_secret_name>", "apiVersion: v1 kind: Pod metadata: name: <secure_pod_name> spec: containers: - name: <container_name> image: quay.io/my-private-image imagePullSecrets: - name: <pull_secret_name>", "apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: <example_workflow> spec: entrypoint: <main_task> imagePullSecrets: - name: <pull_secret_name>", "oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso", "oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login 
--registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5", "<image-stream-name>@<image-id>", "origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: my-image-stream tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest", "<imagestream name>:<tag>", "origin-ruby-sample:latest", "apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: \"1001\" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: \"1001\" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: \"1.0\" kind: DockerImage dockerImageMetadataVersion: \"1.0\" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "oc describe is/<image-name>", "oc describe is/python", "Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago", "oc describe istag/<image-stream>:<tag-name>", "oc describe istag/python:latest", "Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801", "oc get istag <image-stream-tag> -ojsonpath=\"{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\\n'}{end}\"", "oc get istag busybox:latest -ojsonpath=\"{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\\n'}{end}\"", "linux/amd64 linux/arm linux/arm64 linux/386 linux/mips64le linux/ppc64le linux/riscv64 linux/s390x", "oc tag <image-name:tag1> <image-name:tag2>", "oc tag python:3.5 python:latest", "Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.", "oc describe is/python", "Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago", "oc tag <repository/image> <image-name:tag>", "oc tag docker.io/python:3.6.0 python:3.6", "Tag python:3.6 set to docker.io/python:3.6.0.", "oc tag <image-name:tag> <image-name:latest>", "oc tag python:3.6 python:latest", "Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.", "oc tag -d <image-name:tag>", "oc tag -d python:3.6", "Deleted tag default/python:3.6", "oc tag <repository/image> <image-name:tag> --scheduled", "oc tag docker.io/python:3.6.0 python:3.6 --scheduled", "Tag python:3.6 set to import docker.io/python:3.6.0 
periodically.", "oc tag <repositiory/image> <image-name:tag>", "oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson", "oc import-image <imagestreamtag> --from=<image> --confirm", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --reference-policy=local --confirm", "--- Arch: <none> Manifests: linux/amd64 sha256:6e325b86566fafd3c4683a05a219c30c421fbccbf8d87ab9d20d4ec1131c3451 linux/arm64 sha256:d8fad562ffa75b96212c4a6dc81faf327d67714ed85475bf642729703a2b5bf6 linux/ppc64le sha256:7b7e25338e40d8bdeb1b28e37fef5e64f0afd412530b257f5b02b30851f416e1 ---", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='Legacy' --confirm", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --scheduled=true", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --insecure=true", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name>", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal'", "oc set image-lookup mysql", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true", "oc set image-lookup imagestream --list", "oc set image-lookup deploy/mysql", "apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql", "oc set image-lookup deploy/mysql --enabled=false", "apiVersion: v1 kind: Pod metadata: annotations: image.openshift.io/triggers: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, # ]", "oc set triggers deploy/example --from-image=example:latest -c web", "apiVersion: apps/v1 kind: Deployment metadata: annotations: image.openshift.io/triggers: '[{\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"example:latest\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"container\\\")].image\"}]'", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.29.4 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.29.4 
ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.29.4 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.29.4 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.29.4 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.29.4", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-5.1# cat /etc/containers/policy.json | jq '.'", "{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }", "spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-5.1# cat etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload", "[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" mirror-by-digest-only = true [[registry.mirror]] location = 
\"internal-mirror.io/openshift-payload\"", "oc edit image.config.openshift.io cluster", "spec: registrySources: blockedRegistries: - quay.io/openshift-payload", "[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf", "unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... 
docker://example.io/example/ubi-minimal", "apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.29.4 ip-10-0-138-148.ec2.internal Ready master 11m v1.29.4 ip-10-0-139-122.ec2.internal Ready master 11m v1.29.4 ip-10-0-147-35.ec2.internal Ready worker 7m v1.29.4 ip-10-0-153-12.ec2.internal Ready worker 7m v1.29.4 ip-10-0-154-10.ec2.internal Ready master 11m v1.29.4", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/redhat\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf", "oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>", "oc adm migrate icsp 
icsp.yaml icsp-2.yaml --dest-dir idms-files", "wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml", "oc create -f <path_to_the_directory>/<file-name>.yaml", "podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7", "image:///usr/libexec/s2i", "#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit USDrc", "#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/images/index
Chapter 4. Known Issues
Chapter 4. Known Issues 4.1. Reboot requirement detection The reboot requirement detection is not reliable on RHEL 8.5 and later on platform ppc64le. A failed reboot requirement detection can lead to unnecessary reboots at the end of a playbook that calls the preconfiguration roles. Role parameters are available for avoiding reboots, and playbooks can be extended to unconditionally reboot a system. See bug 2166444 for more information. 4.2. Extended check (Assert) function Be careful when using the extended check (=assert) function of the preconfigure roles. The preconfigure roles can run in an assert mode, in which case they do not modify managed nodes but report the compliance of a node with the applicable SAP notes. If you also use the same control node to modify the system configuration by running the preconfigure roles in normal mode, take extra care to ensure that a "normal" playbook is not accidentally used for checking the system configuration. It is strongly recommended to run the roles on production systems only after testing them on test and QA systems first. 4.3. DNS name resolution Role sap_general_preconfigure fails if the DNS domain is not set on the managed node. If no DNS domain is set on the managed node, which is typically the case on cloud systems, the role sap_general_preconfigure fails in the task Verify that the DNS domain is set . To avoid this, set the role variable sap_domain in the vars section of your playbook or in an inventory file for the managed node, or run the ansible-playbook command with the parameter -e "sap_domain=example.com" (replace example.com with your own DNS domain name).
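A minimal playbook sketch for the sap_domain workaround described above follows; only the variable name sap_domain and the role name sap_general_preconfigure are taken from this document, while the host name and file name are placeholders, so verify the role invocation against your installed collection.

    # sap-prepare.yml (hypothetical file name): set sap_domain so that the
    # "Verify that the DNS domain is set" task does not fail on hosts
    # without a DNS domain, such as many cloud instances.
    - hosts: hanahost01            # placeholder host or group name
      become: true
      vars:
        sap_domain: example.com    # replace with your DNS domain
      roles:
        - sap_general_preconfigure

    # Alternatively, pass the variable on the command line:
    # ansible-playbook sap-prepare.yml -e "sap_domain=example.com"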
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/red_hat_enterprise_linux_system_roles_for_sap/known_issues
3.4. Setting Up Multipathing in the initramfs File System
3.4. Setting Up Multipathing in the initramfs File System You can set up multipathing in the initramfs file system. After configuring multipath, you can rebuild the initramfs file system with the multipath configuration files by executing the dracut command with the following options: If you run multipath from the initramfs file system and you make any changes to the multipath configuration files, you must rebuild the initramfs file system for the changes to take effect.
[ "dracut --force --add multipath --include /etc/multipath /etc/multipath" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/mp_initramfs
Chapter 4. Btrfs
Chapter 4. Btrfs The B-tree file system ( Btrfs ) is a local file system that aims to provide better performance and scalability. Btrfs was introduced in Red Hat Enterprise Linux 6 as a Technology Preview, available on AMD64 and Intel 64 architectures. The Btrfs Technology Preview ended as of Red Hat Enterprise Linux 6.6 and will not be updated in the future. Btrfs will be included in future releases of Red Hat Enterprise Linux 6, but will not be supported in any way. Btrfs Features Several utilities are built in to Btrfs to provide ease of administration for system administrators. These include: Built-in System Rollback File system snapshots make it possible to roll a system back to a prior, known-good state if something goes wrong. Built-in Compression This makes saving space easier. Checksum Functionality This improves error detection. Specific features include integrated LVM operations, such as: dynamic, online addition or removal of new storage devices internal support for RAID across the component devices the ability to use different RAID levels for meta or user data full checksum functionality for all meta and user data.
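For illustration only (Btrfs is unsupported here, as noted above), the snapshot and compression features described in this chapter map to commands along these lines; /dev/sdb and the mount point are placeholders.

    # Create and mount a Btrfs file system on a spare device (illustrative only)
    mkfs.btrfs /dev/sdb
    mkdir -p /mnt/btrfs
    mount /dev/sdb /mnt/btrfs

    # Built-in compression: remount with transparent zlib compression
    mount -o remount,compress=zlib /dev/sdb /mnt/btrfs

    # Built-in system rollback: take a writable snapshot of the top-level subvolume
    btrfs subvolume snapshot /mnt/btrfs /mnt/btrfs/snap-before-update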
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-btrfs
Chapter 7. Resource Constraints
Chapter 7. Resource Constraints You can determine the behavior of a resource in a cluster by configuring constraints for that resource. You can configure the following categories of constraints: location constraints - A location constraint determines which nodes a resource can run on. Location constraints are described in Section 7.1, "Location Constraints" . order constraints - An order constraint determines the order in which the resources run. Order constraints are described in Section 7.2, "Order Constraints" . colocation constraints - A colocation constraint determines where resources will be placed relative to other resources. Colocation constraints are described in Section 7.3, "Colocation of Resources" . As a shorthand for configuring a set of constraints that will locate a set of resources together and ensure that the resources start sequentially and stop in reverse order, Pacemaker supports the concept of resource groups. For information on resource groups, see Section 6.5, "Resource Groups" . 7.1. Location Constraints Location constraints determine which nodes a resource can run on. You can configure location constraints to determine whether a resource will prefer or avoid a specified node. In addition to location constraints, the node on which a resource runs is influenced by the resource-stickiness value for that resource, which determines to what degree a resource prefers to remain on the node where it is currently running. For information on setting the resource-stickiness value, see Section 7.1.5, "Configuring a Resource to Prefer its Current Node" . 7.1.1. Basic Location Constraints You can configure a basic location constraint to specify whether a resource prefers or avoids a node, with an optional score value to indicate the relative degree of preference for the constraint. The following command creates a location constraint for a resource to prefer the specified node or nodes. Note that it is possible to create constraints on a particular resource for more than one node with a single command. The following command creates a location constraint for a resource to avoid the specified node or nodes. Table 7.1, "Simple Location Constraint Options" summarizes the meanings of the options for configuring location constraints in their simplest form. Table 7.1. Simple Location Constraint Options Field Description rsc A resource name node A node's name score Positive integer value to indicate the preference for whether a resource should prefer or avoid a node. INFINITY is the default score value for a resource location constraint. A value of INFINITY for score in a pcs constraint location rsc prefers command indicates that the resource will prefer that node if the node is available, but does not prevent the resource from running on another node if the specified node is unavailable. A value of INFINITY for score in a pcs constraint location rsc avoids command indicates that the resource will never run on that node, even if no other node is available. This is the equivalent of setting a pcs constraint location add command with a score of -INFINITY . The following command creates a location constraint to specify that the resource Webserver prefers node node1 . As of Red Hat Enterprise Linux 7.4, pcs supports regular expressions in location constraints on the command line. These constraints apply to multiple resources based on the regular expression matching resource name. This allows you to configure multiple location constraints with a single command line.
The following command creates a location constraint to specify that resources dummy0 to dummy9 prefer node1 . Since Pacemaker uses POSIX extended regular expressions as documented at http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_04 , you can specify the same constraint with the following command. 7.1.2. Advanced Location Constraints When configuring a location constraint on a node, you can use the resource-discovery option of the pcs constraint location command to indicate a preference for whether Pacemaker should perform resource discovery on this node for the specified resource. Limiting resource discovery to a subset of nodes the resource is physically capable of running on can significantly boost performance when a large set of nodes is present. When pacemaker_remote is in use to expand the node count into the hundreds of nodes range, this option should be considered. The following command shows the format for specifying the resource-discovery option of the pcs constraint location command. Note that id is the constraint id. The meanings of rsc , node , and score are summarized in Table 7.1, "Simple Location Constraint Options" . In this command, a positive value for score corresponds to a basic location constraint that configures a resource to prefer a node, while a negative value for score corresponds to a basic location constraint that configures a resource to avoid a node. As with basic location constraints, you can use regular expressions for resources with these constraints as well. Table 7.2, "Resource Discovery Values" summarizes the meanings of the values you can specify for the resource-discovery option. Table 7.2. Resource Discovery Values Value Description always Always perform resource discovery for the specified resource on this node. This is the default resource-discovery value for a resource location constraint. never Never perform resource discovery for the specified resource on this node. exclusive Perform resource discovery for the specified resource only on this node (and other nodes similarly marked as exclusive ). Multiple location constraints using exclusive discovery for the same resource across different nodes creates a subset of nodes resource-discovery is exclusive to. If a resource is marked for exclusive discovery on one or more nodes, that resource is only allowed to be placed within that subset of nodes. Note that setting the resource-discovery option to never or exclusive allows the possibility for the resource to be active in those locations without the cluster's knowledge. This can lead to the resource being active in more than one location if the service is started outside the cluster's control (for example, by systemd or by an administrator). This can also occur if the resource-discovery property is changed while part of the cluster is down or suffering split-brain, or if the resource-discovery property is changed for a resource and node while the resource is active on that node. For this reason, using this option is appropriate only when you have more than eight nodes and there is a way to guarantee that the resource can run only in a particular location (for example, when the required software is not installed anywhere else). 7.1.3. Using Rules to Determine Resource Location For more complicated location constraints, you can use Pacemaker rules to determine a resource's location. For general information about Pacemaker rules and the properties you can set, see Chapter 11, Pacemaker Rules .
Use the following command to configure a Pacemaker constraint that uses rules. If score is omitted, it defaults to INFINITY. If resource-discovery is omitted, it defaults to always . For information on the resource-discovery option, see Section 7.1.2, "Advanced Location Constraints" . As with basic location constraints, you can use regular expressions for resources with these constraints as well. When using rules to configure location constraints, the value of score can be positive or negative, with a positive value indicating "prefers" and a negative value indicating "avoids". The expression option can be one of the following where duration_options and date_spec_options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon as described in Table 11.5, "Properties of a Date Specification" . defined|not_defined attribute attribute lt|gt|lte|gte|eq|ne [string|integer|version] value date gt|lt date date in-range date to date date in-range date to duration duration_options ... date-spec date_spec_options expression and|or expression ( expression ) The following location constraint configures an expression that is true if now is any time in the year 2018. The following command configures an expression that is true from 9 am to 5 pm, Monday through Friday. Note that the hours value of 16 matches up to 16:59:59, as the numeric value (hour) still matches. The following command configures an expression that is true when there is a full moon on Friday the thirteenth. 7.1.4. Location Constraint Strategy Using any of the location constraints described in Section 7.1.1, "Basic Location Constraints" , Section 7.1.2, "Advanced Location Constraints" , and Section 7.1.3, "Using Rules to Determine Resource Location" , you can configure a general strategy for specifying which nodes a resource can run on: Opt-In Clusters - Configure a cluster in which, by default, no resource can run anywhere and then selectively enable allowed nodes for specific resources. The procedure for configuring an opt-in cluster is described in Section 7.1.4.1, "Configuring an "Opt-In" Cluster" . Opt-Out Clusters - Configure a cluster in which, by default, all resources can run anywhere and then create location constraints for resources that are not allowed to run on specific nodes. The procedure for configuring an opt-out cluster is described in Section 7.1.4.2, "Configuring an "Opt-Out" Cluster" . This is the default Pacemaker strategy. Whether you should choose to configure your cluster as an opt-in or opt-out cluster depends both on your personal preference and the make-up of your cluster. If most of your resources can run on most of the nodes, then an opt-out arrangement is likely to result in a simpler configuration. On the other hand, if most resources can only run on a small subset of nodes an opt-in configuration might be simpler. 7.1.4.1. Configuring an "Opt-In" Cluster To create an opt-in cluster, set the symmetric-cluster cluster property to false to prevent resources from running anywhere by default. Enable nodes for individual resources. The following commands configure location constraints so that the resource Webserver prefers node example-1 , the resource Database prefers node example-2 , and both resources can fail over to node example-3 if their preferred node fails. When configuring location constraints for an opt-in cluster, setting a score of zero allows a resource to run on a node without indicating any preference to prefer or avoid the node. 7.1.4.2.
Configuring an "Opt-Out" Cluster To create an opt-out cluster, set the symmetric-cluster cluster property to true to allow resources to run everywhere by default. The following commands will then yield a configuration that is equivalent to the example in Section 7.1.4.1, "Configuring an "Opt-In" Cluster" . Both resources can fail over to node example-3 if their preferred node fails, since every node has an implicit score of 0. Note that it is not necessary to specify a score of INFINITY in these commands, since that is the default value for the score. 7.1.5. Configuring a Resource to Prefer its Current Node Resources have a resource-stickiness value that you can set as a meta attribute when you create the resource, as described in Section 6.4, "Resource Meta Options" . The resource-stickiness value determines how much a resource wants to remain on the node where it is currently running. Pacemaker considers the resource-stickiness value in conjunction with other settings (for example, the score values of location constraints) to determine whether to move a resource to another node or to leave it in place. By default, a resource is created with a resource-stickiness value of 0. Pacemaker's default behavior when resource-stickiness is set to 0 and there are no location constraints is to move resources so that they are evenly distributed among the cluster nodes. This may result in healthy resources moving more often than you desire. To prevent this behavior, you can set the default resource-stickiness value to 1. This default will apply to all resources in the cluster. This small value can be easily overridden by other constraints that you create, but it is enough to prevent Pacemaker from needlessly moving healthy resources around the cluster. The following command sets the default resource-stickiness value to 1. If the resource-stickiness value is set, then no resources will move to a newly-added node. If resource balancing is desired at that point, you can temporarily set the resource-stickiness value back to 0. Note that if a location constraint score is higher than the resource-stickiness value, the cluster may still move a healthy resource to the node where the location constraint points. For further information about how Pacemaker determines where to place a resource, see Section 9.6, "Utilization and Placement Strategy" .
[ "pcs constraint location rsc prefers node [= score ] [ node [= score ]]", "pcs constraint location rsc avoids node [= score ] [ node [= score ]]", "pcs constraint location Webserver prefers node1", "pcs constraint location 'regexp%dummy[0-9]' prefers node1", "pcs constraint location 'regexp%dummy[[:digit:]]' prefers node1", "pcs constraint location add id rsc node score [resource-discovery= option ]", "pcs constraint location rsc rule [resource-discovery= option ] [role=master|slave] [score= score | score-attribute= attribute ] expression", "pcs constraint location Webserver rule score=INFINITY date-spec years=2018", "pcs constraint location Webserver rule score=INFINITY date-spec hours=\"9-16\" weekdays=\"1-5\"", "pcs constraint location Webserver rule date-spec weekdays=5 monthdays=13 moon=4", "pcs property set symmetric-cluster=false", "pcs constraint location Webserver prefers example-1=200 pcs constraint location Webserver prefers example-3=0 pcs constraint location Database prefers example-2=200 pcs constraint location Database prefers example-3=0", "pcs property set symmetric-cluster=true", "pcs constraint location Webserver prefers example-1=200 pcs constraint location Webserver avoids example-2=INFINITY pcs constraint location Database avoids example-1=INFINITY pcs constraint location Database prefers example-2=200", "pcs resource defaults resource-stickiness=1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-resourceconstraints-HAAR
Data Grid Performance and Sizing Guide
Data Grid Performance and Sizing Guide Red Hat Data Grid 8.4 Plan and size Data Grid deployments Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_performance_and_sizing_guide/index
Chapter 5. Configuring memory on Compute nodes
Chapter 5. Configuring memory on Compute nodes As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC). Use the following features to tune your instances for optimal memory performance: Overallocation : Tune the virtual RAM to physical RAM allocation ratio. Swap : Tune the allocated swap size to handle memory overcommit. Huge pages : Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages). File-backed memory : Use to expand your Compute node memory capacity. SEV : Use to enable your cloud users to create instances that use memory encryption. 5.1. Configuring memory for overallocation When you use memory overcommit ( NovaRAMAllocationRatio >= 1.0), you need to deploy your overcloud with enough swap space to support the allocation ratio. Note If your NovaRAMAllocationRatio parameter is set to < 1 , follow the RHEL recommendations for swap size. For more information, see Recommended system swap space in the RHEL Managing Storage Devices guide. Prerequisites You have calculated the swap size your node requires. For more information, see Calculating swap size . Procedure Copy the /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml file to your environment file directory: Configure the swap size by adding the following parameters to your enable-swap.yaml file: Add the enable_swap.yaml environment file to the stack with your other environment files and deploy the overcloud: 5.2. Calculating reserved host memory on Compute nodes To determine the total amount of RAM to reserve for host processes, you need to allocate enough memory for each of the following: The resources that run on the host, for example, OSD consumes 3 GB of memory. The emulator overhead required to host instances. The hypervisor for each instance. After you calculate the additional demands on memory, use the following formula to help you determine the amount of memory to reserve for host processes on each node: Replace vm_no with the number of instances. Replace avg_instance_size with the average amount of memory each instance can use. Replace overhead with the hypervisor overhead required for each instance. Replace resource1 and all resources up to <resourcen> with the number of a resource type on the node. Replace resource_ram with the amount of RAM each resource of this type requires. 5.3. Calculating swap size The allocated swap size must be large enough to handle any memory overcommit. You can use the following formulas to calculate the swap size your node requires: overcommit_ratio = NovaRAMAllocationRatio - 1 Minimum swap size (MB) = (total_RAM * overcommit_ratio) + RHEL_min_swap Recommended (maximum) swap size (MB) = total_RAM * (overcommit_ratio + percentage_of_RAM_to_use_for_swap) The percentage_of_RAM_to_use_for_swap variable creates a buffer to account for QEMU overhead and any other resources consumed by the operating system or host services. For instance, to use 25% of the available RAM for swap, with 64GB total RAM, and NovaRAMAllocationRatio set to 1 : Recommended (maximum) swap size = 64000 MB * (0 + 0.25) = 16000 MB For information about how to calculate the NovaReservedHostMemory value, see Calculating reserved host memory on Compute nodes . 
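The environment-file parameters and commands referenced in sections 5.1 to 5.3 are not reproduced in this excerpt; the sketch below uses the worked swap example above, with parameter names (swap_size_megabytes, swap_path) assumed from a typical enable-swap.yaml template, so verify them against your copy before deploying.

    # enable-swap.yaml (sketch): swap sized for 64000 MB of RAM with 25% of RAM
    # used for swap, matching the 16000 MB result calculated above.
    parameter_defaults:
      swap_size_megabytes: 16000
      swap_path: /swap

    # Reserved host memory, following the variable definitions in section 5.2
    # (illustrative arithmetic, not a literal listing from the guide):
    #   NovaReservedHostMemory = total_RAM
    #     - ( (vm_no * (avg_instance_size + overhead))
    #         + (resource1 * resource_ram) + ... + (resourcen * resource_ram) )

    # Include the environment file when deploying the overcloud (path is a placeholder):
    openstack overcloud deploy --templates -e /home/stack/templates/enable-swap.yaml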
For information about how to determine the RHEL_min_swap value, see Recommended system swap space in the RHEL Managing Storage Devices guide. 5.4. Configuring huge pages on Compute nodes As a cloud administrator, you can configure Compute nodes to enable instances to request huge pages. Note Configuring huge pages creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Procedure Open your Compute environment file. Configure the amount of huge page memory to reserve on each NUMA node for processes that are not instances: Replace the size value for each node with the size of the allocated huge page. Set to one of the following valid values: 2048 (for 2MB) 1GB Replace the count value for each node with the number of huge pages used by OVS per NUMA node. For example, for 4096 of socket memory used by Open vSwitch, set this to 2. Configure huge pages on the Compute nodes: Note If you configure multiple huge page sizes, you must also mount the huge page folders during first boot. For more information, see Mounting multiple huge page folders during first boot . Optional: To allow instances to allocate 1GB huge pages, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags , to include pdpe1gb : Note CPU feature flags do not need to be configured to allow instances to only request 2 MB huge pages. You can only allocate 1G huge pages to an instance if the host supports 1G huge page allocation. You only need to set NovaLibvirtCPUModelExtraFlags to pdpe1gb when NovaLibvirtCPUMode is set to host-model or custom . If the host supports pdpe1gb , and host-passthrough is used as the NovaLibvirtCPUMode , then you do not need to set pdpe1gb as a NovaLibvirtCPUModelExtraFlags . The pdpe1gb flag is only included in Opteron_G4 and Opteron_G5 CPU models, it is not included in any of the Intel CPU models supported by QEMU. To mitigate for CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws . To avoid loss of performance after applying Meltdown protection, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags , to include +pcid : Tip For more information, see Reducing the performance impact of Meltdown CVE fixes for OpenStack guests with "PCID" CPU feature flag . Add NUMATopologyFilter to the NovaSchedulerEnabledFilters parameter, if not already present. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 5.4.1. Creating a huge pages flavor for instances To enable your cloud users to create instances that use huge pages, you can create a flavor with the hw:mem_page_size extra spec key for launching instances. Prerequisites The Compute node is configured for huge pages. For more information, see Configuring huge pages on Compute nodes . Procedure Create a flavor for instances that require huge pages: To request huge pages, set the hw:mem_page_size property of the flavor to the required size: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the page size by using the hw_mem_page_size set on the image. 
If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. To verify the flavor creates an instance with huge pages, use your new flavor to launch an instance: The Compute scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, then the request will fail with a NoValidHost error. 5.4.2. Mounting multiple huge page folders during first boot You can configure the Compute service (nova) to handle multiple page sizes as part of the first boot process. The first boot process adds the heat template configuration to all nodes the first time you boot the nodes. Subsequent inclusion of these templates, such as updating the overcloud stack, does not run these scripts. Procedure Create a first boot template file, hugepages.yaml , that runs a script to create the mounts for the huge page folders. You can use the OS::TripleO::MultipartMime resource type to send the configuration script: The config script in this template performs the following tasks: Filters the hosts to create the mounts for the huge page folders on, by specifying hostnames that match 'co?mp' . You can update the filter grep pattern for specific computes as required. Masks the default dev-hugepages.mount systemd unit file to enable new mounts to be created using the page size. Ensures that the folders are created first. Creates systemd mount units for each pagesize . Runs systemd daemon-reload after the first loop, to include the newly created unit files. Enables each mount for 2M and 1G pagesizes. You can update this loop to include additional pagesizes, as required. Optional: The /dev folder is automatically bind mounted to the nova_compute and nova_libvirt containers. If you have used a different destination for the huge page mounts, then you need to pass the mounts to the nova_compute and nova_libvirt containers: Register your heat template as the OS::TripleO::NodeUserData resource type in your ~/templates/firstboot.yaml environment file: Important You can only register the NodeUserData resources to one heat template for each resource. Subsequent usage overrides the heat template to use. Add your first boot environment file to the stack with your other environment files and deploy the overcloud: 5.5. Configuring Compute nodes to use file-backed memory for instances You can use file-backed memory to expand your Compute node memory capacity, by allocating files within the libvirt memory backing directory as instance memory. You can configure the amount of host disk that is available for instance memory, and the location on the disk of the instance memory files. The Compute service reports the capacity configured for file-backed memory to the Placement service as the total system memory capacity. This allows the Compute node to host more instances than would normally fit within the system memory. To use file-backed memory for instances, you must enable file-backed memory on the Compute node. Limitations You cannot live migrate instances between Compute nodes that have file-backed memory enabled and Compute nodes that do not have file-backed memory enabled. File-backed memory is not compatible with huge pages. 
Instances that use huge pages cannot start on a Compute node with file-backed memory enabled. Use host aggregates to ensure that instances that use huge pages are not placed on Compute nodes with file-backed memory enabled. File-backed memory is not compatible with memory overcommit. You cannot reserve memory for host processes using NovaReservedHostMemory . When file-backed memory is in use, reserved memory corresponds to disk space not set aside for file-backed memory. File-backed memory is reported to the Placement service as the total system memory, with RAM used as cache memory. Prerequisites NovaRAMAllocationRatio must be set to "1.0" on the node and any host aggregate the node is added to. NovaReservedHostMemory must be set to "0". Procedure Open your Compute environment file. Configure the amount of host disk space, in MiB, to make available for instance RAM, by adding the following parameter to your Compute environment file: Optional: To configure the directory to store the memory backing files, set the QemuMemoryBackingDir parameter in your Compute environment file. If not set, the memory backing directory defaults to /var/lib/libvirt/qemu/ram/ . Note You must locate your backing store in a directory at or above the default directory location, /var/lib/libvirt/qemu/ram/ . You can also change the host disk for the backing store. For more information, see Changing the memory backing directory host disk . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 5.5.1. Changing the memory backing directory host disk You can move the memory backing directory from the default primary disk location to an alternative disk. Procedure Create a file system on the alternative backing device. For example, enter the following command to create an ext4 filesystem on /dev/sdb : Mount the backing device. For example, enter the following command to mount /dev/sdb on the default libvirt memory backing directory: Note The mount point must match the value of the QemuMemoryBackingDir parameter. 5.6. Configuring AMD SEV Compute nodes to provide memory encryption for instances As a cloud administrator, you can provide cloud users the ability to create instances that run on SEV-capable Compute nodes with memory encryption enabled. This feature is available to use from the 2nd Gen AMD EPYCTM 7002 Series ("Rome"). To enable your cloud users to create instances that use memory encryption, you must perform the following tasks: Designate the AMD SEV Compute nodes for memory encryption. Configure the Compute nodes for memory encryption. Deploy the overcloud. Create a flavor or image for launching instances with memory encryption. Tip If the AMD SEV hardware is limited, you can also configure a host aggregate to optimize scheduling on the AMD SEV Compute nodes. To schedule only instances that request memory encryption on the AMD SEV Compute nodes, create a host aggregate of the Compute nodes that have the AMD SEV hardware, and configure the Compute scheduler to place only instances that request memory encryption on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates . 5.6.1. Secure Encrypted Virtualization (SEV) Secure Encrypted Virtualization (SEV), provided by AMD, protects the data in DRAM that a running virtual machine instance is using. SEV encrypts the memory of each instance with a unique key. 
SEV increases security when you use non-volatile memory technology (NVDIMM), because an NVDIMM chip can be physically removed from a system with the data intact, similar to a hard drive. Without encryption, any stored information such as sensitive data, passwords, or secret keys can be compromised. For more information, see the AMD Secure Encrypted Virtualization (SEV) documentation. Limitations of instances with memory encryption You cannot live migrate, or suspend and resume instances with memory encryption. You cannot use PCI passthrough to directly access devices on instances with memory encryption. You cannot use virtio-blk as the boot disk of instances with memory encryption with Red Hat Enterprise Linux (RHEL) kernels earlier than kernel-4.18.0-115.el8 (RHEL-8.1.0). Note You can use virtio-scsi or SATA as the boot disk, or virtio-blk for non-boot disks. The operating system that runs in an encrypted instance must provide SEV support. For more information, see the Red Hat Knowledgebase solution Enabling AMD Secure Encrypted Virtualization in RHEL 8 . Machines that support SEV have a limited number of slots in their memory controller for storing encryption keys. Each running instance with encrypted memory consumes one of these slots. Therefore, the number of instances with memory encryption that can run concurrently is limited to the number of slots in the memory controller. For example, on 1st Gen AMD EPYCTM 7001 Series ("Naples") the limit is 16, and on 2nd Gen AMD EPYCTM 7002 Series ("Rome") the limit is 255. Instances with memory encryption pin pages in RAM. The Compute service cannot swap these pages, therefore you cannot overcommit memory on a Compute node that hosts instances with memory encryption. You cannot use memory encryption with instances that have multiple NUMA nodes. 5.6.2. Designating AMD SEV Compute nodes for memory encryption To designate AMD SEV Compute nodes for instances that use memory encryption, you must create a new role file to configure the AMD SEV role, and configure the bare metal nodes with an AMD SEV resource class to use to tag the Compute nodes for memory encryption. Note The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file that includes the ComputeAMDSEV role, along with any other roles that you need for the overcloud. The following example generates the roles data file roles_data_amd_sev.yaml , which includes the roles Controller and ComputeAMDSEV : Open roles_data_amd_sev.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputeAMDSEV Role name name: Compute name: ComputeAMDSEV description Basic Compute Node role AMD SEV Compute Node role HostnameFormatDefault %stackname%-novacompute-%index% %stackname%-novacomputeamdsev-%index% deprecated_nic_config_name compute.yaml compute-amd-sev.yaml Register the AMD SEV Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . 
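The undercloud commands for the designation steps described in this procedure fall outside this excerpt; a typical sketch follows, where the output path is a placeholder and baremetal.AMD-SEV is an example custom resource class name.

    # Source the undercloud credentials as the stack user
    source ~/stackrc

    # Generate a roles data file that contains the Controller and ComputeAMDSEV roles
    openstack overcloud roles generate Controller ComputeAMDSEV \
      -o /home/stack/templates/roles_data_amd_sev.yaml

    # Later in this procedure, tag each bare-metal node intended for memory
    # encryption with a custom resource class; <node> is the node name or UUID.
    openstack baremetal node set --resource-class baremetal.AMD-SEV <node>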
For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide. Tag each bare metal node that you want to designate for memory encryption with a custom AMD SEV resource class: Replace <node> with the name or ID of the bare metal node. Add the ComputeAMDSEV role to your node definition file, overcloud-baremetal-deploy.yaml , and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes: 1 You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used. For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes . For an example node definition file, see Example node definition file . Run the provisioning command to provision the new nodes for your role: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : If you did not run the provisioning command with the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files: Replace <amd_sev_net_top> with the name of the file that contains the network topology of the ComputeAMDSEV role, for example, compute.yaml to use the default network topology. 5.6.3. Configuring AMD SEV Compute nodes for memory encryption To enable your cloud users to create instances that use memory encryption, you must configure the Compute nodes that have the AMD SEV hardware. Note From RHOSP OSP17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts . The number of devices that can attach to a PCIe port is fewer than instances running on versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware . Prerequisites Your deployment must include a Compute node that runs on AMD hardware capable of supporting SEV, such as an AMD EPYC CPU. You can use the following command to determine if your deployment is SEV-capable: Procedure Open your Compute environment file. 
Optional: Add the following configuration to your Compute environment file to specify the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently: Note The default value of the libvirt/num_memory_encrypted_guests parameter is none . If you do not set a custom value, the AMD SEV Compute nodes do not impose a limit on the number of memory-encrypted instances that the nodes can host concurrently. Instead, the hardware determines the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently, which might cause some memory-encrypted instances to fail to launch. Optional: To specify that all x86_64 images use the q35 machine type by default, add the following configuration to your Compute environment file: If you specify this parameter value, you do not need to set the hw_machine_type property to q35 on every AMD SEV instance image. To ensure that the AMD SEV Compute nodes reserve enough memory for host-level services to function, add 16MB for each potential AMD SEV instance: Configure the kernel parameters for the AMD SEV Compute nodes: Note When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 5.6.4. Creating an image for memory encryption When the overcloud contains AMD SEV Compute nodes, you can create an AMD SEV instance image that your cloud users can use to launch instances that have memory encryption. Note From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts . The number of devices that can attach to a PCIe port is fewer than in previous versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware . Procedure Create a new image for memory encryption: Note If you use an existing image, the image must have the hw_firmware_type property set to uefi . Optional: Add the property hw_mem_encryption=True to the image to enable AMD SEV memory encryption on the image: Tip You can enable memory encryption on the flavor. For more information, see Creating a flavor for memory encryption . Optional: Set the machine type to q35 , if not already set in the Compute node configuration: Optional: To schedule memory-encrypted instances on a SEV-capable host aggregate, add the following trait to the image extra specs: Tip You can also specify this trait on the flavor. For more information, see Creating a flavor for memory encryption . 5.6.5. Creating a flavor for memory encryption When the overcloud contains AMD SEV Compute nodes, you can create one or more AMD SEV flavors that your cloud users can use to launch instances that have memory encryption. Note An AMD SEV flavor is necessary only when the hw_mem_encryption property is not set on an image.
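Before moving on to flavors, a hedged recap of the optional image properties from the image section above, using the example image name amd-sev-image; each property is optional depending on whether you set the equivalent on the flavor or in the Compute node configuration.
# Enable SEV memory encryption on the image (optional if enabled on the flavor instead)
openstack image set --property hw_mem_encryption=True amd-sev-image
# Set the q35 machine type if it is not already the Compute node default
openstack image set --property hw_machine_type=q35 amd-sev-image
# Require scheduling on SEV-capable hosts
openstack image set --property trait:HW_CPU_X86_AMD_SEV=required amd-sev-image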
Procedure Create a flavor for memory encryption: To schedule memory-encrypted instances on a SEV-capable host aggregate, add the following trait to the flavor extra specs: 5.6.6. Launching an instance with memory encryption To verify that you can launch instances on an AMD SEV Compute node with memory encryption enabled, use a memory encryption flavor or image to create an instance. Procedure Create an instance by using an AMD SEV flavor or image. The following example creates an instance by using the flavor created in Creating a flavor for memory encryption and the image created in Creating an image for memory encryption : Log in to the instance as a cloud user. To verify that the instance uses memory encryption, enter the following command from the instance:
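A hedged sketch of the in-guest verification; the exact kernel message wording can vary between RHEL releases.
# Run inside the instance; SEV is active if the kernel reported it at boot
dmesg | grep -i sev
# Expected output resembles:
# AMD Secure Encrypted Virtualization (SEV) active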
[ "cp /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml /home/stack/templates/enable-swap.yaml", "parameter_defaults: swap_size_megabytes: <swap size in MB> swap_path: <full path to location of swap, default: /swap>", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/enable-swap.yaml", "NovaReservedHostMemory = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + (resourcen * resource_ram))", "parameter_defaults: ComputeParameters: NovaReservedHugePages: [\"node:0,size:1GB,count:1\",\"node:1,size:1GB,count:1\"]", "parameter_defaults: ComputeParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32\"", "parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: 'Haswell-noTSX' NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb'", "parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: 'Haswell-noTSX' NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb, +pcid'", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> huge_pages", "openstack flavor set huge_pages --property hw:mem_page_size=<page_size>", "openstack server create --flavor huge_pages --image <image> huge_pages_instance", "heat_template_version: <version> description: > Huge pages configuration resources: userdata: type: OS::Heat::MultipartMime properties: parts: - config: {get_resource: hugepages_config} hugepages_config: type: OS::Heat::SoftwareConfig properties: config: | #!/bin/bash hostname | grep -qiE 'co?mp' || exit 0 systemctl mask dev-hugepages.mount || true for pagesize in 2M 1G;do if ! 
[ -d \"/dev/hugepagesUSD{pagesize}\" ]; then mkdir -p \"/dev/hugepagesUSD{pagesize}\" cat << EOF > /etc/systemd/system/dev-hugepagesUSD{pagesize}.mount [Unit] Description=USD{pagesize} Huge Pages File System Documentation=https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems DefaultDependencies=no Before=sysinit.target ConditionPathExists=/sys/kernel/mm/hugepages ConditionCapability=CAP_SYS_ADMIN ConditionVirtualization=!private-users [Mount] What=hugetlbfs Where=/dev/hugepagesUSD{pagesize} Type=hugetlbfs Options=pagesize=USD{pagesize} [Install] WantedBy = sysinit.target EOF fi done systemctl daemon-reload for pagesize in 2M 1G;do systemctl enable --now dev-hugepagesUSD{pagesize}.mount done outputs: OS::stack_id: value: {get_resource: userdata}", "parameter_defaults NovaComputeOptVolumes: - /opt/dev:/opt/dev NovaLibvirtOptVolumes: - /opt/dev:/opt/dev", "resource_registry: OS::TripleO::NodeUserData: ./hugepages.yaml", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/firstboot.yaml", "parameter_defaults: NovaLibvirtFileBackedMemory: 102400", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "mkfs.ext4 /dev/sdb", "mount /dev/sdb /var/lib/libvirt/qemu/ram", "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_amd_sev.yaml Compute:ComputeAMDSEV Controller", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack baremetal node set --resource-class baremetal.AMD-SEV <node>", "- name: Controller count: 3 - name: Compute count: 3 - name: ComputeAMDSEV count: 1 defaults: resource_class: baremetal.AMD-SEV network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1", "(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml", "(undercloud)USD watch openstack baremetal node list", "parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeAMDSEVNetworkConfigTemplate: /home/stack/templates/nic-configs/<amd_sev_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2", "lscpu | grep sev", "parameter_defaults: ComputeAMDSEVExtraConfig: nova::config::nova_config: libvirt/num_memory_encrypted_guests: value: 15", "parameter_defaults: ComputeAMDSEVParameters: NovaHWMachineType: x86_64=q35", "parameter_defaults: ComputeAMDSEVParameters: NovaReservedHostMemory: <libvirt/num_memory_encrypted_guests * 16>", "parameter_defaults: ComputeAMDSEVParameters: KernelArgs: \"hugepagesz=1GB hugepages=32 default_hugepagesz=1GB mem_encrypt=on kvm_amd.sev=1\"", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_amd_sev.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/<compute_environment_file>.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml", "(overcloud)USD openstack image create ... 
--property hw_firmware_type=uefi amd-sev-image", "(overcloud)USD openstack image set --property hw_mem_encryption=True amd-sev-image", "(overcloud)USD openstack image set --property hw_machine_type=q35 amd-sev-image", "(overcloud)USD openstack image set --property trait:HW_CPU_X86_AMD_SEV=required amd-sev-image", "(overcloud)USD openstack flavor create --vcpus 1 --ram 512 --disk 2 --property hw:mem_encryption=True m1.small-amd-sev", "(overcloud)USD openstack flavor set --property trait:HW_CPU_X86_AMD_SEV=required m1.small-amd-sev", "(overcloud)USD openstack server create --flavor m1.small-amd-sev --image amd-sev-image amd-sev-instance", "dmesg | grep -i sev AMD Secure Encrypted Virtualization (SEV) active" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-memory-on-Compute-nodes
5.10. Migration
5.10. Migration The Red Hat Virtualization Manager uses migration to enforce load balancing policies for a cluster. Virtual machine migration takes place according to the load balancing policy for a cluster and current demands on hosts within a cluster. Migration can also be configured to automatically occur when a host is fenced or moved to maintenance mode. The Red Hat Virtualization Manager first migrates virtual machines with the lowest CPU utilization. This is calculated as a percentage, and does not take into account RAM usage or I/O operations, except as I/O operations affect CPU utilization. If more than one virtual machine has the same CPU usage, the first one to be migrated is the first virtual machine returned by the database query run by the Red Hat Virtualization Manager to determine virtual machine CPU usage. Virtual machine migration has the following limitations by default: A bandwidth limit of 52 MiBps is imposed on each virtual machine migration. A migration will time out after 64 seconds per GB of virtual machine memory. A migration will abort if progress is stalled for 240 seconds. Concurrent outgoing migrations are limited to one per CPU core per host, or 2, whichever is smaller. See https://access.redhat.com/solutions/744423 for more details about tuning migration settings.
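As a small worked example of the default timeout rule above, with an illustrative 16 GB virtual machine; the numbers are hypothetical, not taken from this document.
# Default migration timeout scales with guest memory: 64 seconds per GB
vm_memory_gb=16
echo "migration timeout: $((64 * vm_memory_gb)) seconds"   # 1024 seconds for a 16 GB guest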
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/migration
Chapter 1. Deploying a Red Hat Enterprise Linux 7 image as a virtual machine on Microsoft Azure
Chapter 1. Deploying a Red Hat Enterprise Linux 7 image as a virtual machine on Microsoft Azure You have a number of options for deploying a Red Hat Enterprise Linux (RHEL) 7 image on Azure. This chapter discusses your options for choosing an image and lists or refers to system requirements for your host system and virtual machine (VM). This chapter also provides procedures for creating a custom VM from an ISO image, uploading it to Azure, and launching an Azure VM instance. Important While you can create a custom VM from an ISO image, Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. See the Image Builder Guide for more information. This chapter refers to the Azure documentation in a number of places. For many procedures, see the referenced Azure documentation for additional detail. Note For a list of Red Hat products that you can use securely on Azure, see Red Hat on Microsoft Azure . Prerequisites Sign up for a Red Hat Customer Portal account. Sign up for a Microsoft Azure account. Enable your subscriptions in the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premise systems to Azure with full support from Red Hat. Additional resources Red Hat in the Public Cloud Red Hat Cloud Access Reference Guide Frequently Asked Questions and Recommended Practices for Microsoft Azure 1.1. Red Hat Enterprise Linux image options on Azure The following table lists image choices and notes the differences in the image options. Table 1.1. Image options Image option Subscriptions Sample scenario Considerations Choose to deploy a Red Hat Gold Image. Leverage your existing Red Hat subscriptions. Enable subscriptions through the Red Hat Cloud Access program , and then choose a Red Hat Gold Image on Azure. See the Red Hat Cloud Access Reference Guide for details on Gold Images and how to access them on Azure. The subscription includes the Red Hat product cost; you pay Microsoft for all other instance costs. Red Hat Gold Images are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images. Choose to deploy a custom image that you move to Azure. Leverage your existing Red Hat subscriptions. Enable subscriptions through the Red Hat Cloud Access program , upload your custom image, and attach your subscriptions. The subscription includes the Red Hat product cost; you pay Microsoft for all other instance costs. Custom images that you move to Azure are "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images. Choose to deploy an existing Azure image that includes RHEL. The Azure images include a Red Hat product. Choose a RHEL image when you create a VM using the Azure console, or choose a VM from the Azure Marketplace . You pay Microsoft hourly on a pay-as-you-go model. Such images are called "on-demand." Azure provides support for on-demand images through a support agreement. Red Hat provides updates to the images. Azure makes the updates available through the Red Hat Update Infrastructure (RHUI). Note You can create a custom image for Azure using Red Hat Image Builder. See the Image Builder Guide for more information. The remainder of this chapter includes information and procedures pertaining to Red Hat Enterprise Linux custom images. 
Additional resources Red Hat Gold Images on Microsoft Azure Red Hat Cloud Access program Azure Marketplace Billing options in the Azure Marketplace Red Hat Enterprise Linux Bring-Your-Own-Subscription Gold Images in Azure 1.2. Understanding base images This section includes information on using preconfigured base images and their configuration settings. 1.2.1. Using a custom base image To manually configure a VM, you start with a base (starter) VM image. Once you have created the base VM image, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image. To prepare a Hyper-V cloud image of RHEL, see Prepare a RHEL 7 virtual machine from Hyper-V manager . Additional resources Red Hat Enterprise Linux 1.2.2. Required system packages The procedures in this chapter assume you are using a host system running Red Hat Enterprise Linux. To successfully complete the procedures, your host system must have the following packages installed. Table 1.2. System packages Package Description Command qemu-kvm This package provides the user-level KVM emulator and facilitates communication between hosts and guest VMs. # yum install qemu-kvm libvirt qemu-img This package provides disk management for guest VMs. The qemu-img package is installed as a dependency of the qemu-kvm package. libvirt This package provides the server and host-side libraries for interacting with hypervisors and host systems and the libvirtd daemon that handles the library calls, manages VMs, and controls the hypervisor. Table 1.3. Additional Virtualization Packages Package Description Command virt-install This package provides the virt-install command for creating VMs from the command line. # yum install virt-install libvirt-python virt-manager virt-install libvirt-client libvirt-python This package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API. virt-manager This package provides the virt-manager tool, also known as Virtual Machine Manager (VMM). VMM is a graphical tool for administering VMs. It uses the libvirt-client library as the management API. libvirt-client This package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command line tool to manage and control VMs and hypervisors from the command line or a special virtualization shell. Additional resources Installing Virtualization Packages Manually 1.2.3. Azure VM configuration settings Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures; refer to them if you experience any errors. Table 1.4. VM configuration settings Setting Recommendation ssh ssh must be enabled to provide remote access to your Azure VMs. dhcp The primary virtual adapter should be configured for dhcp (IPv4 only). Swap Space Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent). NIC Choose virtio for the primary virtual network adapter. encryption For custom images, running RHEL 7.5 and later, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure. NBDE is supported only on RHEL 7.5 and later. 1.2.4. 
Creating a base image from an ISO image The following procedure lists the steps and initial configuration requirements for creating a custom ISO image. Once you have configured the image, you can use the image as a template for creating additional VM instances. Procedure Download the latest Red Hat Enterprise Linux 7 Binary DVD ISO image from the Red Hat Customer Portal . Ensure that you have enabled your host machine for virtualization. See the Virtualization Getting Started Guide for information and procedures. Create and start a basic Red Hat Enterprise Linux VM. See the Getting Started with Virtualization Command-line Interface for instructions. If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio . A basic command line sample follows. If you use the VMM application to create your VM, follow the procedure in Getting Started with Virtual Machine Manager , with these caveats: Do not check Immediately Start VM . Change your Memory and Storage Size to your preferred settings. Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM. Review the following additional installation selection and modifications. Select Minimal Install with the standard RHEL option. For Installation Destination , select Custom Storage Configuration . Use the following configuration information to make your selections. Verify at least 500 MB for /boot . For the file system, use xfs, ext4, or ext3 for both boot and root partitions. Remove swap space. Swap space is configured on the physical blade server in Azure by the WALinuxAgent. On the Installation Summary screen, select Network and Host Name . Switch Ethernet to On . When the install starts: Create a root password. Create an administrative user account. When installation is complete, reboot the VM and log in to the root account. Once you are logged in as root , you can configure the image. 1.3. Configuring the base image for Microsoft Azure The base image requires configuration changes to serve as your RHEL 7 VM image in Azure. The following sections provide the additional configuration changes that Azure requires. 1.3.1. Installing Hyper-V device drivers Microsoft provides network and storage device drivers as part of their Linux Integration Services (LIS) for the Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure VM. Use the lsinitrd | grep hv command to verify that the drivers are installed. Procedure Enter the following grep command to determine if the required Hyper-V device drivers are installed. In the example below, all required drivers are installed. If all the drivers are not installed, complete the remaining steps. Note An hv_vmbus driver may exist in the environment. Even if this driver is present, complete the following steps on your VM. Create a file named hv.conf in /etc/hv.conf.d . Add the following driver parameters to the hv.conf file. Note Note the spaces before and after the quotes, for example, add_drivers+=" hv_vmbus " . This ensures that unique drivers are loaded in the event that other Hyper-V drivers exist in the environment. Regenerate the initramfs image. Verification steps Reboot the machine. Run the lsinitrd | grep hv command to verify that the drivers are installed. 1.3.2. 
Making additional configuration changes The VM requires further configuration changes to operate in Azure. Perform the following procedure to make the additional changes. Procedure If necessary, power on the VM. Register the VM and enable the Red Hat Enterprise Linux 7 repository. Stopping and removing cloud-init (if present) Stop the cloud-init service. Remove the cloud-init software. Completing other VM changes Edit the /etc/ssh/sshd_config file and enable password authentication. Set a generic host name. Edit (or create) the /etc/sysconfig/network-scripts/ifcfg-eth0 file. Use only the parameters listed below. Note The ifcfg-eth0 file does not exist on the RHEL 7 DVD ISO image and must be created. Remove all persistent network device rules (if present). Set ssh to start automatically. Modify the kernel boot parameters. Add crashkernel=256M to the start of the GRUB_CMDLINE_LINUX line in the /etc/default/grub file. If crashkernel=auto is present, change it to crashkernel=256M . Add the following lines to the end of the GRUB_CMDLINE_LINUX line (if not present). Remove the following options (if present). Regenerate the grub.cfg file. Install and enable the Windows Azure Linux Agent (WALinuxAgent). Note If you get the error message No package WALinuxAgent available , install the rhel-7-server-extras-rpms repository. Run the # subscription-manager repos --enable=rhel-7-server-extras-rpms command before trying the installation again. Edit the following lines in the /etc/waagent.conf file to configure swap space for provisioned VMs. Set swap space for whatever is appropriate for your provisioned VMs. Preparing to provision Unregister the VM from Red Hat Subscription Manager. Prepare the VM for Azure provisioning by cleaning up the existing provisioning details. Azure reprovisions the VM in Azure. This command generates data loss warnings, which are expected. Clean the shell history and shut down the VM. 1.4. Converting the image to a fixed VHD format All Microsoft Azure VM images must be in a fixed VHD format. The image must be aligned on a 1 MB boundary before it is converted to VHD. This section describes how to convert the image from qcow2 to a fixed VHD format and align the image, if necessary. Once you have converted the image, you can upload it to Azure. Procedure Convert the image from qcow2 to raw format. Create a shell script using the contents below. Run the script. This example uses the name align.sh . If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step. If a value displays, your image is not aligned. Resize the image using the procedures in the Aligning the image section before proceeding to the step. Use the following command to convert the file to a fixed VHD format. The sample uses qemu-img version 2.12.0. Once converted, the VHD file is ready to upload to Azure. Aligning the image Complete the following steps only if the raw file is not aligned. Resize the raw file using the rounded value displayed when you ran the verification script. Convert the raw image file to a VHD format. The sample uses qemu-img version 2.12.0. Once converted, the VHD file is ready to upload to Azure. 1.5. Installing the Azure CLI Complete the following steps to install the Azure command line interface (Azure CLI 2.1) on your host machine. Azure CLI 2.1 is a Python-based utility that creates and manages VMs in Azure. Prerequisites You need to have an account with Microsoft Azure before you can use the Azure CLI. 
The Azure CLI installation requires Python 3.x. Procedure Import the Microsoft repository key. Create a local Azure CLI repository entry. Update the yum package index. Check your Python version ( python --version ) and install Python 3.x, if necessary. Install the Azure CLI. Run the Azure CLI. Additional resources Azure CLI Azure CLI command reference 1.6. Creating resources in Azure Complete the following procedure to create the Azure resources that you need before you can upload the VHD file and create the Azure image. Procedure Enter the following command to authenticate your system with Azure and log in. Note If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page. See Sign in with Azure CLI for more information and options. Create a resource group in an Azure region. Example: Create a storage account. See SKU Types for more information about valid SKU values. Example: Get the storage account connection string. Example: Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account. Example: Create the storage container. Example: Create a virtual network. Example: Additional resources Azure geographies Sign in with Azure CLI Azure Managed Disks Overview SKU Types 1.7. Uploading and creating an Azure image Complete the following steps to upload the VHD file to your container and create an Azure custom image. Note The exported storage connection string does not persist after a system reboot. If any of commands in the following steps fail, export the connection string again. Procedure Upload the VHD file to the storage container; it may take several minutes. To get a list of storage containers, enter the az storage container list command. Example: Get the URL for the uploaded VHD file to use in the following step. Example: Create the Azure custom image. Note The default hypervisor generation of the VM is V1. You can optionally specify a V2 hypervisor generation by including the option --hyper-v-generation V2 . Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information on generation 2 VMs. The command may return the error "Only blobs formatted as VHDs can be imported ." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to VHD . Example: 1.8. Creating and starting the VM in Azure The following steps provide the minimum command options to create a managed-disk Azure VM from the image. See az vm create for additional options. Procedure Enter the following command to create the VM. Note The option --generate-ssh-keys creates a private/public key pair. Private and public key files are created in ~/.ssh on your system. The public key is added to the authorized_keys file on the VM for the user specified by the --admin-username option. See Other authentication methods for additional information. Example: Start an SSH session and log in to the VM. If you see a user prompt, you have successfully deployed your Azure VM. You can now go to the Azure Portal and check the audit logs and properties of your resources. You can manage your VMs directly in this portal. If you are managing multiple VMs, you should use the Azure CLI. The Azure CLI provides a powerful interface to your resources in Azure. Enter the az --help command in the CLI or see the Azure CLI command reference to learn more about the commands you use to manage your VMs in Microsoft Azure. 1.9. 
Other authentication methods While recommended for increased security, using the Azure-generated key pair is not required. The following examples show two methods for SSH authentication. Example 1: These command options provision a new VM without generating a public key file. They allow SSH authentication using a password. USD az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --authentication-type password --admin-username <administrator-name> --admin-password <ssh-password> --image <path-to-image> USD ssh <admin-username>@<public-ip-address> Example 2: These command options provision a new Azure VM and allow SSH authentication using an existing public key file. USD az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --ssh-key-value <path-to-existing-ssh-key> --image <path-to-image> USD ssh -i <path-to-existing-ssh-key> <admin-username>@<public-ip-address> 1.10. Attaching Red Hat subscriptions Complete the following steps to attach the subscriptions you previously enabled through the Red Hat Cloud Access program. Prerequisites You must have enabled your subscriptions. Procedure Register your system. Attach your subscriptions. You can use an activation key to attach subscriptions. Refer to Creating Red Hat Customer Portal Activation Keys . Alternatively, you can manually attach a subscription using the ID of the subscription pool (Pool ID). Refer to Attaching and Removing Subscriptions Through the Command Line . Additional resources Creating Red Hat Customer Portal Activation Keys Attaching and Removing Subscriptions Through the Command Line Using and Configuring Red Hat Subscription Manager
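A minimal, hedged sketch of the registration options described above; <pool_id> is a placeholder for a subscription pool ID from your account.
# Register the VM and automatically attach a matching subscription
subscription-manager register --auto-attach
# Or register first, then attach a specific pool manually:
# subscription-manager register
# subscription-manager list --available
# subscription-manager attach --pool=<pool_id>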
[ "virt-install --name isotest --memory 2048 --vcpus 2 --disk size=8,bus=virtio --location rhel-7.0-x86_64-dvd.iso --os-variant=rhel7.0", "lsinitrd | grep hv", "lsinitrd | grep hv drwxr-xr-x 2 root root 0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/hv -rw-r--r-- 1 root root 31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz -rw-r--r-- 1 root root 25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz -rw-r--r-- 1 root root 9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz", "add_drivers+=\" hv_vmbus \" add_drivers+=\" hv_netvsc \" add_drivers+=\" hv_storvsc \"", "dracut -f -v --regenerate-all", "subscription-manager register --auto-attach", "systemctl stop cloud-init", "yum remove cloud-init", "PasswordAuthentication yes", "hostnamectl set-hostname localhost.localdomain", "DEVICE=\"eth0\" ONBOOT=\"yes\" BOOTPROTO=\"dhcp\" TYPE=\"Ethernet\" USERCTL=\"yes\" PEERDNS=\"yes\" IPV6INIT=\"no\"", "rm -f /etc/udev/rules.d/70-persistent-net.rules rm -f /etc/udev/rules.d/75-persistent-net-generator.rules rm -f /etc/udev/rules.d/80-net-name-slot-rules", "systemctl enable sshd systemctl is-enabled sshd", "earlyprintk=ttyS0 console=ttyS0 rootdelay=300", "rhgb quiet", "grub2-mkconfig -o /boot/grub2/grub.cfg", "yum install WALinuxAgent -y systemctl enable waagent", "Provisioning.DeleteRootPassword=n ResourceDisk.Filesystem=ext4 ResourceDisk.EnableSwap=y ResourceDisk.SwapSizeMB=2048", "subscription-manager unregister", "waagent -force -deprovision", "export HISTSIZE=0 poweroff", "qemu-img convert -f qcow2 -O raw <image-name>.qcow2 <image-name>.raw", "#!/bin/bash MB=USD((1024 * 1024)) size=USD(qemu-img info -f raw --output json \"USD1\" | gawk 'match(USD0, /\"virtual-size\": ([0-9]+),/, val) {print val[1]}') rounded_size=USD(((USDsize/USDMB + 1) * USDMB)) if [ USD((USDsize % USDMB)) -eq 0 ] then echo \"Your image is already aligned. 
You do not need to resize.\" exit 1 fi echo \"rounded size = USDrounded_size\" export rounded_size", "sh align.sh <image-xxx>.raw", "qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image.xxx>.vhd", "qemu-img resize -f raw <image-xxx>.raw <rounded-value>", "qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image.xxx>.vhd", "sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc", "sudo sh -c 'echo -e \"[azure-cli]\\nname=Azure CLI\\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\\nenabled=1\\ngpgcheck=1\\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc\" > /etc/yum.repos.d/azure-cli.repo'", "yum check-update", "sudo yum install python3", "sudo yum install -y azure-cli", "az", "az login", "az group create --name <resource-group> --location <azure-region>", "az group create --name azrhelclirsgrp --location southcentralus { \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp\", \"location\": \"southcentralus\", \"managedBy\": null, \"name\": \"azrhelclirsgrp\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": null }", "az storage account create -l <azure-region> -n <storage-account-name> -g <resource-group> --sku <sku_type>", "az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS { \"accessTier\": null, \"creationTime\": \"2017-04-05T19:10:29.855470+00:00\", \"customDomain\": null, \"encryption\": null, \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact\", \"kind\": \"StorageV2\", \"lastGeoFailoverTime\": null, \"location\": \"southcentralus\", \"name\": \"azrhelclistact\", \"primaryEndpoints\": { \"blob\": \"https://azrhelclistact.blob.core.windows.net/\", \"file\": \"https://azrhelclistact.file.core.windows.net/\", \"queue\": \"https://azrhelclistact.queue.core.windows.net/\", \"table\": \"https://azrhelclistact.table.core.windows.net/\" }, \"primaryLocation\": \"southcentralus\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"secondaryEndpoints\": null, \"secondaryLocation\": null, \"sku\": { \"name\": \"Standard_LRS\", \"tier\": \"Standard\" }, \"statusOfPrimary\": \"available\", \"statusOfSecondary\": null, \"tags\": {}, \"type\": \"Microsoft.Storage/storageAccounts\" }", "az storage account show-connection-string -n <storage-account-name> -g <resource-group>", "[clouduser@localhost]USD az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp { \"connectionString\": \"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\" }", "export AZURE_STORAGE_CONNECTION_STRING=\"<storage-connection-string>\"", "[clouduser@localhost]USD export AZURE_STORAGE_CONNECTION_STRING=\"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\"", "az storage container create -n <container-name>", "[clouduser@localhost]USD az storage container create -n azrhelclistcont { \"created\": true }", "az network vnet create -g <resource group> --name <vnet-name> --subnet-name <subnet-name>", "[clouduser@localhost]USD az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1 { \"newVNet\": { \"addressSpace\": { \"addressPrefixes\": [ \"10.0.0.0/16\" ] }, \"dhcpOptions\": { \"dnsServers\": [] }, \"etag\": \"W/\\\"\\\"\", \"id\": 
\"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1\", \"location\": \"southcentralus\", \"name\": \"azrhelclivnet1\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceGuid\": \"0f25efee-e2a6-4abe-a4e9-817061ee1e79\", \"subnets\": [ { \"addressPrefix\": \"10.0.0.0/24\", \"etag\": \"W/\\\"\\\"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1\", \"ipConfigurations\": null, \"name\": \"azrhelclisubnet1\", \"networkSecurityGroup\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceNavigationLinks\": null, \"routeTable\": null } ], \"tags\": {}, \"type\": \"Microsoft.Network/virtualNetworks\", \"virtualNetworkPeerings\": null } }", "az storage blob upload --account-name <storage-account-name> --container-name <container-name> --type page --file <path-to-vhd> --name <image-name>.vhd", "az storage blob upload --account-name azrhelclistact --container-name azrhelclistcont --type page --file rhel-image-7.vhd --name rhel-image-7.vhd Percent complete: %100.0", "az storage blob url -c <container-name> -n <image-name>.vhd", "az storage blob url -c azrhelclistcont -n rhel-image-7.vhd \"https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-7.vhd\"", "az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux", "az image create -n rhel7 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-7.vhd --os-type linux", "az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --generate-ssh-keys --image <path-to-image>", "[clouduser@localhost]USD az vm create -g azrhelclirsgrp2 -l southcentralus -n rhel-azure-vm-1 --vnet-name azrhelclivnet1 --subnet azrhelclisubnet1 --size Standard_A2 --os-disk-name vm-1-osdisk --admin-username clouduser --generate-ssh-keys --image rhel7 { \"fqdns\": \"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/virtualMachines/rhel-azure-vm-1\", \"location\": \"southcentralus\", \"macAddress\": \"\", \"powerState\": \"VM running\", \"privateIpAddress\": \"10.0.0.4\", \"publicIpAddress\": \"<public-IP-address>\", \"resourceGroup\": \"azrhelclirsgrp2\"", "[clouduser@localhost]USD ssh -i /home/clouduser/.ssh/id_rsa clouduser@<public-IP-address>. The authenticity of host, '<public-IP-address>' can't be established. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '<public-IP-address>' (ECDSA) to the list of known hosts.", "az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --authentication-type password --admin-username <administrator-name> --admin-password <ssh-password> --image <path-to-image>", "ssh <admin-username>@<public-ip-address>", "az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --ssh-key-value <path-to-existing-ssh-key> --image <path-to-image>", "ssh -i <path-to-existing-ssh-key> <admin-username>@<public-ip-address>", "subscription-manager register --auto-attach" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/assembly_deploying-a-rhel-image-as-a-virtual-machine-on-microsoft-azure_cloud-content
Chapter 4. Deploying hosted control planes
Chapter 4. Deploying hosted control planes 4.1. Deploying hosted control planes on AWS A hosted cluster is an OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. To configure hosted control planes on premises, you must install multicluster engine for Kubernetes Operator in a management cluster. By deploying the HyperShift Operator on an existing managed cluster by using the hypershift-addon managed cluster add-on, you can enable that cluster as a management cluster and start to create the hosted cluster. The hypershift-addon managed cluster add-on is enabled by default for the local-cluster managed cluster. You can use the multicluster engine Operator console or the hosted control plane command-line interface (CLI), hcp , to create a hosted cluster. The hosted cluster is automatically imported as a managed cluster. However, you can disable this automatic import feature into multicluster engine Operator . 4.1.1. Preparing to deploy hosted control planes on AWS As you prepare to deploy hosted control planes on Amazon Web Services (AWS), consider the following information: Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster in order for multicluster engine Operator to manage it. Do not use clusters as a hosted cluster name. Run the management cluster and workers on the same platform for hosted control planes. A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster. 4.1.1.1. Prerequisites to configure a management cluster You must have the following prerequisites to configure the management cluster: You have installed the multicluster engine for Kubernetes Operator 2.5 and later on an OpenShift Container Platform cluster. The multicluster engine Operator is automatically installed when you install Red Hat Advanced Cluster Management (RHACM). The multicluster engine Operator can also be installed without RHACM as an Operator from the OpenShift Container Platform OperatorHub. You have at least one managed OpenShift Container Platform cluster for the multicluster engine Operator. The local-cluster is automatically imported in the multicluster engine Operator version 2.5 and later. You can check the status of your hub cluster by running the following command: USD oc get managedclusters local-cluster You have installed the aws command-line interface (CLI) . You have installed the hosted control plane CLI, hcp . Additional resources Configuring Ansible Automation Platform jobs to run on hosted clusters Advanced configuration Enabling the central infrastructure management service Manually enabling the hosted control planes feature Disabling the hosted control planes feature Deploying the SR-IOV Operator for hosted control planes 4.1.2. Creating the Amazon Web Services S3 bucket and S3 OIDC secret Before you can create and manage hosted clusters on Amazon Web Services (AWS), you must create the S3 bucket and S3 OIDC secret. Procedure Create an S3 bucket that has public access to host OIDC discovery documents for your clusters by running the following commands: USD aws s3api create-bucket --bucket <bucket_name> \ 1 --create-bucket-configuration LocationConstraint=<region> \ 2 --region <region> 3 1 Replace <bucket_name> with the name of the S3 bucket you are creating. 
2 3 To create the bucket in a region other than the us-east-1 region, include this line and replace <region> with the region you want to use. To create a bucket in the us-east-1 region, omit this line. USD aws s3api delete-public-access-block --bucket <bucket_name> 1 1 Replace <bucket_name> with the name of the S3 bucket you are creating. USD echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::<bucket_name>/*" 1 } ] }' | envsubst > policy.json 1 Replace <bucket_name> with the name of the S3 bucket you are creating. USD aws s3api put-bucket-policy --bucket <bucket_name> \ 1 --policy file://policy.json 1 Replace <bucket_name> with the name of the S3 bucket you are creating. Note If you are using a Mac computer, you must export the bucket name in order for the policy to work. Create an OIDC S3 secret named hypershift-operator-oidc-provider-s3-credentials for the HyperShift Operator. Save the secret in the local-cluster namespace. See the following table to verify that the secret contains the following fields: Table 4.1. Required fields for the AWS secret Field name Description bucket Contains an S3 bucket with public access to host OIDC discovery documents for your hosted clusters. credentials A reference to a file that contains the credentials of the default profile that can access the bucket. By default, HyperShift only uses the default profile to operate the bucket . region Specifies the region of the S3 bucket. To create an AWS secret, run the following command: USD oc create secret generic <secret_name> \ --from-file=credentials=<path>/.aws/credentials \ --from-literal=bucket=<s3_bucket> \ --from-literal=region=<region> \ -n local-cluster Note Disaster recovery backup for the secret is not automatically enabled. To add the label that enables the hypershift-operator-oidc-provider-s3-credentials secret to be backed up for disaster recovery, run the following command: USD oc label secret hypershift-operator-oidc-provider-s3-credentials \ -n local-cluster cluster.open-cluster-management.io/backup=true 4.1.3. Creating a routable public zone for hosted clusters To access applications in your hosted clusters, you must configure the routable public zone. If the public zone exists, skip this step. Otherwise, the public zone affects the existing functions. Procedure To create a routable public zone for DNS records, enter the following command: USD aws route53 create-hosted-zone \ --name <basedomain> \ 1 --caller-reference USD(whoami)-USD(date --rfc-3339=date) 1 Replace <basedomain> with your base domain, for example, www.example.com . 4.1.4. Creating an AWS IAM role and STS credentials Before creating a hosted cluster on Amazon Web Services (AWS), you must create an AWS IAM role and STS credentials. Procedure Get the Amazon Resource Name (ARN) of your user by running the following command: USD aws sts get-caller-identity --query "Arn" --output text Example output arn:aws:iam::1234567890:user/<aws_username> Use this output as the value for <arn> in the step. Create a JSON file that contains the trust relationship configuration for your role. See the following example: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "<arn>" 1 }, "Action": "sts:AssumeRole" } ] } 1 Replace <arn> with the ARN of your user that you noted in the step. 
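Before creating the role in the next step, a hedged sanity check you can run; trust-policy.json is an assumed file name for the trust relationship JSON shown above.
# Confirm the caller ARN that the trust policy references
aws sts get-caller-identity --query "Arn" --output text
# Verify that the trust relationship file is well-formed JSON
python3 -m json.tool trust-policy.json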
Create the Identity and Access Management (IAM) role by running the following command: USD aws iam create-role \ --role-name <name> \ 1 --assume-role-policy-document file://<file_name>.json \ 2 --query "Role.Arn" 1 Replace <name> with the role name, for example, hcp-cli-role . 2 Replace <file_name> with the name of the JSON file you created in the step. Example output arn:aws:iam::820196288204:role/myrole Create a JSON file named policy.json that contains the following permission policies for your role: { "Version": "2012-10-17", "Statement": [ { "Sid": "EC2", "Effect": "Allow", "Action": [ "ec2:CreateDhcpOptions", "ec2:DeleteSubnet", "ec2:ReplaceRouteTableAssociation", "ec2:DescribeAddresses", "ec2:DescribeInstances", "ec2:DeleteVpcEndpoints", "ec2:CreateNatGateway", "ec2:CreateVpc", "ec2:DescribeDhcpOptions", "ec2:AttachInternetGateway", "ec2:DeleteVpcEndpointServiceConfigurations", "ec2:DeleteRouteTable", "ec2:AssociateRouteTable", "ec2:DescribeInternetGateways", "ec2:DescribeAvailabilityZones", "ec2:CreateRoute", "ec2:CreateInternetGateway", "ec2:RevokeSecurityGroupEgress", "ec2:ModifyVpcAttribute", "ec2:DeleteInternetGateway", "ec2:DescribeVpcEndpointConnections", "ec2:RejectVpcEndpointConnections", "ec2:DescribeRouteTables", "ec2:ReleaseAddress", "ec2:AssociateDhcpOptions", "ec2:TerminateInstances", "ec2:CreateTags", "ec2:DeleteRoute", "ec2:CreateRouteTable", "ec2:DetachInternetGateway", "ec2:DescribeVpcEndpointServiceConfigurations", "ec2:DescribeNatGateways", "ec2:DisassociateRouteTable", "ec2:AllocateAddress", "ec2:DescribeSecurityGroups", "ec2:RevokeSecurityGroupIngress", "ec2:CreateVpcEndpoint", "ec2:DescribeVpcs", "ec2:DeleteSecurityGroup", "ec2:DeleteDhcpOptions", "ec2:DeleteNatGateway", "ec2:DescribeVpcEndpoints", "ec2:DeleteVpc", "ec2:CreateSubnet", "ec2:DescribeSubnets" ], "Resource": "*" }, { "Sid": "ELB", "Effect": "Allow", "Action": [ "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DeleteTargetGroup" ], "Resource": "*" }, { "Sid": "IAMPassRole", "Effect": "Allow", "Action": "iam:PassRole", "Resource": "arn:*:iam::*:role/*-worker-role", "Condition": { "ForAnyValue:StringEqualsIfExists": { "iam:PassedToService": "ec2.amazonaws.com" } } }, { "Sid": "IAM", "Effect": "Allow", "Action": [ "iam:CreateInstanceProfile", "iam:DeleteInstanceProfile", "iam:GetRole", "iam:UpdateAssumeRolePolicy", "iam:GetInstanceProfile", "iam:TagRole", "iam:RemoveRoleFromInstanceProfile", "iam:CreateRole", "iam:DeleteRole", "iam:PutRolePolicy", "iam:AddRoleToInstanceProfile", "iam:CreateOpenIDConnectProvider", "iam:ListOpenIDConnectProviders", "iam:DeleteRolePolicy", "iam:UpdateRole", "iam:DeleteOpenIDConnectProvider", "iam:GetRolePolicy" ], "Resource": "*" }, { "Sid": "Route53", "Effect": "Allow", "Action": [ "route53:ListHostedZonesByVPC", "route53:CreateHostedZone", "route53:ListHostedZones", "route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets", "route53:DeleteHostedZone", "route53:AssociateVPCWithHostedZone", "route53:ListHostedZonesByName" ], "Resource": "*" }, { "Sid": "S3", "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:ListBucket", "s3:DeleteObject", "s3:DeleteBucket" ], "Resource": "*" } ] } Attach the policy.json file to your role by running the following command: USD aws iam put-role-policy \ --role-name <role_name> \ 1 --policy-name <policy_name> \ 2 --policy-document file://policy.json 3 1 Replace <role_name> with the name of your role. 
2 Replace <policy_name> with your policy name. 3 The policy.json file contains the permission policies for your role. Retrieve STS credentials in a JSON file named sts-creds.json by running the following command: USD aws sts get-session-token --output json > sts-creds.json Example sts-creds.json file { "Credentials": { "AccessKeyId": "ASIA1443CE0GN2ATHWJU", "SecretAccessKey": "XFLN7cZ5AP0d66KhyI4gd8Mu0UCQEDN9cfelW1", "SessionToken": "IQoJb3JpZ2luX2VjEEAaCXVzLWVhc3QtMiJHMEUCIDyipkM7oPKBHiGeI0pMnXst1gDLfs/TvfskXseKCbshAiEAnl1l/Html7Iq9AEIqf////KQburfkq4A3TuppHMr/9j1TgCj1z83SO261bHqlJUazKoy7vBFR/a6LHt55iMBqtKPEsIWjBgj/jSdRJI3j4Gyk1//luKDytcfF/tb9YrxDTPLrACS1lqAxSIFZ82I/jDhbDs=", "Expiration": "2025-05-16T04:19:32+00:00" } } 4.1.5. Enabling AWS PrivateLink for hosted control planes To provision hosted control planes on the Amazon Web Services (AWS) with PrivateLink, enable AWS PrivateLink for hosted control planes. Procedure Create an AWS credential secret for the HyperShift Operator and name it hypershift-operator-private-link-credentials . The secret must reside in the managed cluster namespace that is the namespace of the managed cluster being used as the management cluster. If you used local-cluster , create the secret in the local-cluster namespace. See the following table to confirm that the secret contains the required fields: Table 4.2. Required fields for the AWS secret Field name Description Optional or required region Region for use with Private Link Required aws-access-key-id The credential access key id. Required aws-secret-access-key The credential access key secret. Required To create an AWS secret, run the following command: USD oc create secret generic <secret_name> \ --from-literal=aws-access-key-id=<aws_access_key_id> \ --from-literal=aws-secret-access-key=<aws_secret_access_key> \ --from-literal=region=<region> -n local-cluster Note Disaster recovery backup for the secret is not automatically enabled. Run the following command to add the label that enables the hypershift-operator-private-link-credentials secret to be backed up for disaster recovery: USD oc label secret hypershift-operator-private-link-credentials \ -n local-cluster \ cluster.open-cluster-management.io/backup="" 4.1.6. Enabling external DNS for hosted control planes on AWS The control plane and the data plane are separate in hosted control planes. You can configure DNS in two independent areas: Ingress for workloads within the hosted cluster, such as the following domain: *.apps.service-consumer-domain.com . Ingress for service endpoints within the management cluster, such as API or OAuth endpoints through the service provider domain: *.service-provider-domain.com . The input for hostedCluster.spec.dns manages the ingress for workloads within the hosted cluster. The input for hostedCluster.spec.services.servicePublishingStrategy.route.hostname manages the ingress for service endpoints within the management cluster. External DNS creates name records for hosted cluster Services that specify a publishing type of LoadBalancer or Route and provide a hostname for that publishing type. For hosted clusters with Private or PublicAndPrivate endpoint access types, only the APIServer and OAuth services support hostnames. For Private hosted clusters, the DNS record resolves to a private IP address of a Virtual Private Cloud (VPC) endpoint in your VPC. 
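Before moving on to the service publishing details, a hedged check that the PrivateLink credential secret from the previous section exists; this assumes oc is already logged in to the management cluster.
# List the HyperShift Operator PrivateLink secret in the local-cluster namespace
oc get secret hypershift-operator-private-link-credentials -n local-cluster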
A hosted control plane exposes the following services: APIServer OIDC You can expose these services by using the servicePublishingStrategy field in the HostedCluster specification. By default, for the LoadBalancer and Route types of servicePublishingStrategy , you can publish the service in one of the following ways: By using the hostname of the load balancer that is in the status of the Service with the LoadBalancer type. By using the status.host field of the Route resource. However, when you deploy hosted control planes in a managed service context, those methods can expose the ingress subdomain of the underlying management cluster and limit options for the management cluster lifecycle and disaster recovery. When a DNS indirection is layered on the LoadBalancer and Route publishing types, a managed service operator can publish all public hosted cluster services by using a service-level domain. This architecture allows remapping of the DNS name to a new LoadBalancer or Route and does not expose the ingress domain of the management cluster. Hosted control planes uses external DNS to achieve that indirection layer. You can deploy external-dns alongside the HyperShift Operator in the hypershift namespace of the management cluster. External DNS watches for Services or Routes that have the external-dns.alpha.kubernetes.io/hostname annotation. That annotation is used to create a DNS record that points to the Service , such as an A record, or the Route , such as a CNAME record. You can use external DNS in cloud environments only. For other environments, you need to manually configure DNS and services. For more information about external DNS, see external DNS . 4.1.6.1. Prerequisites Before you can set up external DNS for hosted control planes on Amazon Web Services (AWS), you must meet the following prerequisites: You created an external public domain. You have access to the AWS Route53 Management console. You enabled AWS PrivateLink for hosted control planes. 4.1.6.2. Setting up external DNS for hosted control planes You can provision hosted control planes with external DNS or service-level DNS. Create an Amazon Web Services (AWS) credential secret for the HyperShift Operator and name it hypershift-operator-external-dns-credentials in the local-cluster namespace. See the following table to verify that the secret has the required fields: Table 4.3. Required fields for the AWS secret Field name Description Optional or required provider The DNS provider that manages the service-level DNS zone. Required domain-filter The service-level domain. Required credentials The credential file that supports all external DNS types. Optional when you use AWS keys aws-access-key-id The credential access key id. Optional when you use the AWS DNS service aws-secret-access-key The credential access key secret. Optional when you use the AWS DNS service To create an AWS secret, run the following command: USD oc create secret generic <secret_name> \ --from-literal=provider=aws \ --from-literal=domain-filter=<domain_name> \ --from-file=credentials=<path_to_aws_credentials_file> -n local-cluster Note Disaster recovery backup for the secret is not automatically enabled. To back up the secret for disaster recovery, add the backup label to the hypershift-operator-external-dns-credentials secret by entering the following command: USD oc label secret hypershift-operator-external-dns-credentials \ -n local-cluster \ cluster.open-cluster-management.io/backup="" 4.1.6.3.
Creating the public DNS hosted zone The External DNS Operator uses the public DNS hosted zone to create your public hosted cluster. You can create the public DNS hosted zone to use as the external DNS domain-filter. Complete the following steps in the AWS Route 53 management console. Procedure In the Route 53 management console, click Create hosted zone . On the Hosted zone configuration page, type a domain name, verify that Publish hosted zone is selected as the type, and click Create hosted zone . After the zone is created, on the Records tab, note the values in the Value/Route traffic to column. In the main domain, create an NS record to redirect the DNS requests to the delegated zone. In the Value field, enter the values that you noted in the step. Click Create records . Verify that the DNS hosted zone is working by creating a test entry in the new subzone and testing it with a dig command, such as in the following example: USD dig +short test.user-dest-public.aws.kerberos.com Example output 192.168.1.1 To create a hosted cluster that sets the hostname for the LoadBalancer and Route services, enter the following command: USD hcp create cluster aws --name=<hosted_cluster_name> \ --endpoint-access=PublicAndPrivate \ --external-dns-domain=<public_hosted_zone> ... 1 1 Replace <public_hosted_zone> with the public hosted zone that you created. Example services block for the hosted cluster platform: aws: endpointAccess: PublicAndPrivate ... services: - service: APIServer servicePublishingStrategy: route: hostname: api-example.service-provider-domain.com type: Route - service: OAuthServer servicePublishingStrategy: route: hostname: oauth-example.service-provider-domain.com type: Route - service: Konnectivity servicePublishingStrategy: type: Route - service: Ignition servicePublishingStrategy: type: Route The Control Plane Operator creates the Services and Routes resources and annotates them with the external-dns.alpha.kubernetes.io/hostname annotation. For Services and Routes , the Control Plane Operator uses a value of the hostname parameter in the servicePublishingStrategy field for the service endpoints. To create the DNS records, you can use a mechanism, such as the external-dns deployment. You can configure service-level DNS indirection for public services only. You cannot set hostname for private services because they use the hypershift.local private zone. The following table shows when it is valid to set hostname for a service and endpoint combinations: Table 4.4. Service and endpoint combinations to set hostname Service Public PublicAndPrivate Private APIServer Y Y N OAuthServer Y Y N Konnectivity Y N N Ignition Y N N 4.1.6.4. Creating a hosted cluster by using the external DNS on AWS To create a hosted cluster by using the PublicAndPrivate or Public publishing strategy on Amazon Web Services (AWS), you must have the following artifacts configured in your management cluster: The public DNS hosted zone The External DNS Operator The HyperShift Operator You can deploy a hosted cluster, by using the hcp command-line interface (CLI). 
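Before you run the following procedure, you can optionally confirm that the external DNS credential secret that you created earlier is present; this check assumes the secret name and namespace used in the setup steps above:
USD oc get secret hypershift-operator-external-dns-credentials -n local-cluster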
Procedure To access your management cluster, enter the following command: USD export KUBECONFIG=<path_to_management_cluster_kubeconfig> Verify that the External DNS Operator is running by entering the following command: USD oc get pod -n hypershift -lapp=external-dns Example output NAME READY STATUS RESTARTS AGE external-dns-7c89788c69-rn8gp 1/1 Running 0 40s To create a hosted cluster by using external DNS, enter the following command: USD hcp create cluster aws \ --role-arn <arn_role> \ 1 --instance-type <instance_type> \ 2 --region <region> \ 3 --auto-repair \ --generate-ssh \ --name <hosted_cluster_name> \ 4 --namespace clusters \ --base-domain <service_consumer_domain> \ 5 --node-pool-replicas <node_replica_count> \ 6 --pull-secret <path_to_your_pull_secret> \ 7 --release-image quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \ 8 --external-dns-domain=<service_provider_domain> \ 9 --endpoint-access=PublicAndPrivate 10 --sts-creds <path_to_sts_credential_file> 11 1 Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole . 2 Specify the instance type, for example, m6i.xlarge . 3 Specify the AWS region, for example, us-east-1 . 4 Specify your hosted cluster name, for example, my-external-aws . 5 Specify the public hosted zone that the service consumer owns, for example, service-consumer-domain.com . 6 Specify the node replica count, for example, 2 . 7 Specify the path to your pull secret file. 8 Specify the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . 9 Specify the public hosted zone that the service provider owns, for example, service-provider-domain.com . 10 Set as PublicAndPrivate . You can use external DNS with Public or PublicAndPrivate configurations only. 11 Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json . 4.1.7. Creating a hosted cluster on AWS You can create a hosted cluster on Amazon Web Services (AWS) by using the hcp command-line interface (CLI). By default for hosted control planes on Amazon Web Services (AWS), you use an AMD64 hosted cluster. However, you can enable hosted control planes to run on an ARM64 hosted cluster. For more information, see "Running hosted clusters on an ARM64 architecture". For compatible combinations of node pools and hosted clusters, see the following table: Table 4.5. Compatible architectures for node pools and hosted clusters Hosted cluster Node pools AMD64 AMD64 or ARM64 ARM64 ARM64 or AMD64 Prerequisites You have set up the hosted control plane CLI, hcp . You have enabled the local-cluster managed cluster as the management cluster. You created an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials. Procedure To create a hosted cluster on AWS, run the following command: USD hcp create cluster aws \ --name <hosted_cluster_name> \ 1 --infra-id <infra_id> \ 2 --base-domain <basedomain> \ 3 --sts-creds <path_to_sts_credential_file> \ 4 --pull-secret <path_to_pull_secret> \ 5 --region <region> \ 6 --generate-ssh \ --node-pool-replicas <node_pool_replica_count> \ 7 --namespace <hosted_cluster_namespace> \ 8 --role-arn <role_name> \ 9 --render-into <file_name>.yaml 10 1 Specify the name of your hosted cluster, for instance, example . 2 Specify your infrastructure name. You must provide the same value for <hosted_cluster_name> and <infra_id> . Otherwise the cluster might not appear correctly in the multicluster engine for Kubernetes Operator console. 
3 Specify your base domain, for example, example.com . 4 Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json . 5 Specify the path to your pull secret, for example, /user/name/pullsecret . 6 Specify the AWS region name, for example, us-east-1 . 7 Specify the node pool replica count, for example, 3 . 8 By default, all HostedCluster and NodePool custom resources are created in the clusters namespace. You can use the --namespace <namespace> parameter, to create the HostedCluster and NodePool custom resources in a specific namespace. 9 Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole . 10 If you want to indicate whether the EC2 instance runs on shared or single tenant hardware, include this field. The --render-into flag renders Kubernetes resources into the YAML file that you specify in this field. Then, continue to the step to edit the YAML file. If you included the --render-into flag in the command, edit the specified YAML file. Edit the NodePool specification in the YAML file to indicate whether the EC2 instance should run on shared or single-tenant hardware, similar to the following example: Example YAML file apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: name: <nodepool_name> 1 spec: platform: aws: placement: tenancy: "default" 2 1 Specify the name of the NodePool resource. 2 Specify a valid value for tenancy: "default" , "dedicated" , or "host" . Use "default" when node pool instances run on shared hardware. Use "dedicated" when each node pool instance runs on single-tenant hardware. Use "host" when node pool instances run on your pre-allocated dedicated hosts. Verification Verify the status of your hosted cluster to check that the value of AVAILABLE is True . Run the following command: USD oc get hostedclusters -n <hosted_cluster_namespace> Get a list of your node pools by running the following command: USD oc get nodepools --namespace <hosted_cluster_namespace> Additional resources Running hosted clusters on an ARM64 architecture 4.1.7.1. Accessing a hosted cluster on AWS by using the kubeadmin credentials After creating a hosted cluster on Amazon Web Services (AWS), you can access a hosted cluster by getting the kubeconfig file, access secrets, and the kubeadmin credentials. The hosted cluster namespace contains hosted cluster resources and the access secrets. The hosted control plane runs in the hosted control plane namespace. The secret name formats are as follows: The kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig . For example, clusters-hypershift-demo-admin-kubeconfig . The kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password . For example, clusters-hypershift-demo-kubeadmin-password . Note The kubeadmin password secret is Base64-encoded and the kubeconfig secret contains a Base64-encoded kubeconfig configuration. You must decode the Base64-encoded kubeconfig configuration and save it into a <hosted_cluster_name>.kubeconfig file. Procedure Use your <hosted_cluster_name>.kubeconfig file that contains the decoded kubeconfig configuration to access the hosted cluster. Enter the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes You must decode the kubeadmin password secret to log in to the API server or the console of the hosted cluster. 4.1.7.2. Accessing a hosted cluster on AWS by using the hcp CLI You can access the hosted cluster by using the hcp command-line interface (CLI). 
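If you also need the kubeadmin password that is mentioned in the previous section, you can decode it directly from its secret. The following sketch uses the example secret name shown above and assumes that the password is stored under the password data key:
USD oc get secret -n clusters clusters-hypershift-demo-kubeadmin-password \
  -o jsonpath='{.data.password}' | base64 --decode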
Procedure Generate the kubeconfig file by entering the following command: USD hcp create kubeconfig --namespace <hosted_cluster_namespace> \ --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig After you save the kubeconfig file, access the hosted cluster by entering the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes 4.1.8. Creating a hosted cluster in multiple zones on AWS You can create a hosted cluster in multiple zones on Amazon Web Services (AWS) by using the hcp command-line interface (CLI). Prerequisites You created an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials. Procedure Create a hosted cluster in multiple zones on AWS by running the following command: USD hcp create cluster aws \ --name <hosted_cluster_name> \ 1 --node-pool-replicas=<node_pool_replica_count> \ 2 --base-domain <basedomain> \ 3 --pull-secret <path_to_pull_secret> \ 4 --role-arn <arn_role> \ 5 --region <region> \ 6 --zones <zones> \ 7 --sts-creds <path_to_sts_credential_file> 8 1 Specify the name of your hosted cluster, for instance, example. 2 Specify the node pool replica count, for example, 2. 3 Specify your base domain, for example, example.com. 4 Specify the path to your pull secret, for example, /user/name/pullsecret. 5 Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole. 6 Specify the AWS region name, for example, us-east-1. 7 Specify availability zones within your AWS region, for example, us-east-1a and us-east-1b. 8 Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json. For each specified zone, the following infrastructure is created: Public subnet Private subnet NAT gateway Private route table A public route table is shared across the public subnets. One NodePool resource is created for each zone. The node pool name is suffixed by the zone name. The private subnet for each zone is set in spec.platform.aws.subnet.id. 4.1.8.1. Creating a hosted cluster by providing AWS STS credentials When you create a hosted cluster by using the hcp create cluster aws command, you must provide Amazon Web Services (AWS) account credentials that have permissions to create infrastructure resources for your hosted cluster. Infrastructure resources include the following examples: Virtual Private Cloud (VPC) Subnets Network address translation (NAT) gateways You can provide the AWS credentials in either of the following ways: The AWS Security Token Service (STS) credentials The AWS cloud provider secret from multicluster engine Operator Procedure To create a hosted cluster on AWS by providing AWS STS credentials, enter the following command: USD hcp create cluster aws \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <node_pool_replica_count> \ 2 --base-domain <basedomain> \ 3 --pull-secret <path_to_pull_secret> \ 4 --sts-creds <path_to_sts_credential_file> \ 5 --region <region> \ 6 --role-arn <arn_role> 7 1 Specify the name of your hosted cluster, for instance, example. 2 Specify the node pool replica count, for example, 2. 3 Specify your base domain, for example, example.com. 4 Specify the path to your pull secret, for example, /user/name/pullsecret. 5 Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json. 6 Specify the AWS region name, for example, us-east-1. 7 Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole. 4.1.9.
Running hosted clusters on an ARM64 architecture By default for hosted control planes on Amazon Web Services (AWS), you use an AMD64 hosted cluster. However, you can enable hosted control planes to run on an ARM64 hosted cluster. For compatible combinations of node pools and hosted clusters, see the following table: Table 4.6. Compatible architectures for node pools and hosted clusters Hosted cluster Node pools AMD64 AMD64 or ARM64 ARM64 ARM64 or AMD64 4.1.9.1. Creating a hosted cluster on an ARM64 OpenShift Container Platform cluster You can run a hosted cluster on an ARM64 OpenShift Container Platform cluster for Amazon Web Services (AWS) by overriding the default release image with a multi-architecture release image. If you do not use a multi-architecture release image, the compute nodes in the node pool are not created and reconciliation of the node pool stops until you either use a multi-architecture release image in the hosted cluster or update the NodePool custom resource based on the release image. Prerequisites You must have an OpenShift Container Platform cluster with a 64-bit ARM infrastructure that is installed on AWS. For more information, see Create an OpenShift Container Platform Cluster: AWS (ARM). You must create an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials. For more information, see "Creating an AWS IAM role and STS credentials". Procedure Create a hosted cluster on an ARM64 OpenShift Container Platform cluster by entering the following command: USD hcp create cluster aws \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <node_pool_replica_count> \ 2 --base-domain <basedomain> \ 3 --pull-secret <path_to_pull_secret> \ 4 --sts-creds <path_to_sts_credential_file> \ 5 --region <region> \ 6 --release-image quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \ 7 --role-arn <role_name> 8 1 Specify the name of your hosted cluster, for instance, example. 2 Specify the node pool replica count, for example, 3. 3 Specify your base domain, for example, example.com. 4 Specify the path to your pull secret, for example, /user/name/pullsecret. 5 Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json. 6 Specify the AWS region name, for example, us-east-1. 7 Specify the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see "Extracting the OpenShift Container Platform release image digest". 8 Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole. 4.1.9.2. Creating an ARM or AMD NodePool object on AWS hosted clusters You can schedule application workloads, that is, NodePool objects, on 64-bit ARM and AMD from the same hosted control plane. You can define the arch field in the NodePool specification to set the required processor architecture for the NodePool object. The valid values for the arch field are as follows: arm64 amd64 Prerequisites You must have a multi-architecture release image for the HostedCluster custom resource to use. You can access multi-architecture nightly images.
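For reference, the following fragment sketches where the arch field sits in a NodePool specification; it is not a complete NodePool definition, and the name, namespace, and replica count are placeholders:
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-arm64
  namespace: clusters
spec:
  arch: arm64  # or amd64
  replicas: 3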
Procedure Add an ARM or AMD NodePool object to the hosted cluster on AWS by running the following command: USD hcp create nodepool aws \ --cluster-name <hosted_cluster_name> \ 1 --name <node_pool_name> \ 2 --node-count <node_pool_replica_count> \ 3 --arch <architecture> 4 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the node pool name. 3 Specify the node pool replica count, for example, 3 . 4 Specify the architecture type, such as arm64 or amd64 . If you do not specify a value for the --arch flag, the amd64 value is used by default. Additional resources Extracting the OpenShift Container Platform release image digest 4.1.10. Creating a private hosted cluster on AWS After you enable the local-cluster as the hosting cluster, you can deploy a hosted cluster or a private hosted cluster on Amazon Web Services (AWS). By default, hosted clusters are publicly accessible through public DNS and the default router for the management cluster. For private clusters on AWS, all communication with the hosted cluster occurs over AWS PrivateLink. Prerequisites You enabled AWS PrivateLink. For more information, see "Enabling AWS PrivateLink". You created an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials. For more information, see "Creating an AWS IAM role and STS credentials" and "Identity and Access Management (IAM) permissions". You configured a bastion instance on AWS . Procedure Create a private hosted cluster on AWS by entering the following command: USD hcp create cluster aws \ --name <hosted_cluster_name> \ 1 --node-pool-replicas=<node_pool_replica_count> \ 2 --base-domain <basedomain> \ 3 --pull-secret <path_to_pull_secret> \ 4 --sts-creds <path_to_sts_credential_file> \ 5 --region <region> \ 6 --endpoint-access Private \ 7 --role-arn <role_name> 8 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the node pool replica count, for example, 3 . 3 Specify your base domain, for example, example.com . 4 Specify the path to your pull secret, for example, /user/name/pullsecret . 5 Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json . 6 Specify the AWS region name, for example, us-east-1 . 7 Defines whether a cluster is public or private. 8 Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole . For more information about ARN roles, see "Identity and Access Management (IAM) permissions". The following API endpoints for the hosted cluster are accessible through a private DNS zone: api.<hosted_cluster_name>.hypershift.local *.apps.<hosted_cluster_name>.hypershift.local 4.1.10.1. Accessing a private management cluster on AWS Additional resources You can access your private management cluster by using the command-line interface (CLI). 
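A quick way to confirm that the private API endpoint listed above resolves from a host inside the VPC, such as the bastion instance, is a lookup like the following sketch:
USD dig +short api.<hosted_cluster_name>.hypershift.local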
Procedure Find the private IPs of nodes by entering the following command: USD aws ec2 describe-instances \ --filter="Name=tag:kubernetes.io/cluster/<infra_id>,Values=owned" \ | jq '.Reservations[] | .Instances[] | select(.PublicDnsName=="") \ | .PrivateIpAddress' Create a kubeconfig file for the hosted cluster that you can copy to a node by entering the following command: USD hcp create kubeconfig > <hosted_cluster_kubeconfig> To SSH into one of the nodes through the bastion, enter the following command: USD ssh -o ProxyCommand="ssh ec2-user@<bastion_ip> \ -W %h:%p" core@<node_ip> From the SSH shell, copy the kubeconfig file contents to a file on the node by entering the following command: USD mv <path_to_kubeconfig_file> <new_file_name> Export the kubeconfig file by entering the following command: USD export KUBECONFIG=<path_to_kubeconfig_file> Observe the hosted cluster status by entering the following command: USD oc get clusteroperators clusterversion 4.2. Deploying hosted control planes on bare metal You can deploy hosted control planes by configuring a cluster to function as a management cluster. The management cluster is the OpenShift Container Platform cluster where the control planes are hosted. In some contexts, the management cluster is also known as the hosting cluster. Note The management cluster is not the same thing as the managed cluster. A managed cluster is a cluster that the hub cluster manages. The hosted control planes feature is enabled by default. The multicluster engine Operator supports only the default local-cluster , which is a hub cluster that is managed, and the hub cluster as the management cluster. If you have Red Hat Advanced Cluster Management installed, you can use the managed hub cluster, also known as the local-cluster , as the management cluster. A hosted cluster is an OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command line interface, hcp , to create a hosted cluster. The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see Disabling the automatic import of hosted clusters into multicluster engine Operator . 4.2.1. Preparing to deploy hosted control planes on bare metal As you prepare to deploy hosted control planes on bare metal, consider the following information: Run the management cluster and workers on the same platform for hosted control planes. All bare metal hosts require a manual start with a Discovery Image ISO that the central infrastructure management provides. You can start the hosts manually or through automation by using Cluster-Baremetal-Operator . After each host starts, it runs an Agent process to discover the host details and complete the installation. An Agent custom resource represents each host. When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using logical volume manager storage". 4.2.1.1. 
Prerequisites to configure a management cluster You need the multicluster engine for Kubernetes Operator 2.2 and later installed on an OpenShift Container Platform cluster. You can install multicluster engine Operator as an Operator from the OpenShift Container Platform OperatorHub. The multicluster engine Operator must have at least one managed OpenShift Container Platform cluster. The local-cluster is automatically imported in multicluster engine Operator 2.2 and later. For more information about the local-cluster , see Advanced configuration in the Red Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command: USD oc get managedclusters local-cluster You must add the topology.kubernetes.io/zone label to your bare-metal hosts on your management cluster. Ensure that each host has a unique value for topology.kubernetes.io/zone . Otherwise, all of the hosted control plane pods are scheduled on a single node, causing a single point of failure. To provision hosted control planes on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see Enabling the central infrastructure management service . You need to install the hosted control plane command line interface. Additional resources Advanced configuration Enabling the central infrastructure management service 4.2.1.2. Bare metal firewall, port, and service requirements You must meet the firewall, port, and service requirements so that ports can communicate between the management cluster, the control plane, and hosted clusters. Note Services run on their default ports. However, if you use the NodePort publishing strategy, services run on the port that is assigned by the NodePort service. Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary. For production deployments, use a load balancer to simplify access through a single IP address. If your hub cluster has a proxy configuration, ensure that it can reach the hosted cluster API endpoint by adding all hosted cluster API endpoints to the noProxy field on the Proxy object. For more information, see "Configuring the cluster-wide proxy". A hosted control plane exposes the following services on bare metal: APIServer The APIServer service runs on port 6443 by default and requires ingress access for communication between the control plane components. If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses. OAuthServer The OAuthServer service runs on port 443 by default when you use the route and ingress to expose the service. If you use the NodePort publishing strategy, use a firewall rule for the OAuthServer service. Konnectivity The Konnectivity service runs on port 443 by default when you use the route and ingress to expose the service. The Konnectivity agent establishes a reverse tunnel to allow the control plane to access the network for the hosted cluster. The agent uses egress to connect to the Konnectivity server. The server is exposed by using either a route on port 443 or a manually assigned NodePort . If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443. If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes. 
Ignition The Ignition service runs on port 443 by default when you use the route and ingress to expose the service. If you use the NodePort publishing strategy, use a firewall rule for the Ignition service. You do not need the following services on bare metal: OVNSbDb OIDC Additional resources Configuring the cluster-wide proxy 4.2.1.3. Bare metal infrastructure requirements The Agent platform does not create any infrastructure, but it does have the following requirements for infrastructure: Agents: An Agent represents a host that is booted with a discovery image and is ready to be provisioned as an OpenShift Container Platform node. DNS: The API and ingress endpoints must be routable. Additional resources Recommended etcd practices Persistent storage using LVM Storage Disabling the automatic import of hosted clusters into multicluster engine Operator Enabling or disabling the hosted control planes feature Configuring Ansible Automation Platform jobs to run on hosted clusters 4.2.2. DNS configurations on bare metal The API Server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<base_domain> that points to destination where the API Server can be reached. The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods. Example DNS configuration api.example.krnl.es. IN A 192.168.122.20 api.example.krnl.es. IN A 192.168.122.21 api.example.krnl.es. IN A 192.168.122.22 api-int.example.krnl.es. IN A 192.168.122.20 api-int.example.krnl.es. IN A 192.168.122.21 api-int.example.krnl.es. IN A 192.168.122.22 `*`.apps.example.krnl.es. IN A 192.168.122.23 If you are configuring DNS for a disconnected environment on an IPv6 network, the configuration looks like the following example. Example DNS configuration for an IPv6 network api.example.krnl.es. IN A 2620:52:0:1306::5 api.example.krnl.es. IN A 2620:52:0:1306::6 api.example.krnl.es. IN A 2620:52:0:1306::7 api-int.example.krnl.es. IN A 2620:52:0:1306::5 api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. IN A 2620:52:0:1306::10 If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6. Example DNS configuration for a dual stack network host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9] 4.2.3. 
Creating a hosted cluster on bare metal When you create a hosted cluster with the Agent platform, HyperShift installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on bare metal or import one. As you create a hosted cluster, keep the following guidelines in mind: Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster in order for multicluster engine Operator to manage it. Do not use clusters as a hosted cluster name. A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster. The most common service publishing strategy is to expose services through a load balancer. That strategy is the preferred method for exposing the Kubernetes API server. If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the servicePublishingStrategy information in the HostedCluster custom resource. Procedure Create the hosted control plane namespace by entering the following command: USD oc create ns <hosted_cluster_namespace>-<hosted_cluster_name> Replace <hosted_cluster_namespace> with your hosted cluster namespace name, for example, clusters . Replace <hosted_cluster_name> with your hosted cluster name. Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending PVCs. Run the following command: USD hcp create cluster agent \ --name=<hosted_cluster_name> \ 1 --pull-secret=<path_to_pull_secret> \ 2 --agent-namespace=<hosted_control_plane_namespace> \ 3 --base-domain=<basedomain> \ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \ 5 --etcd-storage-class=<etcd_storage_class> \ 6 --ssh-key <path_to_ssh_public_key> \ 7 --namespace <hosted_cluster_namespace> \ 8 --control-plane-availability-policy HighlyAvailable \ 9 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \ 10 --node-pool-replicas <node_pool_replica_count> 11 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the path to your pull secret, for example, /user/name/pullsecret . 3 Specify your hosted control plane namespace, for example, clusters-example . Ensure that agents are available in this namespace by using the oc get agent -n <hosted_control_plane_namespace> command. 4 Specify your base domain, for example, krnl.es . 5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster. 6 Specify the etcd storage class name, for example, lvm-storageclass . 7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . 8 Specify your hosted cluster namespace. 9 Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable . The default value is HighlyAvailable . 10 Specify the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest . 11 Specify the node pool replica count, for example, 3 . 
You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created. After a few moments, verify that your hosted control plane pods are up and running by entering the following command: USD oc -n <hosted_control_plane_namespace> get pods Example output NAME READY STATUS RESTARTS AGE capi-provider-7dcf5fc4c4-nr9sq 1/1 Running 0 4m32s catalog-operator-6cd867cc7-phb2q 2/2 Running 0 2m50s certified-operators-catalog-884c756c4-zdt64 1/1 Running 0 2m51s cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m32s Additional resources Manually importing a hosted cluster 4.2.3.1. Creating a hosted cluster on bare metal by using the console To create a hosted cluster by using the console, complete the following steps. Procedure Open the OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see Accessing the web console . In the console header, ensure that All Clusters is selected. Click Infrastructure Clusters . Click Create cluster Host inventory Hosted control plane . The Create cluster page is displayed. On the Create cluster page, follow the prompts to enter details about the cluster, node pools, networking, and automation. Note As you enter details about the cluster, you might find the following tips useful: If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see Creating a credential for an on-premises environment . On the Cluster details page, the pull secret is your OpenShift Container Platform pull secret that you use to access OpenShift Container Platform resources. If you selected a host inventory credential, the pull secret is automatically populated. On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace. On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the NodePort type. A DNS entry must exist for the api.<hosted_cluster_name>.<base_domain> setting that points to the destination where the API server can be reached. This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods. Review your entries and click Create . The Hosted cluster view is displayed. Monitor the deployment of the hosted cluster in the Hosted cluster view. If you do not see information about the hosted cluster, ensure that All Clusters is selected, then click the cluster name. Wait until the control plane components are ready. This process can take a few minutes. To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster. steps To access the web console, see Accessing the web console . 4.2.3.2. Creating a hosted cluster on bare metal by using a mirror registry You can use a mirror registry to create a hosted cluster on bare metal by specifying the --image-content-sources flag in the hcp create cluster command. Procedure Create a YAML file to define Image Content Source Policies (ICSP). 
See the following example: - mirrors: - brew.registry.redhat.io source: registry.redhat.io - mirrors: - brew.registry.redhat.io source: registry.stage.redhat.io - mirrors: - brew.registry.redhat.io source: registry-proxy.engineering.redhat.com Save the file as icsp.yaml . This file contains your mirror registries. To create a hosted cluster by using your mirror registries, run the following command: USD hcp create cluster agent \ --name=<hosted_cluster_name> \ 1 --pull-secret=<path_to_pull_secret> \ 2 --agent-namespace=<hosted_control_plane_namespace> \ 3 --base-domain=<basedomain> \ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \ 5 --image-content-sources icsp.yaml \ 6 --ssh-key <path_to_ssh_key> \ 7 --namespace <hosted_cluster_namespace> \ 8 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> 9 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the path to your pull secret, for example, /user/name/pullsecret . 3 Specify your hosted control plane namespace, for example, clusters-example . Ensure that agents are available in this namespace by using the oc get agent -n <hosted-control-plane-namespace> command. 4 Specify your base domain, for example, krnl.es . 5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster. 6 Specify the icsp.yaml file that defines ICSP and your mirror registries. 7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . 8 Specify your hosted cluster namespace. 9 Specify the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest . steps To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment . To access a hosted cluster, see Accessing the hosted cluster . To add hosts to the host inventory by using the Discovery Image, see Adding hosts to the host inventory by using the Discovery Image . To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest . 4.2.4. Verifying hosted cluster creation After the deployment process is complete, you can verify that the hosted cluster was created successfully. Follow these steps a few minutes after you create the hosted cluster. Procedure Obtain the kubeconfig for your new hosted cluster by entering the extract command: USD oc extract -n <hosted-control-plane-namespace> secret/admin-kubeconfig \ --to=- > kubeconfig-<hosted-cluster-name> Use the kubeconfig to view the cluster Operators of the hosted cluster. Enter the following command: USD oc get co --kubeconfig=kubeconfig-<hosted-cluster-name> Example output You can also view the running pods on your hosted cluster by entering the following command: USD oc get pods -A --kubeconfig=kubeconfig-<hosted-cluster-name> Example output 4.3. Deploying hosted control planes on OpenShift Virtualization With hosted control planes and OpenShift Virtualization, you can create OpenShift Container Platform clusters with worker nodes that are hosted by KubeVirt virtual machines. 
Hosted control planes on OpenShift Virtualization provides several benefits: Enhances resource usage by packing hosted control planes and hosted clusters in the same underlying bare metal infrastructure Separates hosted control planes and hosted clusters to provide strong isolation Reduces cluster provision time by eliminating the bare metal node bootstrapping process Manages many releases under the same base OpenShift Container Platform cluster The hosted control planes feature is enabled by default. You can use the hosted control plane command line interface, hcp, to create an OpenShift Container Platform hosted cluster. The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator". Additional resources Disabling the automatic import of hosted clusters into multicluster engine Operator Enabling or disabling the hosted control planes feature Configuring Ansible Automation Platform jobs to run on hosted clusters 4.3.1. Requirements to deploy hosted control planes on OpenShift Virtualization As you prepare to deploy hosted control planes on OpenShift Virtualization, consider the following information: Run the management cluster on bare metal. Each hosted cluster must have a cluster-wide unique name. Do not use clusters as a hosted cluster name. A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster. When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using Logical Volume Manager storage". Additional resources Recommended etcd practices Persistent storage using Logical Volume Manager Storage 4.3.1.1. Prerequisites You must meet the following prerequisites to create an OpenShift Container Platform cluster on OpenShift Virtualization: You have administrator access to an OpenShift Container Platform cluster, version 4.14 or later, specified in the KUBECONFIG environment variable. The OpenShift Container Platform management cluster has wildcard DNS routes enabled, as shown in the following command: USD oc patch ingresscontroller -n openshift-ingress-operator default \ --type=json \ -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]' The OpenShift Container Platform management cluster has OpenShift Virtualization, version 4.14 or later, installed on it. For more information, see "Installing OpenShift Virtualization using the web console". The OpenShift Container Platform management cluster is on-premises bare metal. The OpenShift Container Platform management cluster is configured with OVNKubernetes as the default pod network CNI. The OpenShift Container Platform management cluster has a default storage class. For more information, see "Postinstallation storage configuration". The following example shows how to set a default storage class: USD oc patch storageclass ocs-storagecluster-ceph-rbd \ -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' You have a valid pull secret file for the quay.io/openshift-release-dev repository.
For more information, see "Install OpenShift on any x86_64 platform with user-provisioned infrastructure". You have installed the hosted control plane command line interface. You have configured a load balancer. For more information, see "Configuring MetalLB". For optimal network performance, you are using a network maximum transmission unit (MTU) of 9000 or greater on the OpenShift Container Platform cluster that hosts the KubeVirt virtual machines. If you use a lower MTU setting, network latency and the throughput of the hosted pods are affected. Enable multiqueue on node pools only when the MTU is 9000 or greater. The multicluster engine Operator has at least one managed OpenShift Container Platform cluster. The local-cluster is automatically imported. For more information about the local-cluster , see "Advanced configuration" in the multicluster engine Operator documentation. You can check the status of your hub cluster by running the following command: USD oc get managedclusters local-cluster On the OpenShift Container Platform cluster that hosts the OpenShift Virtualization virtual machines, you are using a ReadWriteMany (RWX) storage class so that live migration can be enabled. Additional resources Installing OpenShift Virtualization using the web console Postinstallation storage configuration Install OpenShift on any x86_64 platform with user-provisioned infrastructure Configuring MetalLB Advanced configuration 4.3.1.2. Firewall and port requirements Ensure that you meet the firewall and port requirements so that ports can communicate between the management cluster, the control plane, and hosted clusters: The kube-apiserver service runs on port 6443 by default and requires ingress access for communication between the control plane components. If you use the NodePort publishing strategy, ensure that the node port that is assigned to the kube-apiserver service is exposed. If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses. If you use the NodePort publishing strategy, use a firewall rule for the ignition-server and Oauth-server settings. The konnectivity agent, which establishes a reverse tunnel to allow bi-directional communication on the hosted cluster, requires egress access to the cluster API server address on port 6443. With that egress access, the agent can reach the kube-apiserver service. If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443. If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes. If you change the default port of 6443, adjust the rules to reflect that change. Ensure that you open any ports that are required by the workloads that run in the clusters. Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary. For production deployments, use a load balancer to simplify access through a single IP address. 4.3.2. Live migration for compute nodes While the management cluster for hosted cluster virtual machines (VMs) is undergoing updates or maintenance, the hosted cluster VMs can be automatically live migrated to prevent disrupting hosted cluster workloads. As a result, the management cluster can be updated without affecting the availability and operation of the KubeVirt platform hosted clusters. 
Important The live migration of KubeVirt VMs is enabled by default provided that the VMs use ReadWriteMany (RWX) storage for both the root volume and the storage classes that are mapped to the kubevirt-csi CSI provider. You can verify that the VMs in a node pool are capable of live migration by checking the KubeVirtNodesLiveMigratable condition in the status section of a NodePool object. In the following example, the VMs cannot be live migrated because RWX storage is not used. Example configuration where VMs cannot be live migrated - lastTransitionTime: "2024-10-08T15:38:19Z" message: | 3 of 3 machines are not live migratable Machine user-np-ngst4-gw2hz: DisksNotLiveMigratable: user-np-ngst4-gw2hz is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-gw2hz-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode) Machine user-np-ngst4-npq7x: DisksNotLiveMigratable: user-np-ngst4-npq7x is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-npq7x-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode) Machine user-np-ngst4-q5nkb: DisksNotLiveMigratable: user-np-ngst4-q5nkb is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-q5nkb-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode) observedGeneration: 1 reason: DisksNotLiveMigratable status: "False" type: KubeVirtNodesLiveMigratable In the example, the VMs meet the requirements to be live migrated. Example configuration where VMs can be live migrated - lastTransitionTime: "2024-10-08T15:38:19Z" message: "All is well" observedGeneration: 1 reason: AsExpected status: "True" type: KubeVirtNodesLiveMigratable While live migration can protect VMs from disruption in normal circumstances, events such as infrastructure node failure can result in a hard restart of any VMs that are hosted on the failed node. For live migration to be successful, the source node that a VM is hosted on must be working correctly. When the VMs in a node pool cannot be live migrated, workload disruption might occur on the hosted cluster during maintenance on the management cluster. By default, the hosted control planes controllers try to drain the workloads that are hosted on KubeVirt VMs that cannot be live migrated before the VMs are stopped. Draining the hosted cluster nodes before stopping the VMs allows pod disruption budgets to protect workload availability within the hosted cluster. 4.3.3. Creating a hosted cluster with the KubeVirt platform With OpenShift Container Platform 4.14 and later, you can create a cluster with KubeVirt, to include creating with an external infrastructure. 4.3.3.1. Creating a hosted cluster with the KubeVirt platform by using the CLI To create a hosted cluster, you can use the hosted control plane command-line interface, hcp . Procedure Create a hosted cluster with the KubeVirt platform by entering the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <node_pool_replica_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <value_for_memory> \ 4 --cores <value_for_cpu> \ 5 --etcd-storage-class=<etcd_storage_class> 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the node pool replica count, for example, 3 . You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created. 
3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 6Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the etcd storage class name, for example, lvm-storageclass . Note You can use the --release-image flag to set up the hosted cluster with a specific OpenShift Container Platform release. A default node pool is created for the cluster with two virtual machine worker replicas according to the --node-pool-replicas flag. After a few moments, verify that the hosted control plane pods are running by entering the following command: USD oc -n clusters-<hosted-cluster-name> get pods Example output NAME READY STATUS RESTARTS AGE capi-provider-5cc7b74f47-n5gkr 1/1 Running 0 3m catalog-operator-5f799567b7-fd6jw 2/2 Running 0 69s certified-operators-catalog-784b9899f9-mrp6p 1/1 Running 0 66s cluster-api-6bbc867966-l4dwl 1/1 Running 0 66s . . . redhat-operators-catalog-9d5fd4d44-z8qqk 1/1 Running 0 66s A hosted cluster that has worker nodes that are backed by KubeVirt virtual machines typically takes 10-15 minutes to be fully provisioned. To check the status of the hosted cluster, see the corresponding HostedCluster resource by entering the following command: USD oc get --namespace clusters hostedclusters See the following example output, which illustrates a fully provisioned HostedCluster object: Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. 4.3.3.2. Creating a hosted cluster with the KubeVirt platform by using external infrastructure By default, the HyperShift Operator hosts both the control plane pods of the hosted cluster and the KubeVirt worker VMs within the same cluster. With the external infrastructure feature, you can place the worker node VMs on a separate cluster from the control plane pods. The management cluster is the OpenShift Container Platform cluster that runs the HyperShift Operator and hosts the control plane pods for a hosted cluster. The infrastructure cluster is the OpenShift Container Platform cluster that runs the KubeVirt worker VMs for a hosted cluster. By default, the management cluster also acts as the infrastructure cluster that hosts VMs. However, for external infrastructure, the management and infrastructure clusters are different. Prerequisites You must have a namespace on the external infrastructure cluster for the KubeVirt nodes to be hosted in. You must have a kubeconfig file for the external infrastructure cluster. Procedure You can create a hosted cluster by using the hcp command line interface. To place the KubeVirt worker VMs on the infrastructure cluster, use the --infra-kubeconfig-file and --infra-namespace arguments, as shown in the following example: USD hcp create cluster kubevirt \ --name <hosted-cluster-name> \ 1 --node-pool-replicas <worker-count> \ 2 --pull-secret <path-to-pull-secret> \ 3 --memory <value-for-memory> \ 4 --cores <value-for-cpu> \ 5 --infra-namespace=<hosted-cluster-namespace>-<hosted-cluster-name> \ 6 --infra-kubeconfig-file=<path-to-external-infra-kubeconfig> 7 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 6Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the infrastructure namespace, for example, clusters-example . 
7 Specify the path to your kubeconfig file for the infrastructure cluster, for example, /user/name/external-infra-kubeconfig. After you enter that command, the control plane pods are hosted on the management cluster that the HyperShift Operator runs on, and the KubeVirt VMs are hosted on a separate infrastructure cluster. 4.3.3.3. Creating a hosted cluster by using the console To create a hosted cluster with the KubeVirt platform by using the console, complete the following steps. Procedure Open the OpenShift Container Platform web console and log in by entering your administrator credentials. In the console header, ensure that All Clusters is selected. Click Infrastructure > Clusters. Click Create cluster > Red Hat OpenShift Virtualization > Hosted. On the Create cluster page, follow the prompts to enter details about the cluster and node pools. Note If you want to use predefined values to automatically populate fields in the console, you can create an OpenShift Virtualization credential. For more information, see Creating a credential for an on-premises environment. On the Cluster details page, the pull secret is your OpenShift Container Platform pull secret that you use to access OpenShift Container Platform resources. If you selected an OpenShift Virtualization credential, the pull secret is automatically populated. Review your entries and click Create. The Hosted cluster view is displayed. Monitor the deployment of the hosted cluster in the Hosted cluster view. If you do not see information about the hosted cluster, ensure that All Clusters is selected, and click the cluster name. Wait until the control plane components are ready. This process can take a few minutes. To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster. Additional resources To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment. To access the hosted cluster, see Accessing the hosted cluster. 4.3.4. Configuring the default ingress and DNS for hosted control planes on OpenShift Virtualization Every OpenShift Container Platform cluster includes a default application Ingress Controller, which must have a wildcard DNS record associated with it. By default, hosted clusters that are created by using the HyperShift KubeVirt provider automatically become a subdomain of the OpenShift Container Platform cluster that the KubeVirt virtual machines run on. For example, your OpenShift Container Platform cluster might have the following default ingress DNS entry: *.apps.mgmt-cluster.example.com As a result, a KubeVirt hosted cluster that is named guest and that runs on that underlying OpenShift Container Platform cluster has the following default ingress: *.apps.guest.apps.mgmt-cluster.example.com Procedure For the default ingress DNS to work properly, the cluster that hosts the KubeVirt virtual machines must allow wildcard DNS routes. You can configure this behavior by entering the following command: USD oc patch ingresscontroller -n openshift-ingress-operator default \ --type=json \ -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]' Note When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected.
This limitation applies to only the default ingress behavior. 4.3.5. Customizing ingress and DNS behavior If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration. 4.3.5.1. Deploying a hosted cluster that specifies the base domain To create a hosted cluster that specifies a base domain, complete the following steps. Procedure Enter the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <value_for_memory> \ 4 --cores <value_for_cpu> \ 5 --base-domain <basedomain> 6 1 Specify the name of your hosted cluster. 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 6Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the base domain, for example, hypershift.lab . As a result, the hosted cluster has an ingress wildcard that is configured for the cluster name and the base domain, for example, .apps.example.hypershift.lab . The hosted cluster remains in Partial status because after you create a hosted cluster with unique base domain, you must configure the required DNS records and load balancer. View the status of your hosted cluster by entering the following command: USD oc get --namespace clusters hostedclusters Example output NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example example-admin-kubeconfig Partial True False The hosted control plane is available Access the cluster by entering the following commands: USD hcp create kubeconfig --name <hosted_cluster_name> \ > <hosted_cluster_name>-kubeconfig USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get co Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console <4.x.0> False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host ingress <4.x.0> True False True 28m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing) Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. steps To fix the errors in the output, complete the steps in "Setting up the load balancer" and "Setting up a wildcard DNS". Note If your hosted cluster is on bare metal, you might need MetalLB to set up load balancer services. For more information, see "Configuring MetalLB". 4.3.5.2. Setting up the load balancer Set up the load balancer service that routes ingress traffic to the KubeVirt VMs and assigns a wildcard DNS entry to the load balancer IP address. Procedure A NodePort service that exposes the hosted cluster ingress already exists. You can export the node ports and create the load balancer service that targets those ports. 
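Before you extract the individual ports, you can optionally confirm that the NodePort service is present. The following check is a minimal sketch that uses the same service name and namespace that the next steps query:

$ oc --kubeconfig <hosted_cluster_name>-kubeconfig get service \
  router-nodeport-default -n openshift-ingress

The output lists a service of the NodePort type with ports named http and https. The next two steps extract the node port values for those entries.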
Get the HTTP node port by entering the following command: USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}' Note the HTTP node port value to use in the step. Get the HTTPS node port by entering the following command: USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}' Note the HTTPS node port value to use in the step. Create the load balancer service by entering the following command: oc apply -f - apiVersion: v1 kind: Service metadata: labels: app: <hosted_cluster_name> name: <hosted_cluster_name>-apps namespace: clusters-<hosted_cluster_name> spec: ports: - name: https-443 port: 443 protocol: TCP targetPort: <https_node_port> 1 - name: http-80 port: 80 protocol: TCP targetPort: <http-node-port> 2 selector: kubevirt.io: virt-launcher type: LoadBalancer 1 Specify the HTTPS node port value that you noted in the step. 2 Specify the HTTP node port value that you noted in the step. 4.3.5.3. Setting up a wildcard DNS Set up a wildcard DNS record or CNAME that references the external IP of the load balancer service. Procedure Get the external IP address by entering the following command: USD oc -n clusters-<hosted_cluster_name> get service <hosted-cluster-name>-apps \ -o jsonpath='{.status.loadBalancer.ingress[0].ip}' Example output 192.168.20.30 Configure a wildcard DNS entry that references the external IP address. View the following example DNS entry: *.apps.<hosted_cluster_name\>.<base_domain\>. The DNS entry must be able to route inside and outside of the cluster. DNS resolutions example dig +short test.apps.example.hypershift.lab 192.168.20.30 Check that hosted cluster status has moved from Partial to Completed by entering the following command: USD oc get --namespace clusters hostedclusters Example output NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. 4.3.6. Configuring MetalLB You must install the MetalLB Operator before you configure MetalLB. Procedure Complete the following steps to configure MetalLB on your hosted cluster: Create a MetalLB resource by saving the following sample YAML content in the configure-metallb.yaml file: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system Apply the YAML content by entering the following command: USD oc apply -f configure-metallb.yaml Example output metallb.metallb.io/metallb created Create a IPAddressPool resource by saving the following sample YAML content in the create-ip-address-pool.yaml file: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: metallb namespace: metallb-system spec: addresses: - 192.168.216.32-192.168.216.122 1 1 Create an address pool with an available range of IP addresses within the node network. Replace the IP address range with an unused pool of available IP addresses in your network. 
Apply the YAML content by entering the following command: USD oc apply -f create-ip-address-pool.yaml Example output ipaddresspool.metallb.io/metallb created Create a L2Advertisement resource by saving the following sample YAML content in the l2advertisement.yaml file: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - metallb Apply the YAML content by entering the following command: USD oc apply -f l2advertisement.yaml Example output l2advertisement.metallb.io/metallb created Additional resources For more information about MetalLB, see Installing the MetalLB Operator . 4.3.7. Configuring additional networks, guaranteed CPUs, and VM scheduling for node pools If you need to configure additional networks for node pools, request a guaranteed CPU access for Virtual Machines (VMs), or manage scheduling of KubeVirt VMs, see the following procedures. 4.3.7.1. Adding multiple networks to a node pool By default, nodes generated by a node pool are attached to the pod network. You can attach additional networks to the nodes by using Multus and NetworkAttachmentDefinitions. Procedure To add multiple networks to nodes, use the --additional-network argument by running the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --additional-network name:<namespace/name> \ 6 --additional-network name:<namespace/name> 1 Specify the name of your hosted cluster, for instance, example . 2 Specify your worker node count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify the memory value, for example, 8Gi . 5 Specify the CPU value, for example, 2 . 6 Set the value of the -additional-network argument to name:<namespace/name> . Replace <namespace/name> with a namespace and name of your NetworkAttachmentDefinitions. 4.3.7.1.1. Using an additional network as default You can add your additional network as a default network for the nodes by disabling the default pod network. Procedure To add an additional network as default to your nodes, run the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --attach-default-network false \ 6 --additional-network name:<namespace>/<network_name> 7 1 Specify the name of your hosted cluster, for instance, example . 2 Specify your worker node count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify the memory value, for example, 8Gi . 5 Specify the CPU value, for example, 2 . 6 The --attach-default-network false argument disables the default pod network. 7 Specify the additional network that you want to add to your nodes, for example, name:my-namespace/my-network . 4.3.7.2. Requesting guaranteed CPU resources By default, KubeVirt VMs might share its CPUs with other workloads on a node. This might impact performance of a VM. To avoid the performance impact, you can request a guaranteed CPU access for VMs. 
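After you create a hosted cluster with guaranteed CPUs, as shown in the procedure that follows, you can confirm that the worker VM pods received the Guaranteed QoS class. The following check is a sketch. It assumes the default clusters-<hosted_cluster_name> hosted control plane namespace and uses the same kubevirt.io: virt-launcher label that the load balancer service selector uses earlier in this section:

$ oc get pods -n clusters-<hosted_cluster_name> \
  -l kubevirt.io=virt-launcher \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.qosClass}{"\n"}{end}'

Each virt-launcher pod reports Guaranteed when the --qos-class Guaranteed flag was applied.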
Procedure To request guaranteed CPU resources, set the --qos-class argument to Guaranteed by running the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --qos-class Guaranteed 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify your worker node count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify the memory value, for example, 8Gi . 5 Specify the CPU value, for example, 2 . 6 The --qos-class Guaranteed argument guarantees that the specified number of CPU resources are assigned to VMs. 4.3.7.3. Scheduling KubeVirt VMs on a set of nodes By default, KubeVirt VMs created by a node pool are scheduled to any available nodes. You can schedule KubeVirt VMs on a specific set of nodes that has enough capacity to run the VM. Procedure To schedule KubeVirt VMs within a node pool on a specific set of nodes, use the --vm-node-selector argument by running the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --vm-node-selector <label_key>=<label_value>,<label_key>=<label_value> 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify your worker node count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify the memory value, for example, 8Gi . 5 Specify the CPU value, for example, 2 . 6 The --vm-node-selector flag defines a specific set of nodes that contains the key-value pairs. Replace <label_key> and <label_value> with the key and value of your labels respectively. 4.3.8. Scaling a node pool You can manually scale a node pool by using the oc scale command. Procedure Run the following command: NODEPOOL_NAME=USD{CLUSTER_NAME}-work NODEPOOL_REPLICAS=5 USD oc scale nodepool/USDNODEPOOL_NAME --namespace clusters \ --replicas=USDNODEPOOL_REPLICAS After a few moments, enter the following command to see the status of the node pool: USD oc --kubeconfig USDCLUSTER_NAME-kubeconfig get nodes Example output NAME STATUS ROLES AGE VERSION example-9jvnf Ready worker 97s v1.27.4+18eadca example-n6prw Ready worker 116m v1.27.4+18eadca example-nc6g4 Ready worker 117m v1.27.4+18eadca example-thp29 Ready worker 4m17s v1.27.4+18eadca example-twxns Ready worker 88s v1.27.4+18eadca 4.3.8.1. Adding node pools You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as memory and CPU requirements. Procedure To create a node pool, enter the following information. 
In this example, the node pool has more CPUs assigned to the VMs: export NODEPOOL_NAME=USD{CLUSTER_NAME}-extra-cpu export WORKER_COUNT="2" export MEM="6Gi" export CPU="4" export DISK="16" USD hcp create nodepool kubevirt \ --cluster-name USDCLUSTER_NAME \ --name USDNODEPOOL_NAME \ --node-count USDWORKER_COUNT \ --memory USDMEM \ --cores USDCPU \ --root-volume-size USDDISK Check the status of the node pool by listing nodepool resources in the clusters namespace: USD oc get nodepools --namespace clusters Example output NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE example example 5 5 False False <4.x.0> example-extra-cpu example 2 False False True True Minimum availability requires 2 replicas, current 0 available Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. After some time, you can check the status of the node pool by entering the following command: USD oc --kubeconfig USDCLUSTER_NAME-kubeconfig get nodes Example output NAME STATUS ROLES AGE VERSION example-9jvnf Ready worker 97s v1.27.4+18eadca example-n6prw Ready worker 116m v1.27.4+18eadca example-nc6g4 Ready worker 117m v1.27.4+18eadca example-thp29 Ready worker 4m17s v1.27.4+18eadca example-twxns Ready worker 88s v1.27.4+18eadca example-extra-cpu-zh9l5 Ready worker 2m6s v1.27.4+18eadca example-extra-cpu-zr8mj Ready worker 102s v1.27.4+18eadca Verify that the node pool is in the status that you expect by entering this command: USD oc get nodepools --namespace clusters Example output NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE example example 5 5 False False <4.x.0> example-extra-cpu example 2 2 False False <4.x.0> Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. Additional resources To scale down the data plane to zero, see Scaling down the data plane to zero . 4.3.9. Verifying hosted cluster creation on OpenShift Virtualization To verify that your hosted cluster was successfully created, complete the following steps. 
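In addition to the checks in the following procedure, you can query the Available condition on the HostedCluster resource directly. The following command is a sketch that assumes the hosted cluster was created in the default clusters namespace:

$ oc get hostedcluster <hosted_cluster_name> -n clusters \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'

A value of True indicates that the hosted control plane is available.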
Procedure Verify that the HostedCluster resource transitioned to the completed state by entering the following command: USD oc get --namespace clusters hostedclusters <hosted_cluster_name> Example output NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters example 4.12.2 example-admin-kubeconfig Completed True False The hosted control plane is available Verify that all the cluster operators in the hosted cluster are online by entering the following commands: USD hcp create kubeconfig --name <hosted_cluster_name> \ > <hosted_cluster_name>-kubeconfig USD oc get co --kubeconfig=<hosted_cluster_name>-kubeconfig Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console 4.12.2 True False False 2m38s csi-snapshot-controller 4.12.2 True False False 4m3s dns 4.12.2 True False False 2m52s image-registry 4.12.2 True False False 2m8s ingress 4.12.2 True False False 22m kube-apiserver 4.12.2 True False False 23m kube-controller-manager 4.12.2 True False False 23m kube-scheduler 4.12.2 True False False 23m kube-storage-version-migrator 4.12.2 True False False 4m52s monitoring 4.12.2 True False False 69s network 4.12.2 True False False 4m3s node-tuning 4.12.2 True False False 2m22s openshift-apiserver 4.12.2 True False False 23m openshift-controller-manager 4.12.2 True False False 23m openshift-samples 4.12.2 True False False 2m15s operator-lifecycle-manager 4.12.2 True False False 22m operator-lifecycle-manager-catalog 4.12.2 True False False 23m operator-lifecycle-manager-packageserver 4.12.2 True False False 23m service-ca 4.12.2 True False False 4m41s storage 4.12.2 True False False 4m43s 4.4. Deploying hosted control planes on non-bare-metal agent machines You can deploy hosted control planes by configuring a cluster to function as a hosting cluster. The hosting cluster is an OpenShift Container Platform cluster where the control planes are hosted. The hosting cluster is also known as the management cluster. Important Hosted control planes on non-bare-metal agent machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note The management cluster is not the same thing as the managed cluster. A managed cluster is a cluster that the hub cluster manages. The hosted control planes feature is enabled by default. The multicluster engine Operator supports only the default local-cluster managed hub cluster. On Red Hat Advanced Cluster Management (RHACM) 2.10, you can use the local-cluster managed hub cluster as the hosting cluster. A hosted cluster is an OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the hosting cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hcp command-line interface (CLI) to create a hosted cluster. The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator". 4.4.1. 
Preparing to deploy hosted control planes on non-bare-metal agent machines As you prepare to deploy hosted control planes on non-bare-metal agent machines, consider the following information: You can add agent machines as worker nodes to a hosted cluster by using the Agent platform. An agent machine represents a host that is booted with a Discovery Image and is ready to be provisioned as an OpenShift Container Platform node. The Agent platform is part of the central infrastructure management service. For more information, see Enabling the central infrastructure management service . All hosts that are not bare metal require a manual boot with a Discovery Image ISO that the central infrastructure management provides. When you scale up the node pool, a machine is created for every replica. For every machine, the Cluster API provider finds and installs an Agent that is approved, is passing validations, is not currently in use, and meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions. When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you can reuse the Agents, you must restart them by using the Discovery image. When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using logical volume manager storage" in the OpenShift Container Platform documentation. 4.4.1.1. Prerequisites for deploying hosted control planes on non-bare-metal agent machines Before you deploy hosted control planes on non-bare-metal agent machines, ensure that you meet the following prerequisites: You must have multicluster engine for Kubernetes Operator 2.5 or later installed on an OpenShift Container Platform cluster. You can install the multicluster engine Operator as an Operator from the OpenShift Container Platform OperatorHub. You must have at least one managed OpenShift Container Platform cluster for the multicluster engine Operator. The local-cluster management cluster is automatically imported. For more information about the local-cluster , see Advanced configuration in the Red Hat Advanced Cluster Management documentation. You can check the status of your management cluster by running the following command: USD oc get managedclusters local-cluster You have enabled central infrastructure management. For more information, see Enabling the central infrastructure management service in the Red Hat Advanced Cluster Management documentation. You have installed the hcp command-line interface. Your hosted cluster has a cluster-wide unique name. You are running the management cluster and workers on the same infrastructure. Additional resources Advanced configuration Enabling the central infrastructure management service 4.4.1.2. Firewall, port, and service requirements for non-bare-metal agent machines You must meet the firewall and port requirements so that ports can communicate between the management cluster, the control plane, and hosted clusters. Note Services run on their default ports. However, if you use the NodePort publishing strategy, services run on the port that is assigned by the NodePort service.
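If you use the NodePort publishing strategy, you can look up the ports that were actually assigned before you write firewall rules. The following command is a sketch. It assumes that the services for the hosted control plane run in the hosted control plane namespace; adjust the namespace to match your environment:

$ oc get services -n <hosted_control_plane_namespace> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.type}{"\t"}{.spec.ports[*].nodePort}{"\n"}{end}'

Services of the NodePort type show the assigned ports in the last column. Open those ports in addition to the defaults that are described for each service below.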
Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary. For production deployments, use a load balancer to simplify access through a single IP address. A hosted control plane exposes the following services on non-bare-metal agent machines: APIServer The APIServer service runs on port 6443 by default and requires ingress access for communication between the control plane components. If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses. OAuthServer The OAuthServer service runs on port 443 by default when you use the route and ingress to expose the service. If you use the NodePort publishing strategy, use a firewall rule for the OAuthServer service. Konnectivity The Konnectivity service runs on port 443 by default when you use the route and ingress to expose the service. The Konnectivity agent establishes a reverse tunnel to allow the control plane to access the network for the hosted cluster. The agent uses egress to connect to the Konnectivity server. The server is exposed by using either a route on port 443 or a manually assigned NodePort . If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443. If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes. Ignition The Ignition service runs on port 443 by default when you use the route and ingress to expose the service. If you use the NodePort publishing strategy, use a firewall rule for the Ignition service. You do not need the following services on non-bare-metal agent machines: OVNSbDb OIDC 4.4.1.3. Infrastructure requirements for non-bare-metal agent machines The Agent platform does not create any infrastructure, but it has the following infrastructure requirements: Agents: An Agent represents a host that is booted with a discovery image and is ready to be provisioned as an OpenShift Container Platform node. DNS: The API and ingress endpoints must be routable. Additional resources Recommended etcd practices Persistent storage using logical volume manager storage Disabling the automatic import of hosted clusters into multicluster engine Operator Manually enabling the hosted control planes feature Disabling the hosted control planes feature Configuring Ansible Automation Platform jobs to run on hosted clusters 4.4.2. Configuring DNS on non-bare-metal agent machines The API Server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<basedomain> that points to destination where the API Server can be reached. The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods. If you are configuring DNS for a connected environment on an IPv4 network, see the following example DNS configuration: api.example.krnl.es. IN A 192.168.122.20 api.example.krnl.es. IN A 192.168.122.21 api.example.krnl.es. IN A 192.168.122.22 api-int.example.krnl.es. IN A 192.168.122.20 api-int.example.krnl.es. IN A 192.168.122.21 api-int.example.krnl.es. IN A 192.168.122.22 `*`.apps.example.krnl.es. 
IN A 192.168.122.23 If you are configuring DNS for a disconnected environment on an IPv6 network, see the following example DNS configuration: api.example.krnl.es. IN A 2620:52:0:1306::5 api.example.krnl.es. IN A 2620:52:0:1306::6 api.example.krnl.es. IN A 2620:52:0:1306::7 api-int.example.krnl.es. IN A 2620:52:0:1306::5 api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. IN A 2620:52:0:1306::10 If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6. See the following example DNS configuration: host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9] 4.4.3. Creating a hosted cluster on non-bare-metal agent machines by using the CLI When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on bare metal or import one. As you create a hosted cluster, review the following guidelines: Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster in order for multicluster engine Operator to manage it. Do not use clusters as a hosted cluster name. A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster. Procedure Create the hosted control plane namespace by entering the following command: USD oc create ns <hosted_cluster_namespace>-<hosted_cluster_name> 1 1 Replace <hosted_cluster_namespace> with your hosted cluster namespace name, for example, clusters . Replace <hosted_cluster_name> with your hosted cluster name. Create a hosted cluster by entering the following command: USD hcp create cluster agent \ --name=<hosted_cluster_name> \ 1 --pull-secret=<path_to_pull_secret> \ 2 --agent-namespace=<hosted_control_plane_namespace> \ 3 --base-domain=<basedomain> \ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \ 5 --etcd-storage-class=<etcd_storage_class> \ 6 --ssh-key <path_to_ssh_key> \ 7 --namespace <hosted_cluster_namespace> \ 8 --control-plane-availability-policy HighlyAvailable \ 9 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release> \ 10 --node-pool-replicas <node_pool_replica_count> 11 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the path to your pull secret, for example, /user/name/pullsecret . 3 Specify your hosted control plane namespace, for example, clusters-example . 
Ensure that agents are available in this namespace by using the oc get agent -n <hosted-control-plane-namespace> command. 4 Specify your base domain, for example, krnl.es . 5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster. 6 Verify that you have a default storage class configured for your cluster. Otherwise, you might end up with pending PVCs. Specify the etcd storage class name, for example, lvm-storageclass . 7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . 8 Specify your hosted cluster namespace. 9 Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable . The default value is HighlyAvailable . 10 Specify the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . 11 Specify the node pool replica count, for example, 3 . You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created. Verification After a few moments, verify that your hosted control plane pods are up and running by entering the following command: USD oc -n <hosted_control_plane_namespace> get pods Example output NAME READY STATUS RESTARTS AGE catalog-operator-6cd867cc7-phb2q 2/2 Running 0 2m50s control-plane-operator-f6b4c8465-4k5dh 1/1 Running 0 4m32s Additional resources Manually importing a hosted cluster 4.4.3.1. Creating a hosted cluster on non-bare-metal agent machines by using the web console You can create a hosted cluster on non-bare-metal agent machines by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Open the OpenShift Container Platform web console and log in by entering your administrator credentials. In the console header, select All Clusters . Click Infrastructure Clusters . Click Create cluster Host inventory Hosted control plane . The Create cluster page is displayed. On the Create cluster page, follow the prompts to enter details about the cluster, node pools, networking, and automation. As you enter details about the cluster, you might find the following tips useful: If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see Creating a credential for an on-premises environment . On the Cluster details page, the pull secret is your OpenShift Container Platform pull secret that you use to access OpenShift Container Platform resources. If you selected a host inventory credential, the pull secret is automatically populated. On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace. On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the NodePort type. A DNS entry must exist for the api.<hosted_cluster_name>.<basedomain> setting that points to the destination where the API server can be reached. 
This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods. Review your entries and click Create . The Hosted cluster view is displayed. Monitor the deployment of the hosted cluster in the Hosted cluster view. If you do not see information about the hosted cluster, ensure that All Clusters is selected, and click the cluster name. Wait until the control plane components are ready. This process can take a few minutes. To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster. steps To access the web console, see Accessing the web console . 4.4.3.2. Creating a hosted cluster on bare metal by using a mirror registry You can use a mirror registry to create a hosted cluster on bare metal by specifying the --image-content-sources flag in the hcp create cluster command. Procedure Create a YAML file to define Image Content Source Policies (ICSP). See the following example: - mirrors: - brew.registry.redhat.io source: registry.redhat.io - mirrors: - brew.registry.redhat.io source: registry.stage.redhat.io - mirrors: - brew.registry.redhat.io source: registry-proxy.engineering.redhat.com Save the file as icsp.yaml . This file contains your mirror registries. To create a hosted cluster by using your mirror registries, run the following command: USD hcp create cluster agent \ --name=<hosted_cluster_name> \ 1 --pull-secret=<path_to_pull_secret> \ 2 --agent-namespace=<hosted_control_plane_namespace> \ 3 --base-domain=<basedomain> \ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \ 5 --image-content-sources icsp.yaml \ 6 --ssh-key <path_to_ssh_key> \ 7 --namespace <hosted_cluster_namespace> \ 8 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> 9 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the path to your pull secret, for example, /user/name/pullsecret . 3 Specify your hosted control plane namespace, for example, clusters-example . Ensure that agents are available in this namespace by using the oc get agent -n <hosted-control-plane-namespace> command. 4 Specify your base domain, for example, krnl.es . 5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster. 6 Specify the icsp.yaml file that defines ICSP and your mirror registries. 7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . 8 Specify your hosted cluster namespace. 9 Specify the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest . steps To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment . To access a hosted cluster, see Accessing the hosted cluster . To add hosts to the host inventory by using the Discovery Image, see Adding hosts to the host inventory by using the Discovery Image . 
To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest . 4.4.4. Verifying hosted cluster creation on non-bare-metal agent machines After the deployment process is complete, you can verify that the hosted cluster was created successfully. Follow these steps a few minutes after you create the hosted cluster. Procedure Obtain the kubeconfig file for your new hosted cluster by entering the following command: USD oc extract -n <hosted_cluster_namespace> \ secret/<hosted_cluster_name>-admin-kubeconfig --to=- \ > kubeconfig-<hosted_cluster_name> Use the kubeconfig file to view the cluster Operators of the hosted cluster. Enter the following command: USD oc get co --kubeconfig=kubeconfig-<hosted_cluster_name> Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console 4.10.26 True False False 2m38s csi-snapshot-controller 4.10.26 True False False 4m3s dns 4.10.26 True False False 2m52s View the running pods on your hosted cluster by entering the following command: USD oc get pods -A --kubeconfig=kubeconfig-<hosted_cluster_name> Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system konnectivity-agent-khlqv 0/1 Running 0 3m52s openshift-cluster-samples-operator cluster-samples-operator-6b5bcb9dff-kpnbc 2/2 Running 0 20m openshift-monitoring alertmanager-main-0 6/6 Running 0 100s openshift-monitoring openshift-state-metrics-677b9fb74f-qqp6g 3/3 Running 0 104s 4.5. Deploying hosted control planes on IBM Z You can deploy hosted control planes by configuring a cluster to function as a management cluster. The management cluster is the OpenShift Container Platform cluster where the control planes are hosted. The management cluster is also known as the hosting cluster. Note The management cluster is not the managed cluster. A managed cluster is a cluster that the hub cluster manages. You can convert a managed cluster to a management cluster by using the hypershift add-on to deploy the HyperShift Operator on that cluster. Then, you can start to create the hosted cluster. The multicluster engine Operator supports only the default local-cluster , which is a hub cluster that is managed, and the hub cluster as the management cluster. To provision hosted control planes on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service". Each IBM Z system host must be started with the PXE images provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host. When you create a hosted cluster with the Agent platform, HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. 4.5.1. Prerequisites to configure hosted control planes on IBM Z The multicluster engine for Kubernetes Operator version 2.5 or later must be installed on an OpenShift Container Platform cluster. You can install multicluster engine Operator as an Operator from the OpenShift Container Platform OperatorHub. The multicluster engine Operator must have at least one managed OpenShift Container Platform cluster. The local-cluster is automatically imported in multicluster engine Operator 2.5 and later. 
For more information about the local-cluster , see Advanced configuration in the Red Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command: USD oc get managedclusters local-cluster You need a hosting cluster with at least three worker nodes to run the HyperShift Operator. You need to enable the central infrastructure management service. For more information, see Enabling the central infrastructure management service . You need to install the hosted control plane command line interface. For more information, see Installing the hosted control plane command line interface . Additional resources Advanced configuration Enabling the central infrastructure management service Installing the hosted control planes command-line interface Enabling or disabling the hosted control planes feature 4.5.2. IBM Z infrastructure requirements The Agent platform does not create any infrastructure, but requires the following resources for infrastructure: Agents: An Agent represents a host that is booted with a discovery image, or PXE image and is ready to be provisioned as an OpenShift Container Platform node. DNS: The API and Ingress endpoints must be routable. The hosted control planes feature is enabled by default. If you disabled the feature and want to manually enable it, or if you need to disable the feature, see Enabling or disabling the hosted control planes feature . Additional resources Enabling or disabling the hosted control planes feature 4.5.3. DNS configuration for hosted control planes on IBM Z The API server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for the api.<hosted_cluster_name>.<base_domain> that points to the destination where the API server is reachable. The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer deployed to redirect incoming traffic to the Ingress pods. See the following example of a DNS configuration: USD cat /var/named/<example.krnl.es.zone> Example output USD TTL 900 @ IN SOA bastion.example.krnl.es.com. hostmaster.example.krnl.es.com. ( 2019062002 1D 1H 1W 3H ) IN NS bastion.example.krnl.es.com. ; ; api IN A 1xx.2x.2xx.1xx 1 api-int IN A 1xx.2x.2xx.1xx ; ; *.apps IN A 1xx.2x.2xx.1xx ; ;EOF 1 The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes. For IBM z/VM, add IP addresses that correspond to the IP address of the agent. compute-0 IN A 1xx.2x.2xx.1yy compute-1 IN A 1xx.2x.2xx.1yy 4.5.4. Creating a hosted cluster on bare metal When you create a hosted cluster with the Agent platform, HyperShift installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on bare metal or import one. As you create a hosted cluster, keep the following guidelines in mind: Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster in order for multicluster engine Operator to manage it. Do not use clusters as a hosted cluster name. A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster. The most common service publishing strategy is to expose services through a load balancer. That strategy is the preferred method for exposing the Kubernetes API server. 
If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the servicePublishingStrategy information in the HostedCluster custom resource. Procedure Create the hosted control plane namespace by entering the following command: USD oc create ns <hosted_cluster_namespace>-<hosted_cluster_name> Replace <hosted_cluster_namespace> with your hosted cluster namespace name, for example, clusters . Replace <hosted_cluster_name> with your hosted cluster name. Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending PVCs. Run the following command: USD hcp create cluster agent \ --name=<hosted_cluster_name> \ 1 --pull-secret=<path_to_pull_secret> \ 2 --agent-namespace=<hosted_control_plane_namespace> \ 3 --base-domain=<basedomain> \ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \ 5 --etcd-storage-class=<etcd_storage_class> \ 6 --ssh-key <path_to_ssh_public_key> \ 7 --namespace <hosted_cluster_namespace> \ 8 --control-plane-availability-policy HighlyAvailable \ 9 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \ 10 --node-pool-replicas <node_pool_replica_count> 11 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the path to your pull secret, for example, /user/name/pullsecret . 3 Specify your hosted control plane namespace, for example, clusters-example . Ensure that agents are available in this namespace by using the oc get agent -n <hosted_control_plane_namespace> command. 4 Specify your base domain, for example, krnl.es . 5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster. 6 Specify the etcd storage class name, for example, lvm-storageclass . 7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . 8 Specify your hosted cluster namespace. 9 Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable . The default value is HighlyAvailable . 10 Specify the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest . 11 Specify the node pool replica count, for example, 3 . You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created. After a few moments, verify that your hosted control plane pods are up and running by entering the following command: USD oc -n <hosted_control_plane_namespace> get pods Example output NAME READY STATUS RESTARTS AGE capi-provider-7dcf5fc4c4-nr9sq 1/1 Running 0 4m32s catalog-operator-6cd867cc7-phb2q 2/2 Running 0 2m50s certified-operators-catalog-884c756c4-zdt64 1/1 Running 0 2m51s cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m32s 4.5.5. Creating an InfraEnv resource for hosted control planes on IBM Z An InfraEnv is an environment where hosts that are booted with PXE images can join as agents. In this case, the agents are created in the same namespace as your hosted control plane. 
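The InfraEnv resource in the following procedure references a Secret named pull-secret through the pullSecretRef field. If that Secret does not already exist in the hosted control plane namespace, you can create it from your pull secret file first. The following command is a sketch that assumes the pull secret file path that is used elsewhere in this document:

$ oc create secret generic pull-secret \
  -n <hosted_control_plane_namespace> \
  --from-file=.dockerconfigjson=/user/name/pullsecret \
  --type=kubernetes.io/dockerconfigjson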
Procedure Create a YAML file to contain the configuration. See the following example: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <hosted_cluster_name> namespace: <hosted_control_plane_namespace> spec: cpuArchitecture: s390x pullSecretRef: name: pull-secret sshAuthorizedKey: <ssh_public_key> Save the file as infraenv-config.yaml . Apply the configuration by entering the following command: USD oc apply -f infraenv-config.yaml To fetch the URL to download the PXE images, such as, initrd.img , kernel.img , or rootfs.img , which allows IBM Z machines to join as agents, enter the following command: USD oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json 4.5.6. Adding IBM Z agents to the InfraEnv resource To attach compute nodes to a hosted control plane, create agents that help you to scale the node pool. Adding agents in an IBM Z environment requires additional steps, which are described in detail in this section. Unless stated otherwise, these procedures apply to both z/VM and RHEL KVM installations on IBM Z and IBM LinuxONE. 4.5.6.1. Adding IBM Z KVM as agents For IBM Z with KVM, run the following command to start your IBM Z environment with the downloaded PXE images from the InfraEnv resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the InfraEnv resource on the management cluster. Procedure Run the following command: virt-install \ --name "<vm_name>" \ 1 --autostart \ --ram=16384 \ --cpu host \ --vcpus=4 \ --location "<path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img" \ 2 --disk <qcow_image_path> \ 3 --network network:macvtap-net,mac=<mac_address> \ 4 --graphics none \ --noautoconsole \ --wait=-1 --extra-args "rd.neednet=1 nameserver=<nameserver> coreos.live.rootfs_url=http://<http_server>/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" 5 1 Specify the name of the virtual machine. 2 Specify the location of the kernel_initrd_image file. 3 Specify the disk image path. 4 Specify the Mac address. 5 Specify the server name of the agents. For ISO boot, download ISO from the InfraEnv resource and boot the nodes by running the following command: virt-install \ --name "<vm_name>" \ 1 --autostart \ --memory=16384 \ --cpu host \ --vcpus=4 \ --network network:macvtap-net,mac=<mac_address> \ 2 --cdrom "<path_to_image.iso>" \ 3 --disk <qcow_image_path> \ --graphics none \ --noautoconsole \ --os-variant <os_version> \ 4 --wait=-1 1 Specify the name of the virtual machine. 2 Specify the Mac address. 3 Specify the location of the image.iso file. 4 Specify the operating system version that you are using. 4.5.6.2. Adding IBM Z LPAR as agents You can add the Logical Partition (LPAR) on IBM Z or IBM LinuxONE as a compute node to a hosted control plane. 
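The boot parameter file in the following procedure expects the Red Hat Enterprise Linux CoreOS (RHCOS) rootfs image to be reachable over HTTP. After you fetch the artifact URLs from the InfraEnv resource, as shown earlier in this section, you can download the PXE artifacts and stage them on the HTTP or file server that your environment uses. The following commands are a sketch; the placeholder URLs are hypothetical and come from the InfraEnv output:

$ curl -k -L -o kernel.img "<kernel_url_from_infraenv>"
$ curl -k -L -o initrd.img "<initrd_url_from_infraenv>"
$ curl -k -L -o rootfs.img "<rootfs_url_from_infraenv>"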
Procedure Create a boot parameter file for the agents: Example parameter file rd.neednet=1 cio_ignore=all,!condev \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1 coreos.inst.persistent-kargs=console=ttysclp0 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ 2 rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> \ 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 \ 4 random.trust_cpu=on rd.luks.options=discard 1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported. 2 For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE . 3 For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. 4 Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets. Download the .ins and initrd.img.addrsize files from the InfraEnv resource. By default, the URL for the .ins and initrd.img.addrsize files is not available in the InfraEnv resource. You must edit the URL to fetch those artifacts. Update the kernel URL endpoint to include ins-file by running the followign command: USD curl -k -L -o generic.ins "< url for ins-file >" Example URL https://.../boot-artifacts/ins-file?arch=s390x&version=4.17.0 Update the initrd URL endpoint to include s390x-initrd-addrsize : Example URL https://..../s390x-initrd-addrsize?api_key=<api-key>&arch=s390x&version=4.17.0 Transfer the initrd , kernel , generic.ins , and initrd.img.addrsize parameter files to the file server. For more information about how to transfer the files with FTP and boot, see "Installing in an LPAR". Start the machine. Repeat the procedure for all other machines in the cluster. Additional resources Installing in an LPAR 4.5.6.3. Adding IBM z/VM as agents If you want to use a static IP for z/VM guest, you must configure the NMStateConfig attribute for the z/VM agent so that the IP parameter persists in the second start. Complete the following steps to start your IBM Z environment with the downloaded PXE images from the InfraEnv resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the InfraEnv resource on the management cluster. Procedure Update the parameter file to add the rootfs_url , network_adaptor and disk_type values. Example parameter file rd.neednet=1 cio_ignore=all,!condev \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1 coreos.inst.persistent-kargs=console=ttysclp0 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ 2 rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> \ 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 \ 4 1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported. 2 For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE . 
3 For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. 4 Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets. Move initrd , kernel images, and the parameter file to the guest VM by running the following commands: vmur pun -r -u -N kernel.img USDINSTALLERKERNELLOCATION/<image name> vmur pun -r -u -N generic.parm USDPARMFILELOCATION/paramfilename vmur pun -r -u -N initrd.img USDINSTALLERINITRAMFSLOCATION/<image name> Run the following command from the guest VM console: cp ipl c To list the agents and their properties, enter the following command: USD oc -n <hosted_control_plane_namespace> get agents Example output NAME CLUSTER APPROVED ROLE STAGE 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d auto-assign 5e498cd3-542c-e54f-0c58-ed43e28b568a auto-assign Run the following command to approve the agent. USD oc -n <hosted_control_plane_namespace> patch agent \ 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d -p \ '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-zvm-0.hostedn.example.com"}}' \ 1 --type merge 1 Optionally, you can set the agent ID <installation_disk_id> and <hostname> in the specification. Run the following command to verify that the agents are approved: USD oc -n <hosted_control_plane_namespace> get agents Example output NAME CLUSTER APPROVED ROLE STAGE 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d true auto-assign 5e498cd3-542c-e54f-0c58-ed43e28b568a true auto-assign 4.5.7. Scaling the NodePool object for a hosted cluster on IBM Z The NodePool object is created when you create a hosted cluster. By scaling the NodePool object, you can add more compute nodes to the hosted control plane. When you scale up a node pool, a machine is created. The Cluster API provider finds an Agent that is approved, is passing validations, is not currently in use, and meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions. When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you reuse the clusters, you must boot the clusters by using the PXE image to update the number of nodes. Procedure Run the following command to scale the NodePool object to two nodes: USD oc -n <clusters_namespace> scale nodepool <nodepool_name> --replicas 2 The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. 
The agents pass through the transition phases in the following order: binding discovering insufficient installing installing-in-progress added-to-existing-cluster Run the following command to see the status of a specific scaled agent: USD oc -n <hosted_control_plane_namespace> get agent -o \ jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} \ Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}' Example output BMH: Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d State: known-unbound BMH: Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a State: insufficient Run the following command to see the transition phases: USD oc -n <hosted_control_plane_namespace> get agent Example output NAME CLUSTER APPROVED ROLE STAGE 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d hosted-forwarder true auto-assign 5e498cd3-542c-e54f-0c58-ed43e28b568a true auto-assign da503cf1-a347-44f2-875c-4960ddb04091 hosted-forwarder true auto-assign Run the following command to generate the kubeconfig file to access the hosted cluster: USD hcp create kubeconfig \ --namespace <clusters_namespace> \ --name <hosted_cluster_namespace> > <hosted_cluster_name>.kubeconfig After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes by entering the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes Example output NAME STATUS ROLES AGE VERSION worker-zvm-0.hostedn.example.com Ready worker 5m41s v1.24.0+3882f8f worker-zvm-1.hostedn.example.com Ready worker 6m3s v1.24.0+3882f8f Cluster Operators start to reconcile by adding workloads to the nodes. Enter the following command to verify that two machines were created when you scaled up the NodePool object: USD oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io Example output NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION hosted-forwarder-79558597ff-5tbqp hosted-forwarder-crqq5 worker-zvm-0.hostedn.example.com agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d Running 41h 4.15.0 hosted-forwarder-79558597ff-lfjfk hosted-forwarder-crqq5 worker-zvm-1.hostedn.example.com agent://5e498cd3-542c-e54f-0c58-ed43e28b568a Running 41h 4.15.0 Run the following command to check the cluster version: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.15.0-ec.2 True False 40h Cluster version is 4.15.0-ec.2 Run the following command to check the cluster operator status: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators For each component of your cluster, the output shows the following cluster operator statuses: NAME , VERSION , AVAILABLE , PROGRESSING , DEGRADED , SINCE , and MESSAGE . For an output example, see Initial Operator configuration . Additional resources Initial Operator configuration 4.6. Deploying hosted control planes on IBM Power You can deploy hosted control planes by configuring a cluster to function as a hosting cluster. The hosting cluster is an OpenShift Container Platform cluster where the control planes are hosted. The hosting cluster is also known as the management cluster. Note The management cluster is not the managed cluster. A managed cluster is a cluster that the hub cluster manages. The multicluster engine Operator supports only the default local-cluster , which is a hub cluster that is managed, and the hub cluster as the hosting cluster. 
To provision hosted control planes on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service". Each IBM Power host must be started with a Discovery Image that the central infrastructure management provides. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host. When you create a hosted cluster with the Agent platform, HyperShift installs the Agent Cluster API provider in the hosted control plane namespace. 4.6.1. Prerequisites to configure hosted control planes on IBM Power The multicluster engine for Kubernetes Operator version 2.7 and later installed on an OpenShift Container Platform cluster. The multicluster engine Operator is automatically installed when you install Red Hat Advanced Cluster Management (RHACM). You can also install the multicluster engine Operator without RHACM as an Operator from the OpenShift Container Platform OperatorHub. The multicluster engine Operator must have at least one managed OpenShift Container Platform cluster. The local-cluster managed hub cluster is automatically imported in the multicluster engine Operator version 2.7 and later. For more information about local-cluster , see Advanced configuration in the RHACM documentation. You can check the status of your hub cluster by running the following command: USD oc get managedclusters local-cluster You need a hosting cluster with at least 3 worker nodes to run the HyperShift Operator. You need to enable the central infrastructure management service. For more information, see "Enabling the central infrastructure management service". You need to install the hosted control plane command-line interface. For more information, see "Installing the hosted control plane command-line interface". The hosted control planes feature is enabled by default. If you disabled the feature and want to manually enable it, see "Manually enabling the hosted control planes feature". If you need to disable the feature, see "Disabling the hosted control planes feature". Additional resources Advanced configuration Enabling the central infrastructure management service Installing the hosted control planes command-line interface Manually enabling the hosted control planes feature Disabling the hosted control planes feature 4.6.2. IBM Power infrastructure requirements The Agent platform does not create any infrastructure, but requires the following resources for infrastructure: Agents: An Agent represents a host that is booted with a discovery image and is ready to be provisioned as an OpenShift Container Platform node. DNS: The API and Ingress endpoints must be routable. 4.6.3. DNS configuration for hosted control planes on IBM Power The API server for the hosted cluster is exposed. A DNS entry must exist for the api.<hosted_cluster_name>.<basedomain> entry that points to the destination where the API server is reachable. The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods. See the following example of a DNS configuration: USD cat /var/named/<example.krnl.es.zone> Example output USD TTL 900 @ IN SOA bastion.example.krnl.es.com. 
hostmaster.example.krnl.es.com. ( 2019062002 1D 1H 1W 3H ) IN NS bastion.example.krnl.es.com. ; ; api IN A 1xx.2x.2xx.1xx 1 api-int IN A 1xx.2x.2xx.1xx ; ; *.apps.<hosted-cluster-name>.<basedomain> IN A 1xx.2x.2xx.1xx ; ;EOF 1 The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes. For IBM Power, add IP addresses that correspond to the IP address of the agent. Example configuration compute-0 IN A 1xx.2x.2xx.1yy compute-1 IN A 1xx.2x.2xx.1yy 4.6.4. Creating a hosted cluster on bare metal When you create a hosted cluster with the Agent platform, HyperShift installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on bare metal or import one. As you create a hosted cluster, keep the following guidelines in mind: Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster in order for multicluster engine Operator to manage it. Do not use clusters as a hosted cluster name. A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster. The most common service publishing strategy is to expose services through a load balancer. That strategy is the preferred method for exposing the Kubernetes API server. If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the servicePublishingStrategy information in the HostedCluster custom resource. Procedure Create the hosted control plane namespace by entering the following command: USD oc create ns <hosted_cluster_namespace>-<hosted_cluster_name> Replace <hosted_cluster_namespace> with your hosted cluster namespace name, for example, clusters . Replace <hosted_cluster_name> with your hosted cluster name. Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending PVCs. Run the following command: USD hcp create cluster agent \ --name=<hosted_cluster_name> \ 1 --pull-secret=<path_to_pull_secret> \ 2 --agent-namespace=<hosted_control_plane_namespace> \ 3 --base-domain=<basedomain> \ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \ 5 --etcd-storage-class=<etcd_storage_class> \ 6 --ssh-key <path_to_ssh_public_key> \ 7 --namespace <hosted_cluster_namespace> \ 8 --control-plane-availability-policy HighlyAvailable \ 9 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \ 10 --node-pool-replicas <node_pool_replica_count> 11 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the path to your pull secret, for example, /user/name/pullsecret . 3 Specify your hosted control plane namespace, for example, clusters-example . Ensure that agents are available in this namespace by using the oc get agent -n <hosted_control_plane_namespace> command. 4 Specify your base domain, for example, krnl.es . 5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster. 6 Specify the etcd storage class name, for example, lvm-storageclass . 7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . 8 Specify your hosted cluster namespace. 
9 Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable . The default value is HighlyAvailable . 10 Specify the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest . 11 Specify the node pool replica count, for example, 3 . You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created. After a few moments, verify that your hosted control plane pods are up and running by entering the following command: USD oc -n <hosted_control_plane_namespace> get pods Example output NAME READY STATUS RESTARTS AGE capi-provider-7dcf5fc4c4-nr9sq 1/1 Running 0 4m32s catalog-operator-6cd867cc7-phb2q 2/2 Running 0 2m50s certified-operators-catalog-884c756c4-zdt64 1/1 Running 0 2m51s cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m32s
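After the hosted control plane pods are running, you can also check the overall status of the hosted cluster resource. A minimal check, assuming the cluster was created in the namespace used in the previous steps: USD oc get hostedclusters -n <hosted_cluster_namespace> The PROGRESS and AVAILABLE columns show whether the hosted control plane has finished rolling out and is reachable.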
[ "oc get managedclusters local-cluster", "aws s3api create-bucket --bucket <bucket_name> \\ 1 --create-bucket-configuration LocationConstraint=<region> \\ 2 --region <region> 3", "aws s3api delete-public-access-block --bucket <bucket_name> 1", "echo '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": \"*\", \"Action\": \"s3:GetObject\", \"Resource\": \"arn:aws:s3:::<bucket_name>/*\" 1 } ] }' | envsubst > policy.json", "aws s3api put-bucket-policy --bucket <bucket_name> \\ 1 --policy file://policy.json", "oc create secret generic <secret_name> --from-file=credentials=<path>/.aws/credentials --from-literal=bucket=<s3_bucket> --from-literal=region=<region> -n local-cluster", "oc label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster cluster.open-cluster-management.io/backup=true", "aws route53 create-hosted-zone --name <basedomain> \\ 1 --caller-reference USD(whoami)-USD(date --rfc-3339=date)", "aws sts get-caller-identity --query \"Arn\" --output text", "arn:aws:iam::1234567890:user/<aws_username>", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"<arn>\" 1 }, \"Action\": \"sts:AssumeRole\" } ] }", "aws iam create-role --role-name <name> \\ 1 --assume-role-policy-document file://<file_name>.json \\ 2 --query \"Role.Arn\"", "arn:aws:iam::820196288204:role/myrole", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"EC2\", \"Effect\": \"Allow\", \"Action\": [ \"ec2:CreateDhcpOptions\", \"ec2:DeleteSubnet\", \"ec2:ReplaceRouteTableAssociation\", \"ec2:DescribeAddresses\", \"ec2:DescribeInstances\", \"ec2:DeleteVpcEndpoints\", \"ec2:CreateNatGateway\", \"ec2:CreateVpc\", \"ec2:DescribeDhcpOptions\", \"ec2:AttachInternetGateway\", \"ec2:DeleteVpcEndpointServiceConfigurations\", \"ec2:DeleteRouteTable\", \"ec2:AssociateRouteTable\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeAvailabilityZones\", \"ec2:CreateRoute\", \"ec2:CreateInternetGateway\", \"ec2:RevokeSecurityGroupEgress\", \"ec2:ModifyVpcAttribute\", \"ec2:DeleteInternetGateway\", \"ec2:DescribeVpcEndpointConnections\", \"ec2:RejectVpcEndpointConnections\", \"ec2:DescribeRouteTables\", \"ec2:ReleaseAddress\", \"ec2:AssociateDhcpOptions\", \"ec2:TerminateInstances\", \"ec2:CreateTags\", \"ec2:DeleteRoute\", \"ec2:CreateRouteTable\", \"ec2:DetachInternetGateway\", \"ec2:DescribeVpcEndpointServiceConfigurations\", \"ec2:DescribeNatGateways\", \"ec2:DisassociateRouteTable\", \"ec2:AllocateAddress\", \"ec2:DescribeSecurityGroups\", \"ec2:RevokeSecurityGroupIngress\", \"ec2:CreateVpcEndpoint\", \"ec2:DescribeVpcs\", \"ec2:DeleteSecurityGroup\", \"ec2:DeleteDhcpOptions\", \"ec2:DeleteNatGateway\", \"ec2:DescribeVpcEndpoints\", \"ec2:DeleteVpc\", \"ec2:CreateSubnet\", \"ec2:DescribeSubnets\" ], \"Resource\": \"*\" }, { \"Sid\": \"ELB\", \"Effect\": \"Allow\", \"Action\": [ \"elasticloadbalancing:DeleteLoadBalancer\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DeleteTargetGroup\" ], \"Resource\": \"*\" }, { \"Sid\": \"IAMPassRole\", \"Effect\": \"Allow\", \"Action\": \"iam:PassRole\", \"Resource\": \"arn:*:iam::*:role/*-worker-role\", \"Condition\": { \"ForAnyValue:StringEqualsIfExists\": { \"iam:PassedToService\": \"ec2.amazonaws.com\" } } }, { \"Sid\": \"IAM\", \"Effect\": \"Allow\", \"Action\": [ \"iam:CreateInstanceProfile\", \"iam:DeleteInstanceProfile\", \"iam:GetRole\", \"iam:UpdateAssumeRolePolicy\", \"iam:GetInstanceProfile\", 
\"iam:TagRole\", \"iam:RemoveRoleFromInstanceProfile\", \"iam:CreateRole\", \"iam:DeleteRole\", \"iam:PutRolePolicy\", \"iam:AddRoleToInstanceProfile\", \"iam:CreateOpenIDConnectProvider\", \"iam:ListOpenIDConnectProviders\", \"iam:DeleteRolePolicy\", \"iam:UpdateRole\", \"iam:DeleteOpenIDConnectProvider\", \"iam:GetRolePolicy\" ], \"Resource\": \"*\" }, { \"Sid\": \"Route53\", \"Effect\": \"Allow\", \"Action\": [ \"route53:ListHostedZonesByVPC\", \"route53:CreateHostedZone\", \"route53:ListHostedZones\", \"route53:ChangeResourceRecordSets\", \"route53:ListResourceRecordSets\", \"route53:DeleteHostedZone\", \"route53:AssociateVPCWithHostedZone\", \"route53:ListHostedZonesByName\" ], \"Resource\": \"*\" }, { \"Sid\": \"S3\", \"Effect\": \"Allow\", \"Action\": [ \"s3:ListAllMyBuckets\", \"s3:ListBucket\", \"s3:DeleteObject\", \"s3:DeleteBucket\" ], \"Resource\": \"*\" } ] }", "aws iam put-role-policy --role-name <role_name> \\ 1 --policy-name <policy_name> \\ 2 --policy-document file://policy.json 3", "aws sts get-session-token --output json > sts-creds.json", "{ \"Credentials\": { \"AccessKeyId\": \"ASIA1443CE0GN2ATHWJU\", \"SecretAccessKey\": \"XFLN7cZ5AP0d66KhyI4gd8Mu0UCQEDN9cfelW1\", \"SessionToken\": \"IQoJb3JpZ2luX2VjEEAaCXVzLWVhc3QtMiJHMEUCIDyipkM7oPKBHiGeI0pMnXst1gDLfs/TvfskXseKCbshAiEAnl1l/Html7Iq9AEIqf////KQburfkq4A3TuppHMr/9j1TgCj1z83SO261bHqlJUazKoy7vBFR/a6LHt55iMBqtKPEsIWjBgj/jSdRJI3j4Gyk1//luKDytcfF/tb9YrxDTPLrACS1lqAxSIFZ82I/jDhbDs=\", \"Expiration\": \"2025-05-16T04:19:32+00:00\" } }", "oc create secret generic <secret_name> --from-literal=aws-access-key-id=<aws_access_key_id> --from-literal=aws-secret-access-key=<aws_secret_access_key> --from-literal=region=<region> -n local-cluster", "oc label secret hypershift-operator-private-link-credentials -n local-cluster cluster.open-cluster-management.io/backup=\"\"", "oc create secret generic <secret_name> --from-literal=provider=aws --from-literal=domain-filter=<domain_name> --from-file=credentials=<path_to_aws_credentials_file> -n local-cluster", "oc label secret hypershift-operator-external-dns-credentials -n local-cluster cluster.open-cluster-management.io/backup=\"\"", "dig +short test.user-dest-public.aws.kerberos.com", "192.168.1.1", "hcp create cluster aws --name=<hosted_cluster_name> --endpoint-access=PublicAndPrivate --external-dns-domain=<public_hosted_zone> ... 
1", "platform: aws: endpointAccess: PublicAndPrivate services: - service: APIServer servicePublishingStrategy: route: hostname: api-example.service-provider-domain.com type: Route - service: OAuthServer servicePublishingStrategy: route: hostname: oauth-example.service-provider-domain.com type: Route - service: Konnectivity servicePublishingStrategy: type: Route - service: Ignition servicePublishingStrategy: type: Route", "export KUBECONFIG=<path_to_management_cluster_kubeconfig>", "oc get pod -n hypershift -lapp=external-dns", "NAME READY STATUS RESTARTS AGE external-dns-7c89788c69-rn8gp 1/1 Running 0 40s", "hcp create cluster aws --role-arn <arn_role> \\ 1 --instance-type <instance_type> \\ 2 --region <region> \\ 3 --auto-repair --generate-ssh --name <hosted_cluster_name> \\ 4 --namespace clusters --base-domain <service_consumer_domain> \\ 5 --node-pool-replicas <node_replica_count> \\ 6 --pull-secret <path_to_your_pull_secret> \\ 7 --release-image quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \\ 8 --external-dns-domain=<service_provider_domain> \\ 9 --endpoint-access=PublicAndPrivate 10 --sts-creds <path_to_sts_credential_file> 11", "hcp create cluster aws --name <hosted_cluster_name> \\ 1 --infra-id <infra_id> \\ 2 --base-domain <basedomain> \\ 3 --sts-creds <path_to_sts_credential_file> \\ 4 --pull-secret <path_to_pull_secret> \\ 5 --region <region> \\ 6 --generate-ssh --node-pool-replicas <node_pool_replica_count> \\ 7 --namespace <hosted_cluster_namespace> \\ 8 --role-arn <role_name> \\ 9 --render-into <file_name>.yaml 10", "apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: name: <nodepool_name> 1 spec: platform: aws: placement: tenancy: \"default\" 2", "oc get hostedclusters -n <hosted_cluster_namespace>", "oc get nodepools --namespace <hosted_cluster_namespace>", "oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes", "hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig", "oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes", "hcp create cluster aws --name <hosted_cluster_name> \\ 1 --node-pool-replicas=<node_pool_replica_count> \\ 2 --base-domain <basedomain> \\ 3 --pull-secret <path_to_pull_secret> \\ 4 --role-arn <arn_role> \\ 5 --region <region> \\ 6 --zones <zones> \\ 7 --sts-creds <path_to_sts_credential_file> 8", "hcp create cluster aws --name <hosted_cluster_name> \\ 1 --node-pool-replicas <node_pool_replica_count> \\ 2 --base-domain <basedomain> \\ 3 --pull-secret <path_to_pull_secret> \\ 4 --sts-creds <path_to_sts_credential_file> \\ 5 --region <region> \\ 6 --role-arn <arn_role> 7", "hcp create cluster aws --name <hosted_cluster_name> \\ 1 --node-pool-replicas <node_pool_replica_count> \\ 2 --base-domain <basedomain> \\ 3 --pull-secret <path_to_pull_secret> \\ 4 --sts-creds <path_to_sts_credential_file> \\ 5 --region <region> \\ 6 --release-image quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \\ 7 --role-arn <role_name> 8", "hcp create nodepool aws --cluster-name <hosted_cluster_name> \\ 1 --name <node_pool_name> \\ 2 --node-count <node_pool_replica_count> \\ 3 --arch <architecture> 4", "hcp create cluster aws --name <hosted_cluster_name> \\ 1 --node-pool-replicas=<node_pool_replica_count> \\ 2 --base-domain <basedomain> \\ 3 --pull-secret <path_to_pull_secret> \\ 4 --sts-creds <path_to_sts_credential_file> \\ 5 --region <region> \\ 6 --endpoint-access Private \\ 7 --role-arn <role_name> 8", "aws ec2 describe-instances 
--filter=\"Name=tag:kubernetes.io/cluster/<infra_id>,Values=owned\" | jq '.Reservations[] | .Instances[] | select(.PublicDnsName==\"\") | .PrivateIpAddress'", "hcp create kubeconfig > <hosted_cluster_kubeconfig>", "ssh -o ProxyCommand=\"ssh ec2-user@<bastion_ip> -W %h:%p\" core@<node_ip>", "mv <path_to_kubeconfig_file> <new_file_name>", "export KUBECONFIG=<path_to_kubeconfig_file>", "oc get clusteroperators clusterversion", "oc get managedclusters local-cluster", "api.example.krnl.es. IN A 192.168.122.20 api.example.krnl.es. IN A 192.168.122.21 api.example.krnl.es. IN A 192.168.122.22 api-int.example.krnl.es. IN A 192.168.122.20 api-int.example.krnl.es. IN A 192.168.122.21 api-int.example.krnl.es. IN A 192.168.122.22 `*`.apps.example.krnl.es. IN A 192.168.122.23", "api.example.krnl.es. IN A 2620:52:0:1306::5 api.example.krnl.es. IN A 2620:52:0:1306::6 api.example.krnl.es. IN A 2620:52:0:1306::7 api-int.example.krnl.es. IN A 2620:52:0:1306::5 api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. IN A 2620:52:0:1306::10", "host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9]", "oc create ns <hosted_cluster_namespace>-<hosted_cluster_name>", "hcp create cluster agent --name=<hosted_cluster_name> \\ 1 --pull-secret=<path_to_pull_secret> \\ 2 --agent-namespace=<hosted_control_plane_namespace> \\ 3 --base-domain=<basedomain> \\ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \\ 5 --etcd-storage-class=<etcd_storage_class> \\ 6 --ssh-key <path_to_ssh_public_key> \\ 7 --namespace <hosted_cluster_namespace> \\ 8 --control-plane-availability-policy HighlyAvailable \\ 9 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \\ 10 --node-pool-replicas <node_pool_replica_count> 11", "oc -n <hosted_control_plane_namespace> get pods", "NAME READY STATUS RESTARTS AGE capi-provider-7dcf5fc4c4-nr9sq 1/1 Running 0 4m32s catalog-operator-6cd867cc7-phb2q 2/2 Running 0 2m50s certified-operators-catalog-884c756c4-zdt64 1/1 Running 0 2m51s cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m32s", "- mirrors: - brew.registry.redhat.io source: registry.redhat.io - mirrors: - brew.registry.redhat.io source: registry.stage.redhat.io - mirrors: - brew.registry.redhat.io source: registry-proxy.engineering.redhat.com", "hcp create cluster agent --name=<hosted_cluster_name> \\ 1 --pull-secret=<path_to_pull_secret> \\ 2 --agent-namespace=<hosted_control_plane_namespace> \\ 3 --base-domain=<basedomain> \\ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \\ 5 --image-content-sources icsp.yaml \\ 6 --ssh-key 
<path_to_ssh_key> \\ 7 --namespace <hosted_cluster_namespace> \\ 8 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> 9", "oc extract -n <hosted-control-plane-namespace> secret/admin-kubeconfig --to=- > kubeconfig-<hosted-cluster-name>", "oc get co --kubeconfig=kubeconfig-<hosted-cluster-name>", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console 4.10.26 True False False 2m38s dns 4.10.26 True False False 2m52s image-registry 4.10.26 True False False 2m8s ingress 4.10.26 True False False 22m", "oc get pods -A --kubeconfig=kubeconfig-<hosted-cluster-name>", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system konnectivity-agent-khlqv 0/1 Running 0 3m52s openshift-cluster-node-tuning-operator tuned-dhw5p 1/1 Running 0 109s openshift-cluster-storage-operator cluster-storage-operator-5f784969f5-vwzgz 1/1 Running 1 (113s ago) 20m openshift-cluster-storage-operator csi-snapshot-controller-6b7687b7d9-7nrfw 1/1 Running 0 3m8s openshift-console console-5cbf6c7969-6gk6z 1/1 Running 0 119s openshift-console downloads-7bcd756565-6wj5j 1/1 Running 0 4m3s openshift-dns-operator dns-operator-77d755cd8c-xjfbn 2/2 Running 0 21m openshift-dns dns-default-kfqnh 2/2 Running 0 113s", "oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ \"op\": \"add\", \"path\": \"/spec/routeAdmission\", \"value\": {wildcardPolicy: \"WildcardsAllowed\"}}]'", "oc patch storageclass ocs-storagecluster-ceph-rbd -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'", "oc get managedclusters local-cluster", "- lastTransitionTime: \"2024-10-08T15:38:19Z\" message: | 3 of 3 machines are not live migratable Machine user-np-ngst4-gw2hz: DisksNotLiveMigratable: user-np-ngst4-gw2hz is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-gw2hz-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode) Machine user-np-ngst4-npq7x: DisksNotLiveMigratable: user-np-ngst4-npq7x is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-npq7x-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode) Machine user-np-ngst4-q5nkb: DisksNotLiveMigratable: user-np-ngst4-q5nkb is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-q5nkb-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode) observedGeneration: 1 reason: DisksNotLiveMigratable status: \"False\" type: KubeVirtNodesLiveMigratable", "- lastTransitionTime: \"2024-10-08T15:38:19Z\" message: \"All is well\" observedGeneration: 1 reason: AsExpected status: \"True\" type: KubeVirtNodesLiveMigratable", "hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <node_pool_replica_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <value_for_memory> \\ 4 --cores <value_for_cpu> \\ 5 --etcd-storage-class=<etcd_storage_class> 6", "oc -n clusters-<hosted-cluster-name> get pods", "NAME READY STATUS RESTARTS AGE capi-provider-5cc7b74f47-n5gkr 1/1 Running 0 3m catalog-operator-5f799567b7-fd6jw 2/2 Running 0 69s certified-operators-catalog-784b9899f9-mrp6p 1/1 Running 0 66s cluster-api-6bbc867966-l4dwl 1/1 Running 0 66s . . . 
redhat-operators-catalog-9d5fd4d44-z8qqk 1/1 Running 0 66s", "oc get --namespace clusters hostedclusters", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available", "hcp create cluster kubevirt --name <hosted-cluster-name> \\ 1 --node-pool-replicas <worker-count> \\ 2 --pull-secret <path-to-pull-secret> \\ 3 --memory <value-for-memory> \\ 4 --cores <value-for-cpu> \\ 5 --infra-namespace=<hosted-cluster-namespace>-<hosted-cluster-name> \\ 6 --infra-kubeconfig-file=<path-to-external-infra-kubeconfig> 7", "*.apps.mgmt-cluster.example.com", "*.apps.guest.apps.mgmt-cluster.example.com", "oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ \"op\": \"add\", \"path\": \"/spec/routeAdmission\", \"value\": {wildcardPolicy: \"WildcardsAllowed\"}}]'", "hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <worker_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <value_for_memory> \\ 4 --cores <value_for_cpu> \\ 5 --base-domain <basedomain> 6", "oc get --namespace clusters hostedclusters", "NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example example-admin-kubeconfig Partial True False The hosted control plane is available", "hcp create kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig", "oc --kubeconfig <hosted_cluster_name>-kubeconfig get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console <4.x.0> False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get \"https://console-openshift-console.apps.example.hypershift.lab\": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host ingress <4.x.0> True False True 28m The \"default\" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)", "oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name==\"http\")].nodePort}'", "oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}'", "apply -f - apiVersion: v1 kind: Service metadata: labels: app: <hosted_cluster_name> name: <hosted_cluster_name>-apps namespace: clusters-<hosted_cluster_name> spec: ports: - name: https-443 port: 443 protocol: TCP targetPort: <https_node_port> 1 - name: http-80 port: 80 protocol: TCP targetPort: <http-node-port> 2 selector: kubevirt.io: virt-launcher type: LoadBalancer", "oc -n clusters-<hosted_cluster_name> get service <hosted-cluster-name>-apps -o jsonpath='{.status.loadBalancer.ingress[0].ip}'", "192.168.20.30", "*.apps.<hosted_cluster_name\\>.<base_domain\\>.", "dig +short test.apps.example.hypershift.lab 192.168.20.30", "oc get --namespace clusters hostedclusters", "NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system", "oc apply -f configure-metallb.yaml", "metallb.metallb.io/metallb created", "apiVersion: 
metallb.io/v1beta1 kind: IPAddressPool metadata: name: metallb namespace: metallb-system spec: addresses: - 192.168.216.32-192.168.216.122 1", "oc apply -f create-ip-address-pool.yaml", "ipaddresspool.metallb.io/metallb created", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - metallb", "oc apply -f l2advertisement.yaml", "l2advertisement.metallb.io/metallb created", "hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <worker_node_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <memory> \\ 4 --cores <cpu> \\ 5 --additional-network name:<namespace/name> \\ 6 --additional-network name:<namespace/name>", "hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <worker_node_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <memory> \\ 4 --cores <cpu> \\ 5 --attach-default-network false \\ 6 --additional-network name:<namespace>/<network_name> 7", "hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <worker_node_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <memory> \\ 4 --cores <cpu> \\ 5 --qos-class Guaranteed 6", "hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <worker_node_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <memory> \\ 4 --cores <cpu> \\ 5 --vm-node-selector <label_key>=<label_value>,<label_key>=<label_value> 6", "NODEPOOL_NAME=USD{CLUSTER_NAME}-work NODEPOOL_REPLICAS=5 oc scale nodepool/USDNODEPOOL_NAME --namespace clusters --replicas=USDNODEPOOL_REPLICAS", "oc --kubeconfig USDCLUSTER_NAME-kubeconfig get nodes", "NAME STATUS ROLES AGE VERSION example-9jvnf Ready worker 97s v1.27.4+18eadca example-n6prw Ready worker 116m v1.27.4+18eadca example-nc6g4 Ready worker 117m v1.27.4+18eadca example-thp29 Ready worker 4m17s v1.27.4+18eadca example-twxns Ready worker 88s v1.27.4+18eadca", "export NODEPOOL_NAME=USD{CLUSTER_NAME}-extra-cpu export WORKER_COUNT=\"2\" export MEM=\"6Gi\" export CPU=\"4\" export DISK=\"16\" hcp create nodepool kubevirt --cluster-name USDCLUSTER_NAME --name USDNODEPOOL_NAME --node-count USDWORKER_COUNT --memory USDMEM --cores USDCPU --root-volume-size USDDISK", "oc get nodepools --namespace clusters", "NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE example example 5 5 False False <4.x.0> example-extra-cpu example 2 False False True True Minimum availability requires 2 replicas, current 0 available", "oc --kubeconfig USDCLUSTER_NAME-kubeconfig get nodes", "NAME STATUS ROLES AGE VERSION example-9jvnf Ready worker 97s v1.27.4+18eadca example-n6prw Ready worker 116m v1.27.4+18eadca example-nc6g4 Ready worker 117m v1.27.4+18eadca example-thp29 Ready worker 4m17s v1.27.4+18eadca example-twxns Ready worker 88s v1.27.4+18eadca example-extra-cpu-zh9l5 Ready worker 2m6s v1.27.4+18eadca example-extra-cpu-zr8mj Ready worker 102s v1.27.4+18eadca", "oc get nodepools --namespace clusters", "NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE example example 5 5 False False <4.x.0> example-extra-cpu example 2 2 False False <4.x.0>", "oc get --namespace clusters hostedclusters <hosted_cluster_name>", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters example 4.12.2 example-admin-kubeconfig Completed True False The hosted control plane is available", "hcp create 
kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig", "oc get co --kubeconfig=<hosted_cluster_name>-kubeconfig", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console 4.12.2 True False False 2m38s csi-snapshot-controller 4.12.2 True False False 4m3s dns 4.12.2 True False False 2m52s image-registry 4.12.2 True False False 2m8s ingress 4.12.2 True False False 22m kube-apiserver 4.12.2 True False False 23m kube-controller-manager 4.12.2 True False False 23m kube-scheduler 4.12.2 True False False 23m kube-storage-version-migrator 4.12.2 True False False 4m52s monitoring 4.12.2 True False False 69s network 4.12.2 True False False 4m3s node-tuning 4.12.2 True False False 2m22s openshift-apiserver 4.12.2 True False False 23m openshift-controller-manager 4.12.2 True False False 23m openshift-samples 4.12.2 True False False 2m15s operator-lifecycle-manager 4.12.2 True False False 22m operator-lifecycle-manager-catalog 4.12.2 True False False 23m operator-lifecycle-manager-packageserver 4.12.2 True False False 23m service-ca 4.12.2 True False False 4m41s storage 4.12.2 True False False 4m43s", "oc get managedclusters local-cluster", "api.example.krnl.es. IN A 192.168.122.20 api.example.krnl.es. IN A 192.168.122.21 api.example.krnl.es. IN A 192.168.122.22 api-int.example.krnl.es. IN A 192.168.122.20 api-int.example.krnl.es. IN A 192.168.122.21 api-int.example.krnl.es. IN A 192.168.122.22 `*`.apps.example.krnl.es. IN A 192.168.122.23", "api.example.krnl.es. IN A 2620:52:0:1306::5 api.example.krnl.es. IN A 2620:52:0:1306::6 api.example.krnl.es. IN A 2620:52:0:1306::7 api-int.example.krnl.es. IN A 2620:52:0:1306::5 api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. 
IN A 2620:52:0:1306::10", "host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9]", "oc create ns <hosted_cluster_namespace>-<hosted_cluster_name> 1", "hcp create cluster agent --name=<hosted_cluster_name> \\ 1 --pull-secret=<path_to_pull_secret> \\ 2 --agent-namespace=<hosted_control_plane_namespace> \\ 3 --base-domain=<basedomain> \\ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \\ 5 --etcd-storage-class=<etcd_storage_class> \\ 6 --ssh-key <path_to_ssh_key> \\ 7 --namespace <hosted_cluster_namespace> \\ 8 --control-plane-availability-policy HighlyAvailable \\ 9 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release> \\ 10 --node-pool-replicas <node_pool_replica_count> 11", "oc -n <hosted_control_plane_namespace> get pods", "NAME READY STATUS RESTARTS AGE catalog-operator-6cd867cc7-phb2q 2/2 Running 0 2m50s control-plane-operator-f6b4c8465-4k5dh 1/1 Running 0 4m32s", "- mirrors: - brew.registry.redhat.io source: registry.redhat.io - mirrors: - brew.registry.redhat.io source: registry.stage.redhat.io - mirrors: - brew.registry.redhat.io source: registry-proxy.engineering.redhat.com", "hcp create cluster agent --name=<hosted_cluster_name> \\ 1 --pull-secret=<path_to_pull_secret> \\ 2 --agent-namespace=<hosted_control_plane_namespace> \\ 3 --base-domain=<basedomain> \\ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \\ 5 --image-content-sources icsp.yaml \\ 6 --ssh-key <path_to_ssh_key> \\ 7 --namespace <hosted_cluster_namespace> \\ 8 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> 9", "oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=- > kubeconfig-<hosted_cluster_name>", "oc get co --kubeconfig=kubeconfig-<hosted_cluster_name>", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console 4.10.26 True False False 2m38s csi-snapshot-controller 4.10.26 True False False 4m3s dns 4.10.26 True False False 2m52s", "oc get pods -A --kubeconfig=kubeconfig-<hosted_cluster_name>", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system konnectivity-agent-khlqv 0/1 Running 0 3m52s openshift-cluster-samples-operator cluster-samples-operator-6b5bcb9dff-kpnbc 2/2 Running 0 20m openshift-monitoring alertmanager-main-0 6/6 Running 0 100s openshift-monitoring openshift-state-metrics-677b9fb74f-qqp6g 3/3 Running 0 104s", "oc get managedclusters local-cluster", "cat /var/named/<example.krnl.es.zone>", "TTL 900 @ IN SOA bastion.example.krnl.es.com. hostmaster.example.krnl.es.com. ( 2019062002 1D 1H 1W 3H ) IN NS bastion.example.krnl.es.com. 
; ; api IN A 1xx.2x.2xx.1xx 1 api-int IN A 1xx.2x.2xx.1xx ; ; *.apps IN A 1xx.2x.2xx.1xx ; ;EOF", "compute-0 IN A 1xx.2x.2xx.1yy compute-1 IN A 1xx.2x.2xx.1yy", "oc create ns <hosted_cluster_namespace>-<hosted_cluster_name>", "hcp create cluster agent --name=<hosted_cluster_name> \\ 1 --pull-secret=<path_to_pull_secret> \\ 2 --agent-namespace=<hosted_control_plane_namespace> \\ 3 --base-domain=<basedomain> \\ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \\ 5 --etcd-storage-class=<etcd_storage_class> \\ 6 --ssh-key <path_to_ssh_public_key> \\ 7 --namespace <hosted_cluster_namespace> \\ 8 --control-plane-availability-policy HighlyAvailable \\ 9 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \\ 10 --node-pool-replicas <node_pool_replica_count> 11", "oc -n <hosted_control_plane_namespace> get pods", "NAME READY STATUS RESTARTS AGE capi-provider-7dcf5fc4c4-nr9sq 1/1 Running 0 4m32s catalog-operator-6cd867cc7-phb2q 2/2 Running 0 2m50s certified-operators-catalog-884c756c4-zdt64 1/1 Running 0 2m51s cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m32s", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <hosted_cluster_name> namespace: <hosted_control_plane_namespace> spec: cpuArchitecture: s390x pullSecretRef: name: pull-secret sshAuthorizedKey: <ssh_public_key>", "oc apply -f infraenv-config.yaml", "oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json", "virt-install --name \"<vm_name>\" \\ 1 --autostart --ram=16384 --cpu host --vcpus=4 --location \"<path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img\" \\ 2 --disk <qcow_image_path> \\ 3 --network network:macvtap-net,mac=<mac_address> \\ 4 --graphics none --noautoconsole --wait=-1 --extra-args \"rd.neednet=1 nameserver=<nameserver> coreos.live.rootfs_url=http://<http_server>/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8\" 5", "virt-install --name \"<vm_name>\" \\ 1 --autostart --memory=16384 --cpu host --vcpus=4 --network network:macvtap-net,mac=<mac_address> \\ 2 --cdrom \"<path_to_image.iso>\" \\ 3 --disk <qcow_image_path> --graphics none --noautoconsole --os-variant <os_version> \\ 4 --wait=-1", "rd.neednet=1 cio_ignore=all,!condev console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.persistent-kargs=console=ttysclp0 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \\ 2 rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> \\ 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 \\ 4 random.trust_cpu=on rd.luks.options=discard", "curl -k -L -o generic.ins \"< url for ins-file >\"", "https://.../boot-artifacts/ins-file?arch=s390x&version=4.17.0", "https://..../s390x-initrd-addrsize?api_key=<api-key>&arch=s390x&version=4.17.0", "rd.neednet=1 cio_ignore=all,!condev console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.persistent-kargs=console=ttysclp0 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \\ 2 rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> \\ 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 \\ 4", "vmur pun -r -u -N kernel.img USDINSTALLERKERNELLOCATION/<image name>", "vmur pun -r -u 
-N generic.parm USDPARMFILELOCATION/paramfilename", "vmur pun -r -u -N initrd.img USDINSTALLERINITRAMFSLOCATION/<image name>", "cp ipl c", "oc -n <hosted_control_plane_namespace> get agents", "NAME CLUSTER APPROVED ROLE STAGE 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d auto-assign 5e498cd3-542c-e54f-0c58-ed43e28b568a auto-assign", "oc -n <hosted_control_plane_namespace> patch agent 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d -p '{\"spec\":{\"installation_disk_id\":\"/dev/sda\",\"approved\":true,\"hostname\":\"worker-zvm-0.hostedn.example.com\"}}' \\ 1 --type merge", "oc -n <hosted_control_plane_namespace> get agents", "NAME CLUSTER APPROVED ROLE STAGE 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d true auto-assign 5e498cd3-542c-e54f-0c58-ed43e28b568a true auto-assign", "oc -n <clusters_namespace> scale nodepool <nodepool_name> --replicas 2", "oc -n <hosted_control_plane_namespace> get agent -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\\.openshift\\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{\"\\n\"}{end}'", "BMH: Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d State: known-unbound BMH: Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a State: insufficient", "oc -n <hosted_control_plane_namespace> get agent", "NAME CLUSTER APPROVED ROLE STAGE 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d hosted-forwarder true auto-assign 5e498cd3-542c-e54f-0c58-ed43e28b568a true auto-assign da503cf1-a347-44f2-875c-4960ddb04091 hosted-forwarder true auto-assign", "hcp create kubeconfig --namespace <clusters_namespace> --name <hosted_cluster_namespace> > <hosted_cluster_name>.kubeconfig", "oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes", "NAME STATUS ROLES AGE VERSION worker-zvm-0.hostedn.example.com Ready worker 5m41s v1.24.0+3882f8f worker-zvm-1.hostedn.example.com Ready worker 6m3s v1.24.0+3882f8f", "oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION hosted-forwarder-79558597ff-5tbqp hosted-forwarder-crqq5 worker-zvm-0.hostedn.example.com agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d Running 41h 4.15.0 hosted-forwarder-79558597ff-lfjfk hosted-forwarder-crqq5 worker-zvm-1.hostedn.example.com agent://5e498cd3-542c-e54f-0c58-ed43e28b568a Running 41h 4.15.0", "oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.15.0-ec.2 True False 40h Cluster version is 4.15.0-ec.2", "oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators", "oc get managedclusters local-cluster", "cat /var/named/<example.krnl.es.zone>", "TTL 900 @ IN SOA bastion.example.krnl.es.com. hostmaster.example.krnl.es.com. ( 2019062002 1D 1H 1W 3H ) IN NS bastion.example.krnl.es.com. 
; ; api IN A 1xx.2x.2xx.1xx 1 api-int IN A 1xx.2x.2xx.1xx ; ; *.apps.<hosted-cluster-name>.<basedomain> IN A 1xx.2x.2xx.1xx ; ;EOF", "compute-0 IN A 1xx.2x.2xx.1yy compute-1 IN A 1xx.2x.2xx.1yy", "oc create ns <hosted_cluster_namespace>-<hosted_cluster_name>", "hcp create cluster agent --name=<hosted_cluster_name> \\ 1 --pull-secret=<path_to_pull_secret> \\ 2 --agent-namespace=<hosted_control_plane_namespace> \\ 3 --base-domain=<basedomain> \\ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \\ 5 --etcd-storage-class=<etcd_storage_class> \\ 6 --ssh-key <path_to_ssh_public_key> \\ 7 --namespace <hosted_cluster_namespace> \\ 8 --control-plane-availability-policy HighlyAvailable \\ 9 --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \\ 10 --node-pool-replicas <node_pool_replica_count> 11", "oc -n <hosted_control_plane_namespace> get pods", "NAME READY STATUS RESTARTS AGE capi-provider-7dcf5fc4c4-nr9sq 1/1 Running 0 4m32s catalog-operator-6cd867cc7-phb2q 2/2 Running 0 2m50s certified-operators-catalog-884c756c4-zdt64 1/1 Running 0 2m51s cluster-api-f75d86f8c-56wfz 1/1 Running 0 4m32s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/deploying-hosted-control-planes
5.9. Port Forwarding
5.9. Port Forwarding Using firewalld , you can set up port redirection so that any incoming traffic that reaches a certain port on your system is delivered to another internal port of your choice or to an external port on another machine. 5.9.1. Adding a Port to Redirect Before you redirect traffic from one port to another port, or another address, you need to know three things: which port the packets arrive at, what protocol is used, and where you want to redirect them. To redirect a port to another port: To redirect a port to another port at a different IP address: Add the port to be forwarded: Enable masquerade: Example 5.1. Redirecting TCP Port 80 to Port 88 on the Same Machine To redirect the port: Redirect port 80 to port 88 for TCP traffic: Make the new settings persistent: Check that the port is redirected: 5.9.2. Removing a Redirected Port To remove a redirected port: To remove a forwarded port redirected to a different address: Remove the forwarded port: Disable masquerade: Note Redirecting ports using this method only works for IPv4-based traffic. To set up redirection for IPv6, you need to use rich rules. For more information, see Section 5.15, "Configuring Complex Firewall Rules with the "Rich Language" Syntax" . To redirect to an external system, it is necessary to enable masquerading. For more information, see Section 5.10, "Configuring IP Address Masquerading" . Example 5.2. Removing TCP Port 80 forwarded to Port 88 on the Same Machine To remove the port redirection: List redirected ports: Remove the redirected port from the firewall: Make the new settings persistent:
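As a sketch of the rich-rule approach mentioned in the note in Section 5.9.2 (the protocol and ports shown mirror Example 5.1 and are illustrative assumptions, not part of the original examples), an IPv6 redirection of port 80 to port 88 on the same machine could look like this:
~]# firewall-cmd --add-rich-rule='rule family="ipv6" forward-port port="80" protocol="tcp" to-port="88"'
~]# firewall-cmd --runtime-to-permanent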
[ "~]# firewall-cmd --add-forward-port=port= port-number :proto= tcp|udp|sctp|dccp :toport= port-number", "~]# firewall-cmd --add-forward-port=port= port-number :proto= tcp|udp :toport= port-number :toaddr= IP", "~]# firewall-cmd --add-masquerade", "~]# firewall-cmd --add-forward-port=port=80:proto=tcp:toport=88", "~]# firewall-cmd --runtime-to-permanent", "~]# firewall-cmd --list-all", "~]# firewall-cmd --remove-forward-port=port= port-number :proto=<tcp|udp>:toport= port-number :toaddr=<IP>", "~]# firewall-cmd --remove-forward-port=port= port-number :proto=<tcp|udp>:toport= port-number :toaddr=<IP>", "~]# firewall-cmd --remove-masquerade", "~]# firewall-cmd --list-forward-ports port=80:proto=tcp:toport=88:toaddr=", "~]# firewall-cmd --remove-forward-port=port=80:proto=tcp:toport=88:toaddr=", "~]# firewall-cmd --runtime-to-permanent" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-port_forwarding
Chapter 2. Security enhancements
Chapter 2. Security enhancements The following sections provide some suggestions to harden the security of your overcloud. 2.1. Using secure root user access The overcloud image automatically contains hardened security for the root user. For example, each deployed overcloud node automatically disables direct SSH access to the root user. You can still access the root user on overcloud nodes. Each overcloud node has a tripleo-admin user account. This user account contains the undercloud public SSH key, which provides SSH access without a password from the undercloud to the overcloud node. Prerequisites You have an installed Red Hat OpenStack Platform director environment. You are logged into the director as stack. Procedure On the undercloud node, log in to an overcloud node through SSH as the tripleo-admin user. Switch to the root user with sudo -i . 2.2. Rotating service account passwords You can rotate service passwords for security or compliance purposes. Use the rotate-passwords.yaml ansible playbook to complete this task. Procedure Create a backup of your current passwords file: Run the rotate-passwords.yaml ansible playbook: Example output 2.3. Adding services to the overcloud firewall When you deploy Red Hat OpenStack Platform, each core service is deployed with a default set of firewall rules on each overcloud node. You can use the ExtraFirewallRules parameter to create rules to open ports for additional services, or create rules to restrict services. Each rule name becomes the comment for the respective iptables rule. Each rule name starts with a three-digit prefix to help Puppet order the rules in the final iptables file. The default Red Hat OpenStack Platform rules use prefixes in the 000 to 200 range. When you create rules for new services, prefix the name with a three-digit number higher than 200. Procedure Use a string to define each rule name under the ExtraFirewallRules parameter. You can use the following parameters under the rule name to define the rule: dport: The destination port associated to the rule. proto: The protocol associated to the rule. Defaults to tcp . action: The action policy associated to the rule. Defaults to accept . source: The source IP address associated to the rule. The following example shows how to use rules to open additional ports for custom applications: Note When you do not set the action parameter, the result is accept . You can only set the action parameter to drop , insert , or append . Include the ~/templates/firewall.yaml file in the openstack overcloud deploy command. Include all templates that are necessary for your deployment: 2.4. Removing services from the overcloud firewall You can use rules to restrict services. The number that you use in the rule name determines where in iptables the rule will be inserted. The following procedure shows how to restrict the rabbitmq service to the InternalAPI network. Procedure On a Controller node, find the number of the default iptables rule for rabbitmq : [tripleo-admin@overcloud-controller-2 ~]USD sudo iptables -L | grep rabbitmq ACCEPT tcp -- anywhere anywhere multiport dports vtr-emulator,epmd,amqp,25672,25673:25683 state NEW /* 109 rabbitmq-bundle ipv4 */ In an environment file under parameter_defaults , use the ExtraFirewallRules parameter to restrict rabbitmq to the InternalApi network. The rule is given a lower number than the default rabbitmq rule number of 109: Note When you do not set the action parameter, the result is accept .
You can only set the action parameter to drop , insert , or append . Include the ~/templates/firewall.yaml file in the openstack overcloud deploy command. Include all templates that are necessary for your deployment: 2.5. Changing the Simple Network Management Protocol (SNMP) strings Director provides a default read-only SNMP configuration for your overcloud. It is advisable to change the SNMP strings to mitigate the risk of unauthorized users learning about your network devices. Note When you configure the ExtraConfig interface with a string parameter, you must use the following syntax to ensure that heat and Hiera do not interpret the string as a Boolean value: '"<VALUE>"' . Set the following hieradata using the ExtraConfig hook in an environment file for your overcloud: SNMP traditional access control settings snmp::ro_community IPv4 read-only SNMP community string. The default value is public . snmp::ro_community6 IPv6 read-only SNMP community string. The default value is public . snmp::ro_network Network that is allowed to RO query the daemon. This value can be a string or an array. Default value is 127.0.0.1 . snmp::ro_network6 Network that is allowed to RO query the daemon with IPv6. This value can be a string or an array. The default value is ::1/128 . tripleo::profile::base::snmp::snmpd_config Array of lines to add to the snmpd.conf file as a safety valve. The default value is [] . See the SNMP Configuration File web page for all available options. For example: This changes the read-only SNMP community string on all nodes. SNMP view-based access control settings (VACM) snmp::com2sec An array of VACM com2sec mappings. Must provide SECNAME, SOURCE and COMMUNITY. snmp::com2sec6 An array of VACM com2sec6 mappings. Must provide SECNAME, SOURCE and COMMUNITY. For example: This changes the read-only SNMP community string on all nodes. For more information, see the snmpd.conf man page. 2.6. Using the Open vSwitch firewall You can configure security groups to use the Open vSwitch (OVS) firewall driver in Red Hat OpenStack Platform director. Use the NeutronOVSFirewallDriver parameter to specify the firewall driver that you want to use: iptables_hybrid - Configures the Networking service (neutron) to use the iptables/hybrid based implementation. openvswitch - Configures the Networking service to use the OVS firewall flow-based driver. The openvswitch firewall driver provides higher performance and reduces the number of interfaces and bridges used to connect guests to the project network. Important Multicast traffic is handled differently by the Open vSwitch (OVS) firewall driver than by the iptables firewall driver. With iptables, by default, VRRP traffic is denied, and you must enable VRRP in the security group rules for any VRRP traffic to reach an endpoint. With OVS, all ports share the same OpenFlow context, and multicast traffic cannot be processed individually per port. Because security groups do not apply to all ports (for example, the ports on a router), OVS uses the NORMAL action and forwards multicast traffic to all ports as specified by RFC 4541. Note The iptables_hybrid option is not compatible with OVS-DPDK. The openvswitch option is not compatible with OVS Hardware Offload. Configure the NeutronOVSFirewallDriver parameter in the network-environment.yaml file: NeutronOVSFirewallDriver: openvswitch NeutronOVSFirewallDriver : Configures the name of the firewall driver that you want to use when you implement security groups. Possible values depend on your system configuration.
Some examples are noop , openvswitch , and iptables_hybrid . The default value of an empty string results in a supported configuration.
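As an illustrative sketch only (the file name follows the network-environment.yaml example above; any environment file passed to the deployment works), the entry can be as small as a single parameter under parameter_defaults:
parameter_defaults:
  NeutronOVSFirewallDriver: openvswitch
Include this file with -e when you run the openstack overcloud deploy command, together with the other environment files that are necessary for your deployment.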
[ "cp overcloud-deploy/overcloud/overcloud-passwords.yaml overcloud-deploy/overcloud/overcloud-passwords.yaml.old", "ansible-playbook -i ./tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml /usr/share/ansible/tripleo-playbooks/rotate-passwords.yaml", "[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details PLAY [Rotate passwords] ************************************************************************************************************************************************************************************ TASK [Set passwords environment file path] ***************************************************************************************************************************************************************** ok: [undercloud-0] TASK [Rotate passwords] ************************************************************************************************************************************************************************************ changed: [undercloud-0] TASK [Create rotated password parameter fact] ************************************************************************************************************************************************************** ok: [undercloud-0] TASK [Update existing password environment file] *********************************************************************************************************************************************************** changed: [undercloud-0] PLAY RECAP ************************************************************************************************************************************************************************************************* undercloud-0 : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "cat > ~/templates/firewall.yaml <<EOF parameter_defaults: ExtraFirewallRules: '300 allow custom application 1': dport: 999 proto: udp '301 allow custom application 2': dport: 8081 proto: tcp EOF", "openstack overcloud deploy --templates / -e /home/stack/templates/firewall.yaml / .", "[tripleo-admin@overcloud-controller-2 ~]USD sudo iptables -L | grep rabbitmq ACCEPT tcp -- anywhere anywhere multiport dports vtr-emulator,epmd,amqp,25672,25673:25683 state NEW /* 109 rabbitmq-bundle ipv4 */", "cat > ~/templates/firewall.yaml <<EOF parameter_defaults: ExtraFirewallRules: '098 allow rabbit from internalapi network': dport: - 4369 - 5672 - 25672 proto: tcp source: 10.0.0.0/24 '099 drop other rabbit access': dport: - 4369 - 5672 - 25672 proto: tcp action: drop EOF", "openstack overcloud deploy --templates / -e /home/stack/templates/firewall.yaml / .", "parameter_defaults: ExtraConfig: snmp::ro_community: mysecurestring snmp::ro_community6: myv6securestring", "parameter_defaults: ExtraConfig: snmp::com2sec: [\"notConfigUser default mysecurestring\"] snmp::com2sec6: [\"notConfigUser default myv6securestring\"]", "NeutronOVSFirewallDriver: openvswitch" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_security-enhancements_security_and_hardening
Deploying Red Hat Satellite on Amazon Web Services
Deploying Red Hat Satellite on Amazon Web Services Red Hat Satellite 6.15 Deploy Satellite Server and Capsule on Amazon Web Services Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/deploying_red_hat_satellite_on_amazon_web_services/index
Chapter 98. DockerOutput schema reference
Chapter 98. DockerOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the DockerOutput type from ImageStreamOutput . It must have the value docker for the type DockerOutput . Property Property type Description image string The full name which should be used for tagging and pushing the newly built image. For example quay.io/my-organization/my-custom-connect:latest . Required. pushSecret string Container Registry Secret with the credentials for pushing the newly built image. additionalKanikoOptions string array Configures additional options which will be passed to the Kaniko executor when building the new Connect image. Allowed options are: --customPlatform, --custom-platform, --insecure, --insecure-pull, --insecure-registry, --log-format, --log-timestamp, --registry-mirror, --reproducible, --single-snapshot, --skip-tls-verify, --skip-tls-verify-pull, --skip-tls-verify-registry, --verbosity, --snapshotMode, --use-new-run, --registry-certificate, --registry-client-cert. These options will be used only on Kubernetes, where the Kaniko executor is used. They will be ignored on OpenShift. The options are described in the Kaniko GitHub repository . Changing this field does not trigger a new build of the Kafka Connect image. type string Must be docker .
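As an illustration only, a DockerOutput block typically appears inside the build section of a KafkaConnect resource. The file name, registry, image tag, and Secret name below are placeholders and are not taken from this reference.
cat <<'EOF' > my-connect-build-output.yaml
# Excerpt of a hypothetical KafkaConnect spec that uses DockerOutput
spec:
  build:
    output:
      type: docker
      image: quay.io/my-organization/my-custom-connect:latest
      pushSecret: my-registry-credentials
EOF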
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-DockerOutput-reference
4.355. xorg-x11-drv-nouveau
4.355. xorg-x11-drv-nouveau 4.355.1. RHBA-2011:1600 - xorg-x11-drv-nouveau bug fix and enhancement update Updated xorg-x11-drv-nouveau packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-nouveau utility provides the Xorg X11 Nouveau video driver for NVIDIA graphics chipsets. Bug Fix BZ# 708500 Prior to this update, one process was used to scan for all defects. As a result, xorg-x11-drv-nouveau packages did not build without patches against their supporting components. This update scans defects in downstream patches separately. Now, the packages build as expected when not all downstream patches are present. Enhancement BZ# 713768 This update adds the updated Xorg Nouveau driver for NVIDIA GeForce/Quadro hardware to the xorg-x11-drv-nouveau package. All users of the Xorg X11 Nouveau driver are advised to upgrade to these updated packages, which fix this bug and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/xorg-x11-drv-nouveau
Chapter 80. token
Chapter 80. token This chapter describes the commands under the token command. 80.1. token issue Issue new token Usage: Table 80.1. Optional Arguments Value Summary -h, --help Show this help message and exit Table 80.2. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 80.3. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 80.4. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.2. token revoke Revoke existing token Usage: Table 80.6. Positional Arguments Value Summary <token> Token to be deleted Table 80.7. Optional Arguments Value Summary -h, --help Show this help message and exit
[ "openstack token issue [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]", "openstack token revoke [-h] <token>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/token
4.7. Kernel
4.7. Kernel Kernel Media support The following features are presented as Technology Previews: The latest upstream video4linux Digital video broadcasting Primarily infrared remote control device support Various webcam support fixes and improvements Package: kernel-2.6.32-431 Linux (NameSpace) Container [LXC] Linux containers provide a flexible approach to application runtime containment on bare-metal systems without the need to fully virtualize the workload. Red Hat Enterprise Linux 6 provides application level containers to separate and control the application resource usage policies via cgroups and namespaces. This release includes basic management of container life-cycle by allowing creation, editing and deletion of containers via the libvirt API and the virt-manager GUI. Linux Containers are a Technology Preview. Packages: libvirt-0.9.10-21 , virt-manager-0.9.0-14 Diagnostic pulse for the fence_ipmilan agent, BZ# 655764 A diagnostic pulse can now be issued on the IPMI interface using the fence_ipmilan agent. This new Technology Preview is used to force a kernel dump of a host if the host is configured to do so. Note that this feature is not a substitute for the off operation in a production cluster. Package: fence-agents-3.1.5-35
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/kernel_tp
Chapter 8. Configuring the OpenTelemetry Collector metrics
Chapter 8. Configuring the OpenTelemetry Collector metrics The OpenTelemetry Collector exposes internal metrics about its own operation. The following list shows some of these metrics: Collector memory usage CPU utilization Number of active traces and spans processed Dropped spans, logs, or metrics Exporter and receiver statistics The Red Hat build of OpenTelemetry Operator automatically creates a service named <instance_name>-collector-monitoring that exposes the Collector's internal metrics. This service listens on port 8888 by default. You can use these metrics for monitoring the Collector's performance, resource consumption, and other internal behaviors. You can also use a Prometheus instance or another monitoring tool to scrape these metrics from the mentioned <instance_name>-collector-monitoring service. Note When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true , the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics. Prerequisites Monitoring for user-defined projects is enabled in the cluster. Procedure To enable metrics of an OpenTelemetry Collector instance, set the spec.observability.metrics.enableMetrics field to true : apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe → Targets . Filter by Source: User . Check that the ServiceMonitors or PodMonitors in the opentelemetry-collector-<instance_name> format have the Up status. Additional resources Enabling monitoring for user-defined projects
[ "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/red_hat_build_of_opentelemetry/otel-configuring-metrics
Chapter 1. Overview
Chapter 1. Overview Release notes include links to the original tickets. Private tickets have no links and instead feature the following footnote [1] . 1.1. Advisories You can view all advisories , including security and bug fixes, for major and minor versions of this release on the Red Hat Customer Portal. 1.2. Major changes in 6.16 Web UI OpenSCAP compliance remediation wizard ( SAT-23240 ). Extended All Hosts page and redesigned Job details page, as Technology Previews ( SAT-20041 , SAT-18427 ). Installation and upgrade Upgrading to RHEL 9 is documented in Upgrading Red Hat Enterprise Linux on Satellite or Capsule in Upgrading connected Red Hat Satellite to 6.16 . satellite-maintain update command for minor releases ( SAT-21970 ). Puppet server 8 support ( SAT-24140 ). PostgreSQL 13 support ( SAT-23369 , SAT-24414 ). Online backup replaces snapshot backup ( SAT-20955 ). Capsule port 8443 disabled by default ( SAT-24522 ). Content management Simple Content Access replaces entitlement-based subscription management ( SAT-27936 ). Hammer command repairs corrupted Capsule content ( SAT-16330 ). Container management improvements ( SAT-20280 , SAT-23852 ). Host provisioning and management Kickstart provisioning template improvements ( SAT-23053 , SAT-23034 ). Provisioning templates update self-signed CA certificates ( SAT-18615 ). Job templates run remote scripts ( SAT-18615 ). VMware support improvements ( SAT-21075 , SAT-23052 ). foreman_webhooks plugin replaces the foreman_hooks plugin ( SAT-16036 ). Telemetry disablement in Convert2RHEL job templates removed ( SAT-24654 ). Security compliance Open Vulnerability and Assessment Language support, 6.15 Technology Preview, removed ( SAT-23806 ). Documentation "Configuring external authentication" in Installing Satellite Server in a connected network environment moved to a new guide, Configuring authentication for Red Hat Satellite users . 1.3. Red Hat Satellite Red Hat Satellite is a system management solution that enables you to deploy, configure, and maintain your systems across physical, virtual, and cloud environments. Red Hat Satellite provides provisioning, remote management and monitoring of multiple Red Hat Enterprise Linux deployments with a single, centralized tool. Red Hat Satellite Server synchronizes content from the Red Hat Customer Portal and other sources. It provides detailed lifecycle management, user and group role-based access control, integrated subscription management, and advanced GUI, CLI, and API access. Red Hat Satellite Capsule Server mirrors content from the Red Hat Satellite Server and distributes it to different geographical locations. Host systems pull content and configurations from the Capsule Server in their location instead of the central Satellite Server. The Capsule Server also provides localized services such as Puppet server, DHCP, DNS, or TFTP, assisting in scaling Red Hat Satellite as the number of managed systems in your environment grows. 1.4. Red Hat Customer Portal Labs Red Hat Customer Portal Labs provide applications to improve performance, troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. The following applications are available for Red Hat Satellite: Red Hat Satellite Upgrade Helper 1.5. Additional resources Red Hat Satellite product life cycle describes the time period for each major version and its level of maintenance. Satellite 6 component versions describes upstream core components, Foreman plugins, and integrated projects. 
Overview, concepts, and deployment considerations describes Red Hat Satellite concepts, components, tools, and deployment planning. Supported client architectures in Overview, concepts, and deployment considerations describes supported client architectures for content management, host provisioning, and configuration management. [1] This ticket does not have a link because it is private.
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/release_notes/assembly_overview
Managing Single Sign-On and Smart Cards
Managing Single Sign-On and Smart Cards Red Hat Enterprise Linux 6 On Using the Enterprise Security Client Aneta Steflova Petrova Red Hat Customer Content Services [email protected] Tomas Capek Red Hat Customer Content Services Ella Deon Ballard Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/index
23.2. Userspace Access
23.2. Userspace Access Always take care to use properly aligned and sized I/O. This is especially important for Direct I/O access. Direct I/O should be aligned on a logical_block_size boundary, and in multiples of the logical_block_size . With native 4K devices (i.e. logical_block_size is 4K) it is now critical that applications perform direct I/O in multiples of the device's logical_block_size . This means that applications will fail with native 4k devices that perform 512-byte aligned I/O rather than 4k-aligned I/O. To avoid this, an application should consult the I/O parameters of a device to ensure it is using the proper I/O alignment and size. As mentioned earlier, I/O parameters are exposed through both the sysfs and block device ioctl interfaces. For more information, see man libblkid . This man page is provided by the libblkid-devel package. sysfs Interface /sys/block/ disk /alignment_offset or /sys/block/ disk / partition /alignment_offset Note The file location depends on whether the disk is a physical disk (be that a local disk, local RAID, or a multipath LUN) or a virtual disk. The first file location is applicable to physical disks while the second file location is applicable to virtual disks. The reason for this is that virtio-blk will always report an alignment value for the partition. Physical disks may or may not report an alignment value. /sys/block/ disk /queue/physical_block_size /sys/block/ disk /queue/logical_block_size /sys/block/ disk /queue/minimum_io_size /sys/block/ disk /queue/optimal_io_size The kernel will still export these sysfs attributes for "legacy" devices that do not provide I/O parameters information, for example: Example 23.1. sysfs Interface Block Device ioctls BLKALIGNOFF : alignment_offset BLKPBSZGET : physical_block_size BLKSSZGET : logical_block_size BLKIOMIN : minimum_io_size BLKIOOPT : optimal_io_size
[ "alignment_offset: 0 physical_block_size: 512 logical_block_size: 512 minimum_io_size: 512 optimal_io_size: 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/iolimuserspace
Chapter 6. Integrating with QRadar
Chapter 6. Integrating with QRadar You can configure Red Hat Advanced Cluster Security for Kubernetes to send events to QRadar by configuring a generic webhook integration in RHACS. The following steps represent a high-level workflow for integrating RHACS with QRadar: In RHACS: Configure the generic webhook. Note When configuring the integration in RHACS, in the Endpoint field, use the following example as a guide: <URL to QRadar Box>:<Port of Integration> . Identify policies for which you want to send notifications, and update the notification settings for those policies. If QRadar does not automatically detect the log source, add an RHACS log source on the QRadar Console. For more information on configuring QRadar and RHACS, see the Red Hat Advanced Cluster Security for Kubernetes IBM resource. 6.1. Configuring integrations by using webhooks Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the webhook URL. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Generic Webhook . Click New integration . Enter a name for Integration name . Enter the webhook URL in the Endpoint field. If your webhook receiver uses an untrusted certificate, enter a CA certificate in the CA certificate field. Otherwise, leave it blank. Note The server certificate used by the webhook receiver must be valid for the endpoint DNS name. You can click Skip TLS verification to ignore this validation. Red Hat does not suggest turning off TLS verification. Without TLS verification, data could be intercepted by an unintended recipient. Optional: Click Enable audit logging to receive alerts about all the changes made in Red Hat Advanced Cluster Security for Kubernetes. Note Red Hat suggests using separate webhooks for alerts and audit logs to handle these messages differently. To authenticate with the webhook receiver, enter details for one of the following: Username and Password for basic HTTP authentication Custom Header , for example: Authorization: Bearer <access_token> Use Extra fields to include additional key-value pairs in the JSON object that Red Hat Advanced Cluster Security for Kubernetes sends. For example, if your webhook receiver accepts objects from multiple sources, you can add "source": "rhacs" as an extra field and filter on this value to identify all alerts from Red Hat Advanced Cluster Security for Kubernetes. Select Test to send a test message to verify that the integration with your generic webhook is working. Select Save to create the configuration. 6.2. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the webhook notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. 
Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment.
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-with-qradar
14.3. Booleans
14.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: smbd_anon_write Having this Boolean enabled allows smbd to write to a public directory, such as an area reserved for common files that otherwise has no special access restrictions. samba_create_home_dirs Having this Boolean enabled allows Samba to create new home directories independently. This is often done by mechanisms such as PAM. samba_domain_controller When enabled, this Boolean allows Samba to act as a domain controller, as well as giving it permission to execute related commands such as useradd , groupadd , and passwd . samba_enable_home_dirs Enabling this Boolean allows Samba to share users' home directories. samba_export_all_ro Export any file or directory, allowing read-only permissions. This allows files and directories that are not labeled with the samba_share_t type to be shared through Samba. When the samba_export_all_ro Boolean is enabled, but the samba_export_all_rw Boolean is disabled, write access to Samba shares is denied, even if write access is configured in /etc/samba/smb.conf , as well as Linux permissions allowing write access. samba_export_all_rw Export any file or directory, allowing read and write permissions. This allows files and directories that are not labeled with the samba_share_t type to be exported through Samba. Permissions in /etc/samba/smb.conf and Linux permissions must be configured to allow write access. samba_run_unconfined Having this Boolean enabled allows Samba to run unconfined scripts in the /var/lib/samba/scripts/ directory. samba_share_fusefs This Boolean must be enabled for Samba to share fusefs file systems. samba_share_nfs Disabling this Boolean prevents smbd from having full access to NFS shares through Samba. Enabling this Boolean will allow Samba to share NFS volumes. use_samba_home_dirs Enable this Boolean to use a remote server for Samba home directories. virt_use_samba Allow virtual machine access to CIFS files. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, enter the following command: Enter the following command to view description of a particular Boolean: Note that the additional policycoreutils-devel package providing the sepolicy utility is required for this command to work.
[ "~]USD getsebool -a | grep service_name", "~]USD sepolicy booleans -b boolean_name" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-samba-booleans
Appendix A. An Introduction to Disk Partitions
Appendix A. An Introduction to Disk Partitions Note This appendix is not necessarily applicable to non-x86-based architectures. However, the general concepts mentioned here may apply. If you are reasonably comfortable with disk partitions, you could skip ahead to Section A.1.5, "Making Room For Red Hat Enterprise Linux" , for more information on the process of freeing up disk space to prepare for a Red Hat Enterprise Linux installation. This section also discusses the partition naming scheme used by Linux systems, sharing disk space with other operating systems, and related topics. A.1. Hard Disk Basic Concepts Hard disks perform a very simple function - they store data and reliably retrieve it on command. When discussing issues such as disk partitioning, it is important to know a bit about the underlying hardware. Unfortunately, it is easy to become bogged down in details. Therefore, this appendix uses a simplified diagram of a disk drive to help explain what is really happening when a disk drive is partitioned. Figure A.1, "An Unused Disk Drive" , shows a brand-new, unused disk drive. Figure A.1. An Unused Disk Drive Not much to look at, is it? But if we are talking about disk drives on a basic level, it is adequate. Say that we would like to store some data on this drive. As things stand now, it will not work. There is something we need to do first. A.1.1. It is Not What You Write, it is How You Write It Experienced computer users probably got this one on the first try. We need to format the drive. Formatting (usually known as "making a file system ") writes information to the drive, creating order out of the empty space in an unformatted drive. Figure A.2. Disk Drive with a File System As Figure A.2, "Disk Drive with a File System" , implies, the order imposed by a file system involves some trade-offs: A small percentage of the drive's available space is used to store file system-related data and can be considered as overhead. A file system splits the remaining space into small, consistently-sized segments. For Linux, these segments are known as blocks . [15] Given that file systems make things like directories and files possible, these trade-offs are usually seen as a small price to pay. It is also worth noting that there is no single, universal file system. As Figure A.3, "Disk Drive with a Different File System" , shows, a disk drive may have one of many different file systems written on it. As you might guess, different file systems tend to be incompatible; that is, an operating system that supports one file system (or a handful of related file system types) may not support another. This last statement is not a hard-and-fast rule, however. For example, Red Hat Enterprise Linux supports a wide variety of file systems (including many commonly used by other operating systems), making data interchange between different file systems easy. Figure A.3. Disk Drive with a Different File System Of course, writing a file system to disk is only the beginning. The goal of this process is to actually store and retrieve data. Let us take a look at our drive after some files have been written to it. Figure A.4. Disk Drive with Data Written to It As Figure A.4, "Disk Drive with Data Written to It" , shows, some of the previously-empty blocks are now holding data. However, by just looking at this picture, we cannot determine exactly how many files reside on this drive. 
There may only be one file or many, as all files use at least one block and some files use multiple blocks. Another important point to note is that the used blocks do not have to form a contiguous region; used and unused blocks may be interspersed. This is known as fragmentation . Fragmentation can play a part when attempting to resize an existing partition. As with most computer-related technologies, disk drives changed over time after their introduction. In particular, they got bigger. Not larger in physical size, but bigger in their capacity to store information. And, this additional capacity drove a fundamental change in the way disk drives were used. A.1.2. Partitions: Turning One Drive Into Many Disk drives can be divided into partitions . Each partition can be accessed as if it was a separate disk. This is done through the addition of a partition table . There are several reasons for allocating disk space into separate disk partitions, for example: Logical separation of the operating system data from the user data Ability to use different file systems Ability to run multiple operating systems on one machine There are currently two partitioning layout standards for physical hard disks: Master Boot Record ( MBR ) and GUID Partition Table ( GPT ). MBR is an older method of disk partitioning used with BIOS-based computers. GPT is a newer partitioning layout that is a part of the Unified Extensible Firmware Interface ( UEFI ). This section and Section A.1.3, "Partitions Within Partitions - An Overview of Extended Partitions" mainly describe the Master Boot Record ( MBR ) disk partitioning scheme. For information about the GUID Partition Table ( GPT ) partitioning layout, see Section A.1.4, "GUID Partition Table (GPT)" . Note While the diagrams in this chapter show the partition table as being separate from the actual disk drive, this is not entirely accurate. In reality, the partition table is stored at the very start of the disk, before any file system or user data. But for clarity, they are separate in our diagrams. Figure A.5. Disk Drive with Partition Table As Figure A.5, "Disk Drive with Partition Table" shows, the partition table is divided into four sections or four primary partitions. A primary partition is a partition on a hard drive that can contain only one logical drive (or section). Each section can hold the information necessary to define a single partition, meaning that the partition table can define no more than four partitions. Each partition table entry contains several important characteristics of the partition: The points on the disk where the partition starts and ends Whether the partition is "active" The partition's type Let us take a closer look at each of these characteristics. The starting and ending points actually define the partition's size and location on the disk. The "active" flag is used by some operating systems' boot loaders. In other words, the operating system in the partition that is marked "active" is booted. The partition's type can be a bit confusing. The type is a number that identifies the partition's anticipated usage. If that statement sounds a bit vague, that is because the meaning of the partition type is a bit vague. Some operating systems use the partition type to denote a specific file system type, to flag the partition as being associated with a particular operating system, to indicate that the partition contains a bootable operating system, or some combination of the three. 
By this point, you might be wondering how all this additional complexity is normally used. Refer to Figure A.6, "Disk Drive With Single Partition" , for an example. Figure A.6. Disk Drive With Single Partition In many cases, there is only a single partition spanning the entire disk, essentially duplicating the method used before partitions. The partition table has only one entry used, and it points to the start of the partition. We have labeled this partition as being of the "DOS" type. Although it is only one of several possible partition types listed in Table A.1, "Partition Types" , it is adequate for the purposes of this discussion. Table A.1, "Partition Types" , contains a listing of some popular (and obscure) partition types, along with their hexadecimal numeric values. Table A.1. Partition Types Partition Type Value Partition Type Value Empty 00 Novell Netware 386 65 DOS 12-bit FAT 01 PIC/IX 75 XENIX root 02 Old MINIX 80 XENIX usr 03 Linux/MINUX 81 DOS 16-bit <=32M 04 Linux swap 82 Extended 05 Linux native 83 DOS 16-bit >=32 06 Linux extended 85 OS/2 HPFS 07 Amoeba 93 AIX 08 Amoeba BBT 94 AIX bootable 09 BSD/386 a5 OS/2 Boot Manager 0a OpenBSD a6 Win95 FAT32 0b NEXTSTEP a7 Win95 FAT32 (LBA) 0c BSDI fs b7 Win95 FAT16 (LBA) 0e BSDI swap b8 Win95 Extended (LBA) 0f Syrinx c7 Venix 80286 40 CP/M db Novell 51 DOS access e1 PReP Boot 41 DOS R/O e3 GNU HURD 63 DOS secondary f2 Novell Netware 286 64 BBT ff A.1.3. Partitions Within Partitions - An Overview of Extended Partitions Of course, over time it became obvious that four partitions would not be enough. As disk drives continued to grow, it became more and more likely that a person could configure four reasonably-sized partitions and still have disk space left over. There needed to be some way of creating more partitions. Enter the extended partition. As you may have noticed in Table A.1, "Partition Types" , there is an "Extended" partition type. It is this partition type that is at the heart of extended partitions. When a partition is created and its type is set to "Extended," an extended partition table is created. In essence, the extended partition is like a disk drive in its own right - it has a partition table that points to one or more partitions (now called logical partitions , as opposed to the four primary partitions ) contained entirely within the extended partition itself. Figure A.7, "Disk Drive With Extended Partition" , shows a disk drive with one primary partition and one extended partition containing two logical partitions (along with some unpartitioned free space). Figure A.7. Disk Drive With Extended Partition As this figure implies, there is a difference between primary and logical partitions - there can only be four primary partitions, but there is no fixed limit to the number of logical partitions that can exist. However, due to the way in which partitions are accessed in Linux, you should avoid defining more than 12 logical partitions on a single disk drive. Now that we have discussed partitions in general, let us review how to use this knowledge to install Red Hat Enterprise Linux. A.1.4. GUID Partition Table (GPT) GUID Partition Table ( GPT ) is a newer partitioning scheme based on using Globally Unique Identifiers ( GUID ). GPT was developed to cope with limitations of the MBR partition table, especially with the limited maximum addressable storage space of a disk. 
Unlike MBR , which is unable to address storage space larger than 2.2 terabytes, GPT can be used with hard disks larger than this; the maximum addressable disk size is 2.2 zettabytes. In addition, GPT by default supports creating up to 128 primary partitions. This number could be extended by allocating more space to the partition table. GPT disks use logical block addressing (LBA) and the partition layout is as follows: To preserve backward compatibility with MBR disks, the first sector ( LBA 0) of GPT is reserved for MBR data and it is called " protective MBR " . The primary GPT header begins on the second logical block ( LBA 1) of the device. The header contains the disk GUID, the location of the primary partition table, the location of the secondary GPT header, and CRC32 checksums of itself and the primary partition table. It also specifies the number of partition entries of the table. The primary GPT table includes, by default, 128 partition entries, each with an entry size 128 bytes, its partition type GUID and unique partition GUID. The secondary GPT table is identical to the primary GPT table. It is used mainly as a backup table for recovery in case the primary partition table is corrupted. The secondary GPT header is located on the last logical sector of the disk and it can be used to recover GPT information in case the primary header is corrupted. It contains the disk GUID, the location of the secondary partition table and the primary GPT header, CRC32 checksums of itself and the secondary partition table, and the number of possible partition entries. Important There must be a BIOS boot partition for the boot loader to be installed successfully onto a disk that contains a GPT (GUID Partition Table). This includes disks initialized by Anaconda . If the disk already contains a BIOS boot partition, it can be reused. A.1.5. Making Room For Red Hat Enterprise Linux The following list presents some possible scenarios you may face when attempting to repartition your hard disk: Unpartitioned free space is available An unused partition is available Free space in an actively used partition is available Let us look at each scenario in order. Note Keep in mind that the following illustrations are simplified in the interest of clarity and do not reflect the exact partition layout that you encounter when actually installing Red Hat Enterprise Linux. A.1.5.1. Using Unpartitioned Free Space In this situation, the partitions already defined do not span the entire hard disk, leaving unallocated space that is not part of any defined partition. Figure A.8, "Disk Drive with Unpartitioned Free Space" , shows what this might look like. Figure A.8. Disk Drive with Unpartitioned Free Space In Figure A.8, "Disk Drive with Unpartitioned Free Space" , 1 represents an undefined partition with unallocated space and 2 represents a defined partition with allocated space. If you think about it, an unused hard disk also falls into this category. The only difference is that all the space is not part of any defined partition. In any case, you can create the necessary partitions from the unused space. Unfortunately, this scenario, although very simple, is not very likely (unless you have just purchased a new disk just for Red Hat Enterprise Linux). Most pre-installed operating systems are configured to take up all available space on a disk drive (refer to Section A.1.5.3, "Using Free Space from an Active Partition" ). Next, we will discuss a slightly more common situation. A.1.5.2. 
Using Space from an Unused Partition In this case, maybe you have one or more partitions that you do not use any longer. Perhaps you have dabbled with another operating system in the past, and the partition(s) you dedicated to it never seem to be used anymore. Figure A.9, "Disk Drive With an Unused Partition" , illustrates such a situation. Figure A.9. Disk Drive With an Unused Partition In Figure A.9, "Disk Drive With an Unused Partition" , 1 represents an unused partition and 2 represents reallocating an unused partition for Linux. If you find yourself in this situation, you can use the space allocated to the unused partition. You first must delete the partition and then create the appropriate Linux partition(s) in its place. You can delete the unused partition and manually create new partitions during the installation process. A.1.5.3. Using Free Space from an Active Partition This is the most common situation. It is also, unfortunately, the hardest to handle. The main problem is that, even if you have enough free space, it is presently allocated to a partition that is already in use. If you purchased a computer with pre-installed software, the hard disk most likely has one massive partition holding the operating system and data. Aside from adding a new hard drive to your system, you have two choices: Destructive Repartitioning Basically, you delete the single large partition and create several smaller ones. As you might imagine, any data you had in the original partition is destroyed. This means that making a complete backup is necessary. For your own sake, make two backups, use verification (if available in your backup software), and try to read data from your backup before you delete the partition. Warning If there was an operating system of some type installed on that partition, it needs to be reinstalled as well. Be aware that some computers sold with pre-installed operating systems may not include the CD-ROM media to reinstall the original operating system. The best time to notice if this applies to your system is before you destroy your original partition and its operating system installation. After creating a smaller partition for your existing operating system, you can reinstall any software, restore your data, and start your Red Hat Enterprise Linux installation. Figure A.10, "Disk Drive Being Destructively Repartitioned" shows this being done. Figure A.10. Disk Drive Being Destructively Repartitioned In Figure A.10, "Disk Drive Being Destructively Repartitioned" , 1 represents before and 2 represents after. Warning As Figure A.10, "Disk Drive Being Destructively Repartitioned" , shows, any data present in the original partition is lost without proper backup! Non-Destructive Repartitioning Here, you run a program that does the seemingly impossible: it makes a big partition smaller without losing any of the files stored in that partition. Many people have found this method to be reliable and trouble-free. What software should you use to perform this feat? There are several disk management software products on the market. Do some research to find the one that is best for your situation. While the process of non-destructive repartitioning is rather straightforward, there are a number of steps involved: Compress and backup existing data Resize the existing partition Create new partition(s) Next, we will look at each step in a bit more detail. A.1.5.3.1. 
Compress existing data As Figure A.11, "Disk Drive Being Compressed" , shows, the first step is to compress the data in your existing partition. The reason for doing this is to rearrange the data such that it maximizes the available free space at the "end" of the partition. Figure A.11. Disk Drive Being Compressed In Figure A.11, "Disk Drive Being Compressed" , 1 represents before and 2 represents after. This step is crucial. Without it, the location of your data could prevent the partition from being resized to the extent desired. Note also that, for one reason or another, some data cannot be moved. If this is the case (and it severely restricts the size of your new partition(s)), you may be forced to destructively repartition your disk. A.1.5.3.2. Resize the existing partition Figure A.12, "Disk Drive with Partition Resized" , shows the actual resizing process. While the actual result of the resizing operation varies depending on the software used, in most cases the newly freed space is used to create an unformatted partition of the same type as the original partition. Figure A.12. Disk Drive with Partition Resized In Figure A.12, "Disk Drive with Partition Resized" , 1 represents before and 2 represents after. It is important to understand what the resizing software you use does with the newly freed space, so that you can take the appropriate steps. In the case we have illustrated, it would be best to delete the new DOS partition and create the appropriate Linux partition(s). A.1.5.3.3. Create new partition(s) As the step implied, it may or may not be necessary to create new partitions. However, unless your resizing software is Linux-aware, it is likely that you must delete the partition that was created during the resizing process. Figure A.13, "Disk Drive with Final Partition Configuration" , shows this being done. Figure A.13. Disk Drive with Final Partition Configuration In Figure A.13, "Disk Drive with Final Partition Configuration" , 1 represents before and 2 represents after. Note The following information is specific to x86-based computers only. As a convenience to our customers, we provide the parted utility. This is a freely available program that can resize partitions. If you decide to repartition your hard drive with parted , it is important that you be familiar with disk storage and that you perform a backup of your computer data. You should make two copies of all the important data on your computer. These copies should be to removable media (such as tape, CD-ROM, or diskettes), and you should make sure they are readable before proceeding. Should you decide to use parted , be aware that after parted runs you are left with two partitions: the one you resized, and the one parted created out of the newly freed space. If your goal is to use that space to install Red Hat Enterprise Linux, you should delete the newly created partition, either by using the partitioning utility under your current operating system or while setting up partitions during installation. A.1.6. Partition Naming Scheme Linux refers to disk partitions using a combination of letters and numbers which may be confusing, particularly if you are used to the "C drive" way of referring to hard disks and their partitions. In the DOS/Windows world, partitions are named using the following method: Each partition's type is checked to determine if it can be read by DOS/Windows. If the partition's type is compatible, it is assigned a "drive letter." 
The drive letters start with a "C" and move on to the following letters, depending on the number of partitions to be labeled. The drive letter can then be used to refer to that partition as well as the file system contained on that partition. Red Hat Enterprise Linux uses a naming scheme that is more flexible and conveys more information than the approach used by other operating systems. The naming scheme is file-based, with file names in the form of /dev/ xxyN . Here is how to decipher the partition naming scheme: /dev/ This is the name of the directory in which all device files reside. Since partitions reside on hard disks, and hard disks are devices, the files representing all possible partitions reside in /dev/ . xx The first two letters of the partition name indicate the type of device on which the partition resides, usually either hd (for IDE disks) or sd (for SCSI disks). y This letter indicates which device the partition is on. For example, /dev/hda (the first IDE hard disk) or /dev/sdb (the second SCSI disk). N The final number denotes the partition. The first four (primary or extended) partitions are numbered 1 through 4 . Logical partitions start at 5 . So, for example, /dev/hda3 is the third primary or extended partition on the first IDE hard disk, and /dev/sdb6 is the second logical partition on the second SCSI hard disk. Note There is no part of this naming convention that is based on partition type; unlike DOS/Windows, all partitions can be identified under Red Hat Enterprise Linux. Of course, this does not mean that Red Hat Enterprise Linux can access data on every type of partition, but in many cases it is possible to access data on a partition dedicated to another operating system. Keep this information in mind; it makes things easier to understand when you are setting up the partitions Red Hat Enterprise Linux requires. A.1.7. Disk Partitions and Other Operating Systems If your Red Hat Enterprise Linux partitions are sharing a hard disk with partitions used by other operating systems, most of the time you will have no problems. However, there are certain combinations of Linux and other operating systems that require extra care. A.1.8. Disk Partitions and Mount Points One area that many people new to Linux find confusing is the matter of how partitions are used and accessed by the Linux operating system. In DOS/Windows, it is relatively simple: Each partition gets a "drive letter." You then use the correct drive letter to refer to files and directories on its corresponding partition. This is entirely different from how Linux deals with partitions and, for that matter, with disk storage in general. The main difference is that each partition is used to form part of the storage necessary to support a single set of files and directories. This is done by associating a partition with a directory through a process known as mounting . Mounting a partition makes its storage available starting at the specified directory (known as a mount point ). For example, if partition /dev/hda5 is mounted on /usr/ , that would mean that all files and directories under /usr/ physically reside on /dev/hda5 . So the file /usr/share/doc/FAQ/txt/Linux-FAQ would be stored on /dev/hda5 , while the file /etc/gdm/custom.conf would not. Continuing our example, it is also possible that one or more directories below /usr/ would be mount points for other partitions. 
For instance, a partition (say, /dev/hda7 ) could be mounted on /usr/local/ , meaning that /usr/local/man/whatis would then reside on /dev/hda7 rather than /dev/hda5 . A.1.9. How Many Partitions? At this point in the process of preparing to install Red Hat Enterprise Linux, you must give some consideration to the number and size of the partitions to be used by your new operating system. The question of "how many partitions" continues to spark debate within the Linux community and, without any end to the debate in sight, it is safe to say that there are probably as many partition layouts as there are people debating the issue. Keeping this in mind, we recommend that, unless you have a reason for doing otherwise, you should at least create the following partitions: swap , /boot/ , and / (root). For more information, refer to Section 9.15.5, "Recommended Partitioning Scheme" . [15] Blocks really are consistently sized, unlike our illustrations. Keep in mind, also, that an average disk drive contains thousands of blocks. But for the purposes of this discussion, please ignore these minor discrepancies.
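To see how the naming scheme and mount points come together on a running system, you can list the block devices and where they are mounted. The output depends entirely on your hardware, so this is illustrative only.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
df -hT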
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-partitions-x86
Troubleshooting Red Hat Discovery
Troubleshooting Red Hat Discovery Subscription Central 1-latest Troubleshooting Red Hat Discovery Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/troubleshooting_red_hat_discovery/index
Chapter 31. Real-Time Kernel
Chapter 31. Real-Time Kernel A race condition that prevented tasks from being scheduled properly has been fixed Previously, preemption was enabled too early after a context switch. If a task was migrated to another CPU after a context switch, a mismatch between CPU and runqueue during load balancing sometimes occurred. Consequently, a runnable task on an idle CPU failed to run, and the operating system became unresponsive. This update disables preemption in the schedule_tail() function. As a result, CPU migration during post-schedule processing no longer occurs, which prevents the above mismatch. The operating system no longer hangs due to this bug. (BZ# 1608672 , BZ#1541534)
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/bug_fixes_real-time_kernel
Chapter 11. Provisioning concepts
Chapter 11. Provisioning concepts An important feature of Red Hat Satellite is unattended provisioning of hosts. To achieve this, Red Hat Satellite uses DNS and DHCP infrastructures, PXE booting, TFTP, and Kickstart. Use this chapter to understand the working principle of these concepts. 11.1. PXE booting Preboot execution environment (PXE) provides the ability to boot a system over a network. Instead of using local hard drives or a CD-ROM, PXE uses DHCP to provide the host with standard information about the network, to discover a TFTP server, and to download a boot image. For more information about setting up a PXE server, see the Red Hat Knowledgebase solution How to set-up/configure a PXE Server . 11.1.1. PXE sequence The host boots the PXE image if no other bootable image is found. A NIC of the host sends a broadcast request to the DHCP server. The DHCP server receives the request and sends standard information about the network: IP address, subnet mask, gateway, DNS, the location of a TFTP server, and a boot image. The host obtains the boot loader image/pxelinux.0 and the configuration file pxelinux.cfg/00:MA:CA:AD:D from the TFTP server. The host configuration specifies the location of a kernel image, initrd and Kickstart. The host downloads the files and installs the image. For an example of using PXE Booting by Satellite Server, see Provisioning Workflow in Provisioning hosts . 11.1.2. PXE booting requirements To provision machines using PXE booting, ensure that you meet the following requirements: Network requirements Optional: If the host and the DHCP server are separated by a router, configure the DHCP relay agent and point to the DHCP server. Client requirements Ensure that all the network-based firewalls are configured to allow clients on the subnet to access the Capsule. For more information, see Section 4.7, "Capsule networking" . Ensure that your client has access to the DHCP and TFTP servers. Satellite requirements Ensure that both Satellite Server and Capsule have DNS configured and are able to resolve provisioned host names. Ensure that the UDP ports 67 and 68 are accessible by the client to enable the client to receive a DHCP offer with the boot options. Ensure that the UDP port 69 is accessible by the client so that the client can access the TFTP server on the Capsule. Ensure that the TCP port 80 is accessible by the client to allow the client to download files and Kickstart templates from the Capsule. Ensure that the host provisioning interface subnet has a DHCP Capsule set. Ensure that the host provisioning interface subnet has a TFTP Capsule set. Ensure that the host provisioning interface subnet has a Templates Capsule set. Ensure that DHCP with the correct subnet is enabled using the Satellite installer. Enable TFTP using the Satellite installer. 11.2. HTTP booting You can use HTTP booting to boot systems over a network using HTTP. 11.2.1. HTTP booting requirements with managed DHCP To provision machines through HTTP booting, ensure that you meet the following requirements: Client requirements For HTTP booting to work, ensure that your environment has the following client-side configurations: All the network-based firewalls are configured to allow clients on the subnet to access the Capsule. For more information, see Section 4.7, "Capsule networking" . Your client has access to the DHCP and DNS servers. Your client has access to the HTTP UEFI Boot Capsule. 
Network requirements Optional: If the host and the DHCP server are separated by a router, configure the DHCP relay agent and point to the DHCP server. Satellite requirements Although the TFTP protocol is not used for HTTP UEFI Booting, Satellite uses the TFTP Capsule API to deploy bootloader configuration. For HTTP booting to work, ensure that Satellite has the following configurations: Both Satellite Server and Capsule have DNS configured and are able to resolve provisioned host names. The UDP ports 67 and 68 are accessible by the client so that the client can send and receive a DHCP request and offer. Ensure that the TCP port 8000 is open for the client to download the bootloader and Kickstart templates from the Capsule. The TCP port 9090 is open for the client to download the bootloader from the Capsule using the HTTPS protocol. The subnet that functions as the host's provisioning interface has a DHCP Capsule, an HTTP Boot Capsule, a TFTP Capsule, and a Templates Capsule. The grub2-efi package is updated to the latest version. To update the grub2-efi package to the latest version and execute the installer to copy the recent bootloader from /boot into the /var/lib/tftpboot directory, enter the following commands: 11.2.2. HTTP booting requirements with unmanaged DHCP To provision machines through HTTP booting without managed DHCP, ensure that you meet the following requirements: Client requirements HTTP UEFI Boot URL must be set to one of: http://capsule.example.com:8000 https://capsule.example.com:9090 Ensure that your client has access to the DHCP and DNS servers. Ensure that your client has access to the HTTP UEFI Boot Capsule. Ensure that all the network-based firewalls are configured to allow clients on the subnet to access the Capsule. For more information, see Section 4.7, "Capsule networking" . Network requirements An unmanaged DHCP server available for clients. An unmanaged DNS server available for clients. In case DNS is not available, use an IP address to configure clients. Satellite requirements Although the TFTP protocol is not used for HTTP UEFI Booting, Satellite uses the TFTP Capsule API to deploy bootloader configuration. Ensure that both Satellite Server and Capsule have DNS configured and are able to resolve provisioned host names. Ensure that the UDP ports 67 and 68 are accessible by the client so that the client can send and receive a DHCP request and offer. Ensure that the TCP port 8000 is open for the client to download bootloader and Kickstart templates from the Capsule. Ensure that the TCP port 9090 is open for the client to download the bootloader from the Capsule through HTTPS. Ensure that the host provisioning interface subnet has an HTTP Boot Capsule set. Ensure that the host provisioning interface subnet has a TFTP Capsule set. Ensure that the host provisioning interface subnet has a Templates Capsule set. Update the grub2-efi package to the latest version and execute the installer to copy the recent bootloader from the /boot directory into the /var/lib/tftpboot directory:
[ "satellite-maintain packages update grub2-efi satellite-installer", "satellite-maintain packages update grub2-efi satellite-installer" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/chap-architecture_guide-provisioning_concepts
Chapter 8. Preparing a system with UEFI Secure Boot enabled to install and boot RHEL beta releases
Chapter 8. Preparing a system with UEFI Secure Boot enabled to install and boot RHEL beta releases To enhance the security of your operating system, use the UEFI Secure Boot feature for signature verification when booting a Red Hat Enterprise Linux Beta release on systems having UEFI Secure Boot enabled. 8.1. UEFI Secure Boot and RHEL Beta releases UEFI Secure Boot requires that the operating system kernel is signed with a recognized private key. UEFI Secure Boot then verifies the signature using the corresponding public key. For Red Hat Enterprise Linux Beta releases, the kernel is signed with a Red Hat Beta-specific private key. UEFI Secure Boot attempts to verify the signature using the corresponding public key, but because the hardware does not recognize the Beta private key, the Red Hat Enterprise Linux Beta release system fails to boot. Therefore, to use UEFI Secure Boot with a Beta release, add the Red Hat Beta public key to your system using the Machine Owner Key (MOK) facility. 8.2. Adding a Beta public key for UEFI Secure Boot This section contains information about how to add a Red Hat Enterprise Linux Beta public key for UEFI Secure Boot. Prerequisites The UEFI Secure Boot is disabled on the system. The Red Hat Enterprise Linux Beta release is installed, and Secure Boot is disabled even after system reboot. You are logged in to the system, and the tasks in the Initial Setup window are complete. Procedure Begin to enroll the Red Hat Beta public key in the system's Machine Owner Key (MOK) list: $(uname -r) is replaced by the kernel version - for example, 4.18.0-80.el8.x86_64 . Enter a password when prompted. Reboot the system and press any key to continue the startup. The Shim UEFI key management utility starts during the system startup. Select Enroll MOK . Select Continue . Select Yes and enter the password. The key is imported into the system's firmware. Select Reboot . Enable Secure Boot on the system. 8.3. Removing a Beta public key If you plan to remove the Red Hat Enterprise Linux Beta release, and install a Red Hat Enterprise Linux General Availability (GA) release, or a different operating system, then remove the Beta public key. The procedure describes how to remove a Beta public key. Procedure Begin to remove the Red Hat Beta public key from the system's Machine Owner Key (MOK) list: Enter a password when prompted. Reboot the system and press any key to continue the startup. The Shim UEFI key management utility starts during the system startup. Select Reset MOK . Select Continue . Select Yes and enter the password that you had specified in step 2. The key is removed from the system's firmware. Select Reboot .
[ "mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer", "mokutil --reset" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/booting-a-beta-system-with-uefi-secure-boot_rhel-installer
23.14. Hypervisor Features
23.14. Hypervisor Features Hypervisors may allow certain CPU or machine features to be enabled ( state='on' ) or disabled ( state='off' ). ... <features> <pae/> <acpi/> <apic/> <hap/> <privnet/> <hyperv> <relaxed state='on'/> </hyperv> </features> ... Figure 23.24. Hypervisor features All features are listed within the <features> element; if a <state> is not specified, the feature is disabled. The available features can be found by calling the capabilities XML, but a common set for fully virtualized domains is: Table 23.10. Hypervisor features elements State Description <pae> Physical address extension mode allows 32-bit guest virtual machines to address more than 4 GB of memory. <acpi> Useful for power management. For example, with KVM guest virtual machines it is required for graceful shutdown to work. <apic> Allows the use of programmable IRQ management. This element has an optional attribute eoi with values on and off , which sets the availability of EOI (End of Interrupt) for the guest virtual machine. <hap> Enables the use of hardware assisted paging if it is available in the hardware.
[ "<features> <pae/> <acpi/> <apic/> <hap/> <privnet/> <hyperv> <relaxed state='on'/> </hyperv> </features>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Hypervisor_features
Appendix A. Tests
Appendix A. Tests In this section we give more detailed information about each of the tests for hardware certification. Each test section uses the following format: What the test covers This section lists the types of hardware that this particular test is run on. RHEL version supported This section lists the versions of RHEL that the test is supported on. What the test does This section explains what the test scripts do. Remember, all the tests are Python scripts and can be viewed in the directory /usr/lib/python2.7/site-packages/rhcert/suites/hwcert/tests if you want to know exactly what commands we are executing in the tests. Preparing for the test This section talks about the steps necessary to prepare for the test. For example, it talks about having a USB device on hand for the USB test and blank discs on hand for rewritable optical drive tests. Executing the test This section identifies whether the test is interactive or non-interactive and explains what command is necessary to run the test. You can choose either way to run the test: Follow Running the certification tests using CLI to run the test. Select the appropriate test name from the displayed list using the command: In case of hardware detection issues or other hardware-related problems during planning, follow Manually adding and running the tests . Run the rhcert-cli command by specifying the desired test name. Run Time This section explains how long a run of this test will take. Timing information for the supportable test is mentioned in each section as it is a required test for every run of the test suite. A.1. Core What the test covers The core test examines the system's CPUs and ensures that they are capable of functioning properly under load. What the test does The core test is actually composed of two separate routines. The first test is designed to detect clock jitter. Jitter is a condition that occurs when the system clocks are out of sync with each other. The system clocks are not the same as the CPU clock speed, which is just another way to refer to the speed at which the CPUs are operating. The jitter test uses the gettimeofday() function to obtain the time as observed by each logical CPU and then analyzes the returned values. If all the CPU clocks are within 0.2 nanoseconds of each other, the test passes. The tolerances for the jitter test are very tight. In order to get good results, it is important that the rhcert tests are the only loads running on a system at the time the test is executed. Any other compute loads that are present could interfere with the timing and cause the test to fail. The jitter test also checks to see which clock source the kernel is using. It will print a warning in the logs if an Intel processor is not using TSC, but this will not affect the PASS/FAIL status of the test. The second routine run in the core test is a CPU load test. It is provided by the required stress package. The stress program, which is available for use outside the rhcert suite if you are looking for a way to stress test a system, launches several simultaneous activities on the system and then monitors for any failures. Specifically, it instructs each logical CPU to calculate square roots, it puts the system under memory pressure by using malloc() and free() routines to reserve and free memory respectively, and it forces writes to disk by calling sync() . These activities continue for 10 minutes, and if no failures occur within that time period, the test passes. 
Please see the stress manpage if you are interested in using it outside of hardware certification testing. Preparing for the test The only preparation for the core test is to install a CPU that meets the requirements that are stated in the Policy Guide. Executing the test The core test is non-interactive. Run the following command and then select the appropriate Core test name from the list that displays. Run time, bare-metal The core test itself takes about 12 minutes to run on a bare-metal system. The jitter portion of the test takes a minute or two and the stress portion runs for exactly 10 minutes. The required supportable test will add about a minute to the overall run time. Run time, full-virt guest The fv_core test takes slightly longer than the bare-metal version, about 14 minutes, to run in a KVM guest. The added time is due to guest startup/shutdown activities and the required supportable test that runs in the guest. The required supportable test on the bare-metal system will add about a minute to the overall run time. A.2. CPU scaling What the test covers The cpuscaling test examines a CPU's ability to increase and decrease its clock speed according to the compute demands placed on it. What the test does The test exercises the CPUs at varying frequencies using different scaling governors (the set of instructions that tell the CPU when to change to higher or lower clock speeds and how fast to do so) and measures the difference in the time that it takes to complete a standardized workload. The test is scheduled when the hardware detection routines find the following directories in /sys containing more than one cpu frequency: The cpuscaling test is planned once per package, rather than being listed once per logical CPU. When the test is run, it will determine topology via /sys/devices/system/cpu/cpu X /topology/physical_package_id , and run the test in parallel for all the logical CPUs in a particular package. The test runs the turbostat command first to gather the processor statistics. On supported architectures, turbostat checks if the advance statistics columns are visible in the turbostat output file, but returns a warning if the file does not contain the columns. The test then attempts to execute the cstate subtest and if it fails, executes pstate subtest. The test procedure for each CPU package is as follows: The test uses the values found in the sysfs filesystem to determine the maximum and minimum CPU frequencies. You can see these values for any system with this command: There will always be at least two frequencies displayed here, a maximum and a minimum, but some processors are capable of finer CPU speed control and will show more than two values in the file. Any additional CPU speeds between the max and min are not specifically used during the test, though they may be used as the CPU transitions between max and min frequencies. The test procedure is as follows: The test records the maximum and minimum processor speeds from the file /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies . The userspace governor is selected and maximum frequency is chosen. Maximum speed is confirmed by reading all processors' /sys/devices/system/cpu/cpu X /cpufreq/scaling_cur_freq value. If this value does not match the selected frequency, the test will report a failure. Every processor in the package is given the simultaneous task of calculating pi to 2x10^12 digits. 
The value for the pi calculation was chosen because it takes a meaningful amount of time to complete (about 30 seconds). The amount of time it took to calculate pi is recorded for each CPU, and an average is calculated for the package. The userspace governor is selected and the minimum speed is set. Minimum speed is confirmed by sysfs data, with a failure occurring if any CPU is not at the requested speed. The same pi calculation is performed by every processor in the package and the results recorded. The ondemand governor is chosen, which throttles the CPU between minimum and maximum speeds depending on workload. Minimum speed is confirmed by sysfs data, with a failure occurring if any CPU is not at the requested speed. The same pi calculation is performed by every processor in the package and the results recorded. The performance governor is chosen, which forces the CPU to maximum speed at all times. Maximum speed is confirmed by sysfs data, with a failure occurring if any CPU is not at the requested speed. The same pi calculation is performed by every processor and the results recorded. The analysis is then performed on the three subsections. In steps one through eight, we obtain the pi calculation times at maximum and minimum CPU speeds. The difference in the time it takes to calculate pi at the two speeds should be proportional to the difference in CPU speed. For example, if a hypothetical test system had a max frequency of 2GHz and a min of 1GHz and it took the system 30 seconds to run the pi calculation at max speed, we would expect the system to take 60 seconds at min speed to calculate pi. We know that for various reasons perfect results will not be obtained, so we allow for a 10% margin of error (faster or slower than expected) on the results. In our hypothetical example, this means that the minimum speed run could take between 54 and 66 seconds and still be considered a passing test (90% of 60 = 54 and 110% of 60 = 66). In steps nine through eleven, we test the pi calculation time using the ondemand governor. This confirms that the system can quickly increase the CPU speed to the maximum when work is being done. We take the calculation time obtained in step eleven and compare it to the maximum speed calculation time we obtained back in step five. A passing test has those two values differing by no more than 10%. In steps twelve through fourteen, we test the pi calculation using the performance governor. This confirms that the system can hold the CPU at maximum frequency at all times. We take the pi calculation time obtained in step fourteen and compare it to the maximum speed calculation time we obtained back in step five. Again, a passing test has those two values differing by no more than 10%. An additional portion of the cpuscaling test runs when an Intel processor with the TurboBoost feature is detected by the presence of the ida CPU flag in /proc/cpuinfo . This test chooses one of the CPUs in each package, omitting CPU0 for housekeeping purposes, and measures the performance using the ondemand governor at maximum speed. It expects a result at least 5% faster than the earlier run, when all the cores in the package were being tested in parallel. Preparing for the test To prepare for the test, ensure that CPU frequency scaling is enabled in the BIOS and ensure that a CPU is installed that meets the requirements explained in the Policy Guide. Executing the test The cpuscaling test is non-interactive. 
Run the following command and then select the appropriate CPU scaling test name from the list that displays. Run time The cpuscaling test takes about 42 minutes for a 2013-era, single CPU, 6-core/12-thread 3.3GHz Intel-based workstation running Red Hat Enterprise Linux 6.4, AMD64 and Intel 64. Systems with higher core counts and more populated sockets will take longer. The required supportable test will add about a minute to the overall run time. A.3. Ethernet What the test covers The Ethernet test only appears when the speed of a network device is not recognized by the test suite. This may be due to an unplugged cable or some other fault is preventing the proper detection of the connection speed. Please exit the test suite, check your connection, and run the test suite again when the device is properly connected. If the problem persists, contact your Red Hat support representative for assistance. The example below shows a system with two gigabit Ethernet devices, eth0 and eth1. Device eth0 is properly connected, but eth1 is not plugged in. The output of the ethtool command shows the expected gigabit Ethernet speed of 1000Mb/s for eth0: But on eth1 the ethtool command shows an unknown speed, which would cause the Ethernet test to be planned. A.4. fv_core RHEL version supported The fv_core test is a wrapper that launches the FV guest and runs a core test on it. Starting with RHEL 9.4, this test is supported to run on ARM systems. RHEL version supported The first time you run any full-virtualization test, the test tool will need to obtain the FV guest files. The execution time of the test tool depends on the transfer speed of the FV guest files. For example, If FV guest files are located on the test server and you are using 1GbE or faster networking, it takes almost a minute or two to transfer approximately 300MB of guest files. If the files are retrieved from the CWE API, which occurs automatically when the guest files are not installed or found on the test server, the first runtime will depend on the transfer speed from the CWE API. When the guest files are available on the Host Under Test (HUT), they will be utilized for all the later runs of fv_* tests. Additional resources For more information about the test methodology and run times, see core . A.5. fv_memory The fv_memory test is a wrapper that launches the FV guest and runs a memory test on it. Starting with RHEL 9.4, this test is supported to run on ARM systems. RHEL version supported The first time you run any full-virtualization test, the test tool will need to obtain the FV guest files. The execution time of the test tool depends on the transfer speed of the FV guest files. For example, If FV guest files are located on the test server and you are using 1GbE or faster networking, it takes almost a minute or two to transfer approximately 300MB of guest files. If the files are retrieved from the CWE API, which occurs automatically when the guest files are not installed or found on the test server, the first runtime will depend on the transfer speed from the CWE API. When the guest files are available on the Host Under Test (HUT), they will be utilized for all the later runs of fv_* tests. Additional resources For more information about the test methodology and run times, see memory . A.6. kdump What the test covers The kdump test uses the kdump service to check that the system can capture a vmcore file after a crash, and that the captured file is valid. 
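Before running the kdump test it is worth confirming that the kdump service is actually operational on the HUT, since a crash without a reserved crash kernel will not produce a vmcore. The checks below are a separate, hedged sketch and not part of the certification suite.
# Verify that a crash kernel is reserved and that the kdump service is running
grep -o 'crashkernel=[^ ]*' /proc/cmdline || echo "no crashkernel= parameter set"
systemctl is-active kdump
kdumpctl status     # reports whether kdump is operational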
What the test does The test includes the following subtests: kdump with local : Using the kdump service, this subtest performs the following tasks: Crashes the host under test (HUT). Writes a vmcore file to the local /var/crash directory. Validates the vmcore file. kdump with NFS : Using the kdump service, this subtest performs the following tasks: Mounts the /var/rhcert/export filesystem on the HUT's /var/crash directory. This filesystem is shared over NFS from the test server. Crashes the HUT. Writes a vmcore file to the /var/crash directory. Validates the vmcore file. Preparing for the test Ensure that the HUT is connected to the test server before running the test. Ensure that the rhcertd process is running on the test server. The certification test suite prepares the NFS filesystem automatically. If the suite cannot set up the environment, the test fails. Executing the test Log in to the HUT. Run the kdump test: To use the rhcert-run command, perform the following steps: Run the rhcert-run command: # rhcert-run Select the kdump test. The test runs both subtests sequentially. To use the rhcert-cli command, choose whether to run both subtests sequentially, or specify a subtest: To run both subtests sequentially, use the following command: # rhcert-cli run --test=kdump --server=<test server's IP> To run the kdump with local subtest only, use the following command: # rhcert-cli run --test=kdump --device=local To run the kdump with NFS subtest only, use the following command: # rhcert-cli run --test=kdump --device=nfs --server=<test server's IP> Additionally, for the kdump with NFS test, execute the following command on the Test Server: # rhcertd start Wait for the HUT to restart after the crash. The kdump service shows several messages while it saves the vmcore file to the /var/crash directory. After the vmcore file is saved, the HUT restarts. Log in to the HUT after reboot, the rhcert suite will verify if the vmcore file exists, and if it is valid. If the file does not exist or is invalid, the test fails. If you are running the subtests sequentially, the kdump with NFS subtest starts after the validation of the vmcore file has completed. Run time The run time of the kdump test varies according to factors such as the amount of RAM in the HUT, the disc speed of the test server and the HUT, the network connection speed to the test server, and the time taken to reboot the HUT. For a 2013-era workstation with 8GB of RAM, a 7200 RPM 6Gb/s SATA drive, a gigabit Ethernet connection to the test server, and a 1.5 minute reboot time, a local kdump test can complete in about four minutes, including the reboot. The same 2013-era workstation can complete an NFS kdump test in about five minutes to a similarly equipped network test server. The supportable test will add about a minute to the overall run time. A.7. memory What the memory test covers The memory test is used to test system RAM. It does not test USB flash memory, SSD storage devices or any other type of RAM-based hardware. It tests main memory only. A memory per CPU core check has been added to the planning process to verify that the HUT meets the RHEL minimum requirement memory standards. It is a planning condition for several of the hardware certification tests, including the ones for memory, core, realtime, and all the full-virtualization tests. If the memory per CPU core check does not pass, the above-mentioned tests will not be planned automatically. However, these tests can be planned manually via CLI. 
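The memory-per-CPU-core planning check can be approximated by hand when a test refuses to plan. The sketch below simply compares installed RAM against the logical CPU count; the 1 GiB-per-core threshold used here is only an assumption for illustration - the authoritative minimums are defined by the RHEL requirements and the Policy Guide.
# Rough memory-per-core calculation from /proc/meminfo and nproc
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
cores=$(nproc)
echo "Installed RAM: $((mem_kib / 1024)) MiB across ${cores} logical CPUs"
echo "Per-core:      $((mem_kib / 1024 / cores)) MiB"
# Assumed threshold for illustration only
if [ $((mem_kib / 1024 / cores)) -ge 1024 ]; then
    echo "meets the assumed 1 GiB/core threshold"
else
    echo "below the assumed 1 GiB/core threshold"
fi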
RHEL version supported What the test does: The test uses the file /proc/meminfo to determine how much memory is installed in the system. Once it knows how much is installed, it checks to see if the system architecture is 32-bit or 64-bit. Then it determines if swap space is available or if there is no swap partition. The test runs either once or twice with slightly different settings depending on whether or not the system has a swap file: If swap is available, allocate more RAM to the memory test than is actually installed in the system. This forces the use of swap space during the run. Regardless of swap presence, allocate as much RAM as possible to the memory test while staying below the limit that would force out of memory (OOM) kills. This version of the test always runs. In both iterations of the memory test, malloc() is used to allocate RAM, the RAM is dirtied with a write of an arbitrary hex string (0xDEADBEEF), and a test is performed to ensure that 0xDEADBEEF is actually stored in RAM at the expected addresses. The test calls free() to release RAM when testing is complete. Multiple threads or multiple processes will be used to allocate the RAM depending on whether the process size is greater than or less than the amount of memory to be tested. Preparing for the test Install the correct amount of RAM in the system in accordance with the rules in the Policy Guide. Executing the test The memory test is non-interactive. Run the following command and then select the appropriate memory test name from the list that displays. Run time, bare-metal The memory test takes about 16 minutes to run on a 2013-era, single CPU, 6-core/12-thread 3.3GHz Intel-based workstation with 8GB of RAM running Red Hat Enterprise Linux, AMD64 and Intel 64. The test will take longer on systems with more RAM. The required supportable test will add about a minute to the overall run time. Run time, full-virt guest The fv_memory test takes slightly longer than the bare-metal version, about 18 minutes, to run in a guest. The added time is due to guest startup/shutdown activities and the required supportable test that runs in the guest. The required supportable test on the bare-metal system will add about a minute to the overall run time. The fv_memory test run times will not vary as widely from machine to machine as the bare-metal memory tests, as the amount of RAM assigned to our pre-built guest is always the same. There will be variations caused by the speed of the underlying real system, but the amount of RAM in use during the test won't change from machine to machine. Creating and Activating Swap for EC2 : Partners can perform the following steps to create and activate swap for EC2 A.8. network What the test covers The network test checks devices that transfer data over a TCP/IP network. The test can check multiple connection speeds and bandwidths of wired devices based on the corresponding test, as listed in the following table: Different tests under Network test Ethernet test Description 1GigEthernet The network test with added speed detection for 1 gigabit Ethernet connections. 10GigEthernet The network test with added speed detection for 10 gigabit Ethernet connections. 20GigEthernet The network test with added speed detection for 20 gigabit Ethernet connections. 25GigEthernet The network test with added speed detection for 25 gigabit Ethernet connections. 40GigEthernet The network test with added speed detection for 40 gigabit Ethernet connections. 
50GigEthernet The network test with added speed detection for 50 gigabit Ethernet connections. 100GigEthernet The network test with added speed detection for 100 gigabit Ethernet connections. 200GigEthernet The network test with added speed detection for 200 gigabit Ethernet connections. Ethernet If the Ethernet test is listed in your local test plan, it indicates that the test suite did not recognize the speed of that device. Check the connection before attempting to test that particular device. What the test does The test runs the following subtests to gather information about all the network devices: The bounce test on the interface is conducted using nmcli conn up and nmcli conn down commands. If the root partition is not NFS or iSCSI mounted, the bounce test is performed on the interface. Additionally, all other interfaces that will not be tested are shut down to ensure that traffic is routed through the interface being tested. If the root partition is NFS or iSCSI mounted, the bounce test on the interface responsible for the iSCSI or NFS connection is skipped, and all other interfaces, except for the one handling the iSCSI or NFS connection, will be shut down. A test file gets created at location /dev/urandom , and its size is adjusted with the speed of your NIC. TCP and UDP testing - The test uses iperf tool to: Test TCP latency between the test server and host under test. The test checks if the system runs into any OS timeouts and fails if it does. Test the bandwidth between the test server and the host under test. For wired devices, it is recommended that the speed is close to the theoretical maximum. Test UDP latency between the test server and host under test. The test checks if the system runs into any OS timeouts and fails if it does. File transfer testing - The test uses SCP to transfer a file from the host under test to the remote system or test server and then transfers it back to the host under test to check if the transfer works properly. ICMP (ping) test - The script causes a ping flood at the default packet size to ensure nothing in the system fails (the system should not restart or reset or anything else that indicates the inability to withstand a ping flood). 5000 packets are sent and a 100% success rate is expected. The test retries 5 times for an acceptable success rate. Finally, the test brings all interfaces back to their original state (active or inactive) when the test is executed. Preparing for testing wired devices You can test as many network devices as you want in each test run. Before you begin: Ensure to connect each device at its native (maximum) speed, or else the test fails. Ensure that the test server is up and running. Ensure that each network device has an IP address assigned either statically or dynamically via DHCP. Ensure that multiple firewall ports are open, for the iperf tool to run TCP and UDP subtests. Note By default, ports 52001-52101 are open. If you want to change the default ports, update the iperf-port and total-iperf-ports values in the /etc/rhcert.xml configuration file. Example: <server listener-port="8009" iperf-port="52001" total-iperf-ports="100"> If the firewall ports are not open, the test prompts to open the firewall ports during the test run. Partitionable networking The test checks if any of the network devices support partitioning, by checking the data transfer at full speed and the partitioning function. 
Running the test based on the performance of NIC: If NIC runs at full speed while partitioned then, configure a partition with NIC running at its native speed and Perform the network test in that configuration. If NIC does not run at full speed while partitioned then, run the test twice - first time, run it without partitioning to see the full-speed operation, and the second time, run it with partitioning enabled to see the partitioning function. Note Red Hat recommends selecting either 1Gb/s or 10Gb/s for your partitioned configuration so that it conforms to the existing network speed tests. Executing the test The network test is non-interactive. Run the following command and then select the appropriate network test name from the list that displays. Table A.1. Manually adding and running the test Speed Type Command to manually add Ethernet Test Command to Manually run Ethernet Test 1GigEthernet 10GigEthernet 20GigEthernet 25GigEthernet 40GigEthernet 50GigEthernet 100GigEthernet 200GigEthernet 400GigEthernet Replace <device name> and <test server IP addr> with the appropriate value. Run time The network test takes about 2 minutes to test each PCIe-based, gigabit, wired Ethernet card, and the required Supportable test adds about a minute to the overall run time. Additional resources For more information about the remaining test functionality, see Ethernet test . A.9. NetworkManageableCheck What the test covers The NetworkManageableCheck test runs for all the network interfaces available in the system. RHEL version supported RHEL 8 RHEL 9 What the test does The test comprises two subtests that perform the following tasks: Check the BIOS device name to confirm that the interface follows the terminology set by the firmware. Note BIOS device name validation runs only on x86 systems. Check if the Network Manager manages the interface, for evaluating current network management status. Executing the test The NetworkManageableCheck test is mandatory. It is planned and executed with a self-check and supportable test to ensure thorough examination and validation of network interfaces. Run time The test takes around 1 minute to complete. However, the duration of the test varies depending on the specifics of the system and the number of interfaces. A.10. profiler The profiler test collects the performance metric from the Host Under Test and determines whether the metrics are collected from the software or the hardware Performance Monitoring Unit (PMU) supported by the RHEL Kernel. If the metrics are hardware-based, the test further determines if the PMU includes per core counters only or includes per package counters also. The profiler test is divided into three tests, profiler_hardware_core , profiler_hardware_uncore , and profiler_software . A.10.1. profiler_hardware_core What the test covers The profiler_hardware_core test collects performance metrics using hardware-based per core counters by checking the cycle events. The core events measure the functions of a processor core, for example, the L2 cache. RHEL version supported RHEL 8 RHEL 9 What the test does The test is planned if core hardware event counters are found and locate the cpu*cycles files in the /sys/devices directory by running the find /sys/devices/* -type f -name 'cpu*cycles' command. The test executes multiple commands to accumulate the sample of 'cycle' events, checks if the 'cpu cycle' event was detected, and checks if the samples were collected. 
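The planning condition and the event collection described above for profiler_hardware_core can be reproduced manually with the same find command and the perf tool. This is only an illustrative sketch; the exact perf invocations used by the test are not listed in this guide.
# Planning condition: do core cycle counters exist in sysfs?
find /sys/devices/* -type f -name 'cpu*cycles'
# Collect a short system-wide sample of hardware cycle events,
# conceptually similar to what the subtest accumulates
perf stat -e cycles -a -- sleep 5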
Note This test is not intended to be exhaustive and, it does not test every possible core counter-event that a given processor may or may not have. Preparing for the test There are no special requirements to run this test. Executing the test The test is non-interactive. Run the following command and then select the appropriate profiler_hardware_core test name from the list that displays. Run time The test takes approximately 30 seconds. Any other mandatory or selected tests will add to the overall run time. A.10.2. profiler_hardware_uncore What the test covers The profiler_hardware_uncore test collects performance metrics using hardware-based package-wide counters. The uncore events measure the functions of a processor that are outside the core but are inside the package, for example, a memory controller. RHEL version supported RHEL 8 RHEL 9 What the test does The test is planned if uncore hardware event counters are found. The test passes if it finds any uncore events and collects statistics for any one event. The test fails if it finds uncore events but does not collect statistics as those events are not supported. The test executes multiple commands to collect the list of uncore events and the uncore events statistics. Note This test is not intended to be exhaustive and, it does not test every possible uncore counter-event that a given processor may or may not have. Preparing for the test There are no special requirements to run this test. Executing the test The test is non-interactive. Run the following command and then select the appropriate profiler_hardware_uncore test name from the list that displays. Run time The test takes approximately 30 seconds. Any other mandatory or selected tests will add to the overall run time. A.10.3. profiler_software What the test covers The profiler_software test collects performance metrics using software-based counters by checking the cpu_clock events. Software counters can be certified using this test. However, for customers with high-performance requirements, this test can be limiting. What the test does The test is planned if no core hardware event counters are found. The test executes multiple commands to accumulate the sample of cpu-clock events, checks if the cpu-clock event was detected, and checks if the samples were collected. Preparing for the test There are no special requirements to run this test. Executing the test The test is non-interactive. Run the following command and then select the appropriate profiler_software test name from the list that displays. Run time The test takes approximately 30 seconds. Any other mandatory or selected tests will add to the overall run time. A.11. PCIE_NVMe What the PCIe_NVMe test covers This test runs if the interface is NVMe and the device is connected through a PCIE connection. RHEL version supported RHEL 8 RHEL 9 What the PCIe_NVMe test does This test gets planned if the logical device host name string contains " nvme[0-9] " Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.12. 
M2_NVMe What the M2_NVMe test covers This test runs if the interface is NVMe and the device is connected through a M2 connection. RHEL version supported RHEL 8 RHEL 9 Manually adding and running the test To manually add and run the M2_NVMe test, use the following command: Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.13. U2_NVMe What the U2_NVMe test covers This test runs if the interface is NVMe and the device is connected through a U2 connection. RHEL version supported RHEL 8 RHEL 9 Manually adding and running the test To manually add and run the U2_NVMe test, use the following command: Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment Additional resources For more information about what the test does and preparing for the test see STORAGE . A.14. U3_NVMe What the U3_NVMe test covers This test runs if the interface is NVMe and the device is connected through a U3 connection. RHEL version supported RHEL 8 RHEL 9 Manually adding and running the test To manually add and run the U3_NVMe test, use the following command: Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.15. E3_NVMe What the E3_NVMe test covers This test runs if the interface is NVMe and the device is connected through a E3 connection. RHEL version supported RHEL 8 RHEL 9 Manually adding and running the test To manually add and run the E3_NVMe test, use the following command: Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.16. STORAGE What the storage test covers There are many different kinds of persistent on-line storage devices available in systems today. 
The STORAGE test is designed to test anything that reports an ID_TYPE of "disk" in the udev database. This includes IDE, SCSI, SATA, SAS, and SSD drives, PCIe SSD block storage devices, as well as SD media, xD media, MemoryStick and MMC cards. The test plan script reads through the udev database and looks for storage devices that meet the above criteria. When it finds one, it records the device and its parent and compares it to the parents of any other recorded devices. It does this to ensure that only devices with unique parents are tested. If the parent has not been seen before, the device is added to the test plan. This speeds up testing as only one device per controller will be tested, as per the Policy Guide. What the test does The STORAGE test performs the following actions on all storage devices with a unique parent: The script looks through the partition table to locate a swap partition that is not on an LVM or software RAID device. If found, it will deactivate it with swapoff and use that space for the test. If no swap is present, the system can still test the drive if it is completely blank (no partitions). Note that the swap device must be active in order for this to work (the test reads /proc/swaps to find the swap partitions) and that the swap partition must not be inside any kind of software-based container (no LVM or software RAID, but hardware RAID would work as it would be invisible to the system). The tool creates a filesystem on the device, either in a swap partition on the blank drive. The filesystem is mounted and the fio or dt command is used to test the device. The fio or dt command is an I/O test program and is a generic test tool capable of testing, reading, and writing to devices. Multiple sets of test patterns verify the functionality of storage devices. After the mounted filesystem test, the filesystem is unmounted and a dt test is performed against the block device, ignoring the file system. The dt test uses the "direct" parameter to handle this. Preparing for the test You should install all the drives and storage controllers that are listed on the official test plan. In the case of multiple storage options, as many as can fit into the system at one time can be tested in a single run, or each storage device can be installed individually and have its own run of the storage test. You can decide on the order of testing and number of controllers present for each test. Each logical drive attached to the system must contain a swap partition in addition to any other partitions, or be totally blank. This is to provide the test with a location to create a filesystem and run the tests. The use of swap partitions will lead to a much quicker test, as devices left blank are tested in their entirety. They will almost always be significantly larger than a swap partition placed on the drive. Note If testing an SD media card, use the fastest card you can obtain. While a Class 4 SD card may take 8 hours or more to run the test, a Class 10 or UHS 1/2 card can complete the test run in 30 minutes or less. When it comes to choosing storage devices for the official test plan, the rule that the review team operates by is "one test per code path". What we mean by that is that we want to see a storage test run using every driver that a controller can use. The scenario of multiple drivers for the same controller usually involves RAID storage of some type. It's common for storage controllers to use one driver when in regular disk mode and another when in RAID mode. 
Some even use multiple drivers depending on the RAID mode that they are in. The review team will analyze all storage hardware to determine the drivers that need to be used in order to fulfill all the testing requirements. That's why you may see the same storage device listed more than once in the official test plan. Complete information on storage device testing is available in the Policy Guide. Executing the test The storage test is non-interactive. Run the following command and then select the appropriate STORAGE test name from the list that displays. Run time, bare-metal The storage test takes approximately 22 minutes on a 6Gb/s SATA hard drive installed in a 2013-era workstation system. The same test takes approximately 3 minutes on a 6Gb/s SATA solid-state drive installed in a 2013-era workstation system. The required supportable test will add about a minute to the overall run time. Additional resources For more information about appropriate swap file sizing, see What is the recommended swap size for Red Hat platforms? . A.17. supportable What the test covers The supportable test gathers basic information about the host under test (HUT). Red Hat uses this information to verify that the system complies with the certification requisites. What the test does The test has several subtests that perform the following tasks: Confirm that the /proc/sys/kernel/tainted file contains a zero ( 0 ), which indicates that the kernel is not tainted. Confirm that package verification with the rpm -V command shows that no files have been modified. Confirm that the rpm -qa kernel command shows that the buildhost of the kernel package is a Red Hat server. Record the boot parameters from the /proc/cmdline file. Confirm that the`rpm -V redhat-certification` command shows that no modifications have been made to any of the certification test suite files. Confirm that all the modules shown by the lsmod command show up in a listing of the kernel files with the rpm -ql kernel command. Confirm that all modules are on the Kernel Application Binary Interface (kABI) stablelist . Confirm that the module vendor and buildhost are appropriate Red Hat entries. Confirm that the kernel is the GA kernel of the Red Hat minor release. The subtest tries to verify the kernel with data from the redhat-certification package. If the kernel is not present, the subtest attempts to verify the kernel by using the Internet connection. To verify the kernel by using the Internet connection, you must either configure the HUT's routing and DNS resolution to access the Internet or set the ftp_proxy=http://proxy.domain:80 environment variable. Check for any known hardware vulnerabilities reported by the kernel. The subtest reads the files in the /sys/devices/system/cpu/vulnerabilities/ directory and exits with a warning if the files contain the word "Vulnerable". Confirm if the system has any offline CPUs by checking the output of the lscpu command. Confirm if Simultaneous Multithreading (SMT) is available, enabled, and active in the system. Check if there is unmaintained hardware or drivers in systems running RHEL 8 or later. Unmaintained hardware and drivers are no longer tested or updated on a routine basis. Red Hat may fix serious issues, including security issues, but you cannot expect updates on any planned cadence. Replace or remove unmaintained hardware or drivers as soon as possible. Check if there is deprecated hardware or drivers in systems running RHEL 8 or later. 
Deprecated hardware and drivers are still tested and maintained, but they are planned to become unmaintained and eventually disabled in a future release. Replace or remove deprecated devices or hardware as soon as possible. Check if there is disabled hardware in systems running RHEL 8 or later. RHEL cannot use disabled hardware. Replace or remove the disabled hardware from your system before running the test again. Run the following checks on the software RPM packages: Check the RPM build host information to isolate non-Red Hat packages. The test will ask you to explain the reasons for including the non-Red Hat packages. Red Hat will review the reasons and approve or reject each package individually. Check that the installed RPM packages are from the Red Hat products available in the offering and have not been modified. Red Hat reviews verification failures in the rpm_verification_report.log file. You will need to reinstall the failed packages and rerun the test. Check the presence of both Red Hat and non-Red Hat firmware files in the system. It lists the non-Red Hat files, if present, and exits with REVIEW status. Check the page size of systems by getconf PAGESIZE command. After performing these tasks, the test gathers a sosreport and the output of the dmidecode command. Executing the test The rhcert tool runs the supportable test automatically as part of every run of the test suite. The supportable test runs before any other test. The output of the supportable test is required as part of the test suite logs. Red Hat will reject test logs that do not contain the output of the supportable test. Use the following command to run the test manually, if required: USD rhcert-cli run --test supportable Run time The supportable test takes around 1 minute on a 2013-era, single CPU, 3.3GHz, 6-core or 12-thread Intel workstation with 8 GB of RAM running Red Hat Enterprise Linux 6.4, AMD64, and Intel 64 that was installed using the Kickstart files in this guide. The time will vary depending on the speed of the machine and the number of RPM files that are installed. A.18. VIDEO What the test covers For RHEL 8, the VIDEO test checks for all removable or integrated video hardware on the motherboard. Devices are selected for testing by their PCI class ID. Specifically, the test checks for a device with a PCI class as Display Controller in the udev command output. For RHEL 9, the VIDEO test remains the same. However, for framebuffer graphic solutions, the test is planned after it identifies if the display kernel driver is in use as a framebuffer and if direct rendering is not supported using the glxinfo command. What the test does The test runs multiple subtests: Check Connections - Logs the xrandr command output. This subtest is optional, and its failure does not affect the overall test result. Set Configuration - Checks the necessary configuration prerequisites like setting the display depth, flags, and configurations for the subtest. The X Server Test - Starts another display server using the new configuration file and runs the glxgears , a lightweight MESA OpenGL demonstration program to check the performance. Log Module and Drivers - Runs xdpyinfo to determine the screen resolution and color depth. Along with that, the configuration file created at the start of the test should allow the system to run at the maximum resolution capability. Finally, the test uses grep to search through the /var/log/Xorg.0.log logfile to determine in-use modules and drivers. 
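The information the VIDEO test gathers can also be inspected by hand on a running desktop. The commands below are a hedged sketch of the same checks, assuming an Xorg session with a standard /var/log/Xorg.0.log location.
# Current screen resolution and colour depth, as logged by the test
xdpyinfo | grep -E 'dimensions|depth of root window'
# Modules and drivers that the X server loaded
grep -E 'LoadModule|Loading.*_drv' /var/log/Xorg.0.log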
Preparing for the test Ensure that the monitor and video card in the system can run at a resolution of 1024x768 with a color depth of 24 bits per pixel (bpp). Higher resolutions or color depths are also acceptable. Check the xrandr command output for 1024x768 at 24 bpp or higher to confirm. If you do not see all the resolutions that the card or monitor combination can generate, ensure to remove any KVM switches between the monitor and video card. Executing the test The test is non-interactive. Run the following command and then select the appropriate VIDEO test name from the list that displays. First, the test system screen will go blank, and then a series of test patterns from the x11perf test program will appear. When the test finishes, it will return to the desktop or the virtual terminal screen. Run time The test takes about 1 minute to complete. Any other mandatory or selected tests will add to the overall run time. A.19. VIDEO_DRM What the test covers The VIDEO_DRM test verifies the graphics controller, which utilizes a native DRM kernel driver with basic graphics support. The test will plan if: The display driver in use is identified as a kernel mode-setting driver. The display driver is not a framebuffer. The direct rendering is not supported as identified by the glxinfo command, and the OpenGL renderer string is llvmpipe . RHEL version supported RHEL 9 What the test does The test verifies the functionality of the graphics controller similar to the VIDEO . Preparing for the test Ensure that the monitor and video card in the system can run at a resolution of 1024x768 with a color depth of 24 bits per pixel (bpp). Higher resolutions or color depths are also acceptable. Check the xrandr command output for 1024x768 at 24 bpp or higher to confirm. If you do not see all the resolutions that the card or monitor combination can generate, ensure to remove any KVM switches between the monitor and video card. Executing the test The test is non-interactive. Run the following command and then select the appropriate VIDEO_DRM test name from the list that displays. First, the test system screen will go blank, and then a series of test patterns from the x11perf test program will appear. When the test finishes, it will return to the desktop or the virtual terminal screen. Run time The test takes about 1 minute to complete. Any other mandatory or selected tests will add to the overall run time. A.20. VIDEO_DRM_3D What the test covers The VIDEO_DRM_3D test verifies the graphics controller, which utilizes a native DRM kernel driver with accelerated graphics support. The test will plan if: The display driver in use is identified as a kernel mode-setting driver. The display driver is not a framebuffer. The direct rendering is supported as identified by the glxinfo command, and the OpenGL renderer string is not llvmpipe . The test uses Prime GPU Offloading technology to execute all the video test subtests. RHEL version supported RHEL 9 What the test does The test verifies the functionality of the graphics controller similar to the VIDEO test. In addition, the test runs the following subtests: Vulkaninfo test - Logs the vulkaninfo command output to collect the Vulkan information such as device properties of identified GPUs, Vulkan extensions supported by each GPU, recognized layers, supported image formats, and format properties. Glmark2 benchmarking test - Runs the glmark2 command to generate the score based on the OpenGL 2.0 & ES 2.0 benchmark set of tests and confirms the 3D capabilities. 
The subtest executes the utility two times with a different set of parameters, first with the Hardware renderer and later with the Software renderer. If the Hardware renderer command-run results in a better score than software, the test passes successfully, confirming the display controller has better 3D capabilities, otherwise fails. Preparing for the test Ensure that the monitor and video card in the system can run at a resolution of 1024x768 with a color depth of 24 bits per pixel (bpp). Higher resolutions or color depths are also acceptable. Check the xrandr command output for 1024x768 at 24 bpp or higher to confirm. If you do not see all the resolutions that the card or monitor combination can generate, ensure to remove any KVM switches between the monitor and video card. Executing the test The test is non-interactive. Run the following command and then select the appropriate VIDEO_DRM_3D test name from the list that displays. First, the test system screen will go blank, and then a series of test patterns from the x11perf test program will appear. When the test finishes, it will return to the desktop or the virtual terminal screen. Run time The test takes about 1 minute to complete. Any other mandatory or selected tests will add to the overall run time. A.21. Manually adding and running the tests On rare occasions, tests may fail to plan due to problems with hardware detection or other issues with the hardware, OS, or test scripts. If this happens you should get in touch with your Red Hat support contact for further assistance. They will likely ask you to open a support ticket for the issue, and then explain how to manually add a test to your local test plan using the rhcert-cli command on the HUT. Any modifications you make to the local test plan will be sent to the test server, so you can continue to use the web interface on the test server to run your tests. The command is run as follows: The options for the rhcert-cli command used here are: plan - Modify the test plan --add - Add an item to the test plan --test=<testname> - The test to be added. The test names are as follows: hwcert/kdump hwcert/network/Ethernet/100MegEthernet hwcert/network/Ethernet/1GigEthernet hwcert/network/Ethernet/10GigEthernet hwcert/network/Ethernet/40GigEthernet hwcert/network/NetworkManageableCheck hwcert/memory hwcert/core hwcert/cpuscaling hwcert/fvtest/fv_core hwcert/fvtest/fv_memory hwcert/profiler hwcert/profiler/profiler_hardware_core hwcert/profiler/profiler_hardware_uncore hwcert/profiler/profiler_software hwcert/storage hwcert/video hwcert/video/video_drm hwcert/video/video_drm_3d hwcert/supportable hwcert/storage/U2_NVME hwcert/storage/U3_NVME hwcert/storage/M2_NVME hwcert/storage/E3_NVME hwcert/storage/PCIE_NVME The other options are only needed if a device must be specified, like in the network and storage tests that need to be told which device to run on. There are various places you would need to look to determine the device name or UDI that would be used here. Support can help determine the proper name or UDI. Once found, you would use one of the following two options to specify the device: --device=<devicename> - The device that should be tested, identified by a device name such as "enp0s25" or "host0". --udi=<UDI> - The unique device ID of the device to be tested, identified by a UDI string. 
Run the rhcert-cli command by specifying the test name: for example: You can specify --device to run the specific device: for example: Note It is advisable to use rhcert-cli or rhcert-run independently and save the results. Mixing the use of both rhcert-cli and rhcert-run and saving the results together may result in the inability to process the results correctly.
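When determining the device name or UDI mentioned above, the usual starting points are sysfs and the udev database. The following is a hedged example of looking up a network interface and an NVMe namespace; enp0s25 and nvme0n1 are placeholder names, not values from your system.
# Network interface names as the kernel sees them
ls /sys/class/net
# udev properties for a specific interface (placeholder name)
udevadm info --query=property --path=/sys/class/net/enp0s25 | head
# Block devices, plus the udev type reported for an NVMe namespace (placeholder name)
lsblk -d -o NAME,MODEL,SIZE
udevadm info --query=property --name=/dev/nvme0n1 | grep -E 'ID_TYPE|DEVNAME'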
[ "rhcert-run", "rhcert-cli run --test=<test name>", "rhcert-run", "/sys/devices/system/cpu/cpu X /cpufreq", "cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies", "rhcert-run", "ethtool eth0 Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 2 Transceiver: internal Auto-negotiation: on MDI-X: on Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: yes", "ethtool eth1 Settings for eth1: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: Unknown! Duplex: Unknown! (255) Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: Unknown Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: no", "rhcert-run", "rhcert-cli run --test=kdump --server=<test server's IP>", "rhcert-cli run --test=kdump --device=local", "rhcert-cli run --test=kdump --device=nfs --server=<test server's IP>", "rhcertd start", "rhcert-run", "sudo dd if=/dev/zero of=/swapfile bs=1M count=8000 chmod 600 /swapfile mkswap /swapfile swapon /swapfile swapon -s edit file /etc/fstab and add the following line: /swapfile swap swap defaults 0 0 write file and quit/exit", "rhcert-run", "rhcert-cli plan --add --test 1GigEthernet --device <device name>", "rhcert-cli run --test 1GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 10GigEthernet --device <device name>", "rhcert-cli run --test 10GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 20GigEthernet --device <device name>", "rhcert-cli run --test 20GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 25GigEthernet --device <device name>", "rhcert-cli run --test 25GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 40GigEthernet --device <device name>", "rhcert-cli run --test 40GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 50GigEthernet --device <device name>", "rhcert-cli run --test 50GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 100GigEthernet --device <device name>", "rhcert-cli run --test 100GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 200GigEthernet --device <device name>", "rhcert-cli run --test 200GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 400GigEthernet --device <device name>", "rhcert-cli run --test 400GigEthernet --server <test server IP addr>", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-cli plan --add --test M2_NVMe --device nvme0", "rhcert-cli plan --add --test U2_NVMe --device nvme0", "rhcert-cli plan --add --test U3_NVMe --device nvme0", "rhcert-cli plan --add --test E3_NVMe --device nvme0", "rhcert-run", "rhcert-cli run --test supportable", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-cli plan --add --test=<testname> --device=<devicename> --udi-<udi>", "rhcert-cli run 
--test=<test_name>", "rhcert-cli run --test=audio", "rhcert-cli run --test=<test name> --device=<device name>", "rhcert-cli run --test=kdump --device=nfs" ]
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_workflow/assembly-Appendix_cloud-instance-wf-certification-request-run-tests
Assessing and Monitoring Security Vulnerabilities on RHEL Systems
Assessing and Monitoring Security Vulnerabilities on RHEL Systems. Red Hat Insights 1-latest. Understanding your Environmental Exposure to Potential Security Threats. Red Hat Customer Content Services.
[ "USE [master] GO CREATE LOGIN [assessmentLogin] with PASSWORD= N'<*PASSWORD*>' ALTER SERVER ROLE [sysadmin] ADD MEMBER [assessmentLogin] GO", "echo \"assessmentLogin\" > /var/opt/mssql/secrets/assessment echo \"<*PASSWORD*>\" >> /var/opt/mssql/secrets/assessment", "chmod 0600 /var/opt/mssql/secrets/assessment chown mssql:mssql /var/opt/mssql/secrets/assessment", "yum -y install powershell", "su mssql -c \"/usr/bin/pwsh -Command Install-Module SqlServer\"", "/bin/curl -LJ0 -o /opt/mssql/bin/runassessment.ps1 https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/manage/sql-assessment-api/RHEL/runassessment.ps1 chown mssql:mssql /opt/mssql/bin/runassessment.ps1 chmod 0700 /opt/mssql/bin/runassessment.ps1", "mkdir /var/opt/mssql/log/assessments/ chown mssql:mssql /var/opt/mssql/log/assessments/ chmod 0700 /var/opt/mssql/log/assessments/", "su mssql -c \"pwsh -File /opt/mssql/bin/runassessment.ps1\"", "insights-client", "cp mssql-runassessment.service /etc/systemd/system/ cp mssql-runassessment.timer /etc/systemd/system/ chmod 644 /etc/systemd/system/", "systemctl enable --now mssql-runassessment.timer", "insights-client --group=<name-you-choose>", "tags --- group: eastern-sap name: Jane Example contact: [email protected] Zone: eastern time zone Location: - gray_rack - basement Application: SAP", "tags --- group: eastern-sap location: Boston description: - RHEL8 - SAP key 4: value", "insights-client", "vi /etc/insights-client/tags.yaml", "cat /etc/insights-client/tags.yaml group: redhat location: Brisbane/Australia description: - RHEL8 - SAP security: strict network_performance: latency", "insights-client" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html-single/assessing_and_monitoring_security_vulnerabilities_on_rhel_systems/index
Chapter 30. Glossary
Chapter 30. Glossary Ad Hoc Ad hoc refers to using Ansible to perform a quick command, using /usr/bin/ansible, rather than the orchestration language, which is /usr/bin/ansible-playbook . An example of an ad hoc command might be rebooting 50 machines in your infrastructure. Anything you can do ad hoc can be accomplished by writing a Playbook. Playbooks can also glue lots of other operations together. Callback Plugin Refers to user-written code that can intercept results from Ansible and act on them. Some examples in the GitHub project perform custom logging, send email, or play sound effects. Control Groups Also known as ' cgroups ', a control group is a feature in the Linux kernel that enables resources to be grouped and allocated to run processes. In addition to assigning resources to processes, cgroups can also report use of resources by all processes running inside of the cgroup. Check Mode Refers to running Ansible with the --check option, which does not make any changes on the remote systems, but only outputs the changes that might occur if the command ran without this flag. This is analogous to so-called "dry run" modes in other systems. However, this does not take into account unexpected command failures or cascade effects (which is true of similar modes in other systems). Use Check mode to get an idea of what might happen, but it is not a substitute for a good staging environment. Container Groups Container Groups are a type of Instance Group that specify a configuration for provisioning a pod in a Kubernetes or OpenShift cluster where a job is run. These pods are provisioned on-demand and exist only for the duration of the playbook run. Credentials Authentication details that can be used by automation controller to launch jobs against machines, to synchronize with inventory sources, and to import project content from a version control system. For more information, see [Credentials]. Credential Plugin Python code that contains definitions for an external credential type, its metadata fields, and the code needed for interacting with a secret management system. Distributed Job A job that consists of a job template, an inventory, and slice size. When executed, a distributed job slices each inventory into a number of "slice size" chunks, which are then used to run smaller job slices. External Credential Type A managed credential type used for authenticating with a secret management system. Facts Facts are things that are discovered about remote nodes. While they can be used in playbooks and templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered when running plays by executing the internal setup module on the remote nodes. You never have to call the setup module explicitly: it just runs. It can be disabled to save time if it is not required. For the convenience of users who are switching from other configuration management systems, the fact module also pulls in facts from the ohai and facter tools if they are installed, which are fact libraries from Chef and Puppet, respectively. Forks Ansible and automation controller communicate with remote nodes in parallel. The level of parallelism can be set in several ways during the creation or editing of a Job Template, by passing --forks , or by editing the default in a configuration file. The default is a very conservative five forks, though if you have a lot of RAM, you can set this to a higher value, such as 50, for increased parallelism. 
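To make the Ad Hoc, Check Mode, and Forks entries above concrete, here is a minimal command-line sketch; the group name, command, and playbook name are hypothetical and not taken from this glossary:

```bash
# Ad hoc: a quick one-off command with /usr/bin/ansible, e.g. rebooting a group of machines
# ("webservers" is a hypothetical group; --forks raises parallelism above the default of 5)
ansible webservers -a "/sbin/reboot" --become --forks 10

# Check mode: report what a playbook would change without changing anything
# (site.yml is a hypothetical playbook name)
ansible-playbook site.yml --check
```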
Group A set of hosts in Ansible that can be addressed as a set, of which many can exist within a single Inventory. Group Vars The group_vars/ files are files that are stored in a directory with an inventory file, with an optional filename named after each group. This is a convenient place to put variables that are provided to a given group, especially complex data structures, so that these variables do not have to be embedded in the inventory file or playbook. Handlers Handlers are like regular tasks in an Ansible playbook (see Tasks ), but are only run if the Task contains a "notify" directive and also indicates that it changed something. For example, if a configuration file is changed then the task referencing the configuration file templating operation might notify a service restart handler. This means services can be bounced only if they need to be restarted. Handlers can be used for things other than service restarts, but service restarts are the most common use. Host A system managed by automation controller, which may include a physical, virtual, or cloud-based server, or other device (typically an operating system instance). Hosts are contained in an Inventory. Sometimes referred to as a "node". Host Specifier Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to a set of systems. This "hosts:" directive in each play is often called the hosts specifier. It can select one system, many systems, one or more groups, or hosts that are in one group and explicitly not in another. Instance Group A group that contains instances for use in a clustered environment. An instance group provides the ability to group instances based on policy. Inventory A collection of hosts against which Jobs can be launched. Inventory Script A program that looks up hosts, group membership for hosts, and variable information from an external resource, whether that be a SQL database, a CMDB solution, or LDAP. This concept was adapted from Puppet (where it is called an "External Nodes Classifier") and works in a similar way. Inventory Source Information about a cloud or other script to be merged into the current inventory group, resulting in the automatic population of Groups, Hosts, and variables about those groups and hosts. Job One of many background tasks launched by automation controller, this is usually the instantiation of a Job Template, such as the launch of an Ansible playbook. Other types of jobs include inventory imports, project synchronizations from source control, or administrative cleanup actions. Job Detail The history of running a particular job, including its output and success/failure status. Job Slice See Distributed Job . Job Template The combination of an Ansible playbook and the set of parameters required to launch it. For more information, see Job templates . JSON JSON is a text-based format for representing structured data based on JavaScript object syntax. Ansible and automation controller use JSON for return data from remote modules. This enables modules to be written in any language, not just Python. Mesh Describes a network comprising of nodes. Communication between nodes is established at the transport layer by protocols such as TCP, UDP or Unix sockets. See also, Node . Metadata Information for locating a secret in the external system once authenticated. The user provides this information when linking an external credential to a target credential field. 
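The Handlers and Notify entries above are easiest to see in a playbook; the following is a hypothetical sketch (the host group, file names, and service name are illustrative, not taken from the original text):

```yaml
# site.yml - a task notifies a handler, which runs only if the task reports a change
- hosts: webservers
  become: true
  tasks:
    - name: Deploy the web server configuration
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx        # fires only when this task changes something

  handlers:
    - name: Restart nginx          # runs once at the end of the play, even if notified many times
      service:
        name: nginx
        state: restarted
```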
Node A node corresponds to entries in the instance database model, or the /api/v2/instances/ endpoint, and is a machine participating in the cluster or mesh. The unified jobs API reports controller_node and execution_node fields. The execution node is where the job runs, and the controller node interfaces between the job and server functions. Node types: Control - Nodes that run persistent services, and delegate jobs to hybrid and execution nodes. Hybrid - Nodes that run persistent services and execute jobs. Hop - Used for relaying across the mesh only. Execution - Nodes that run jobs delivered from control nodes (jobs submitted from the user's Ansible automation). Notification Template An instance of a notification type (Email, Slack, Webhook, etc.) with a name, description, and a defined configuration. Notification A Notification, such as Email, Slack or a Webhook, has a name, description and configuration defined in a Notification template. For example, when a job fails, a notification is sent using the configuration defined by the notification template. Notify The act of a task registering a change event and informing a handler task that another action needs to be run at the end of the play. If a handler is notified by multiple tasks, it is still only run once. Handlers are run in the order they are listed, not in the order that they are notified. Organization A logical collection of Users, Teams, Projects, and Inventories. Organization is the highest level in the object hierarchy. Organization Administrator A user with the rights to modify the Organization's membership and settings, including making new users and projects within that organization. An organization administrator can also grant permissions to other users within the organization. Permissions The set of privileges assigned to Users and Teams that provide the ability to read, modify, and administer Projects, Inventories, and other objects. Plays A play is minimally a mapping between a set of hosts selected by a host specifier (usually chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define the role that those systems perform. A playbook is a list of plays. There can be one or many plays in a playbook. Playbook An Ansible playbook. For more information, see Ansible playbooks. Policy Policies dictate how instance groups behave and how jobs are executed. Project A logical collection of Ansible playbooks, represented in automation controller. Roles Roles are units of organization in Ansible and automation controller. Assigning a role to a group of hosts (or a set of groups, or host patterns, etc.) implies that they implement a specific behavior. A role can include applying variable values, tasks, and handlers, or a combination of these things. Because of the file structure associated with a role, roles become redistributable units that enable you to share behavior among playbooks, or with other users. Secret Management System A server or service for securely storing and controlling access to tokens, passwords, certificates, encryption keys, and other sensitive data. Schedule The calendar of dates and times for which a job should run automatically. Sliced Job See Distributed Job. Source Credential An external credential that is linked to the field of a target credential. Sudo Ansible does not require root logins and, since it is daemonless, does not require root level daemons (which can be a security concern in sensitive environments). 
Ansible can log in and perform many operations wrapped in a sudo command, and can work with both password-less and password-based sudo. Some operations that do not normally work with sudo (such as scp file transfer) can be achieved with Ansible's copy , template , and fetch modules while running in sudo mode. Superuser An administrator of the server who has permission to edit any object in the system, whether or not it is associated with any organization. Superusers can create organizations and other superusers. Survey Questions asked by a job template at job launch time, configurable on the job template. Target Credential A non-external credential with an input field that is linked to an external credential. Team A sub-division of an Organization with associated Users, Projects, Credentials, and Permissions. Teams provide a means to implement role-based access control schemes and delegate responsibilities across Organizations. User An operator with associated permissions and credentials. Webhook Webhooks enable communication and information sharing between applications. They are used to respond to commits pushed to SCMs and launch job templates or workflow templates. Workflow Job Template A set consisting of any combination of job templates, project syncs, and inventory syncs, linked together in order to execute them as a single unit. YAML A human-readable language that is often used for writing configuration files. Ansible and automation controller use YAML to define playbook configuration languages and also variable files. YAML has a minimum of syntax, is very clean, and is easy for people to skim. It is a good data format for configuration files and humans, but is also machine readable. YAML is popular in the dynamic language community and the format has libraries available for serialization in many languages. Examples include Python, Perl, or Ruby.
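To tie the Group, Group Vars, Inventory, and YAML entries together, here is a hypothetical sketch of an inventory and a matching group variables file; every name and value below is illustrative and not taken from this glossary:

```yaml
# inventory.yml - hosts arranged into groups that a play's "hosts:" line can select
all:
  children:
    webservers:
      hosts:
        web01.example.com:
        web02.example.com:
    dbservers:
      hosts:
        db01.example.com:
---
# group_vars/webservers.yml - variables applied to every host in the "webservers" group
http_port: 80
ntp_servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org
```

A play could then target hosts: webservers, or use a pattern such as "webservers:!dbservers" to select hosts that are in one group and explicitly not in another.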
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/assembly-controller-glossary
14.4. User-Defined Functions Support
14.4. User-Defined Functions Support In Teiid Designer, you can create, manage, and use User-Defined Functions (UDFs). These functions allow you to perform simple or complex Java operations on your data at runtime. This is accomplished by deploying your custom UDF jars on your server and creating a scalar function representation of your function method to use in your view transformation. In the VDB Editor, you have the option of including your UDF jars as part of the VDB artifact. If included in the VDB, the jars are automatically deployed to the server for you when the VDB is deployed. When a UDF model is added to a VDB, each scalar function is interrogated and its referenced UDF jar (if available) is added to the VDB as well, as shown in the UDF Jars tab in the editor.
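As a rough illustration of what the Java side of such a UDF jar might contain (the package, class, and method names below are hypothetical, and the exact method contract should be checked against the Teiid documentation rather than taken from this sketch), a scalar function is commonly backed by a public static method in the deployed jar:

```java
package org.example.udf; // hypothetical package name

/**
 * Hypothetical UDF implementation class. The scalar function defined in the
 * UDF model would reference this class and method, and the compiled jar
 * would be included in the VDB so that it deploys along with it.
 */
public class StringFunctions {

    // Masks all but the last four characters of a value, e.g. for use in a view transformation.
    public static String maskValue(String value) {
        if (value == null || value.length() <= 4) {
            return value;
        }
        return "****" + value.substring(value.length() - 4);
    }
}
```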
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/user-defined_functions_support
Chapter 3. Red Hat build of OpenJDK 11.0.16.1 release notes
Chapter 3. Red Hat build of OpenJDK 11.0.16.1 release notes Review the following release notes for an overview of the changes in the Red Hat build of OpenJDK 11.0.16.1 patch release: Fixed issue with the C2 JIT compiler The Red Hat build of OpenJDK 11.0.16.1 release fixes a regression in the C2 Just-In-Time (JIT) compiler, which caused the HotSpot JVM to crash unexpectedly. See JDK-8292396 (JDK Bug System). Advisories related to the Red Hat build of OpenJDK 11.0.16.1 release The following advisories have been issued about bug fixes and CVE fixes included in this release: RHBA-2022:6294-01 RHBA-2022:6349-01
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.16/openjdk-11-0-16-1-release-notes_openjdk